ERIC Educational Resources Information Center
Sathick, Javubar; Venkat, Jaya
2015-01-01
Mining social web data is a challenging task, and finding user interests for personalized and non-personalized recommendation systems is another important one. Knowledge sharing among web users has become crucial in determining the usage of web data and in personalizing content on various social websites according to each user's wishes. This paper aims to design a…
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; George, P.; Axelrad, P.; Mutschler, S.; Jones, B.
2016-09-01
The Space Surveillance Network (SSN) is tasked with the increasingly difficult mission of detecting, tracking, cataloging, and identifying artificial objects orbiting the Earth, including active and inactive satellites, spent rocket bodies, and fragmented debris. Much of the architecture and operations of the SSN are limited and outdated. Efforts are underway to modernize some elements of the systems. Even so, the ability to maintain the best current Space Situational Awareness (SSA) picture and identify emerging events in a timely fashion could be significantly improved by leveraging non-traditional sensor sites. Orbit Logic, the University of Colorado, and the University of Texas at Austin are developing an innovative architecture and operations concept to coordinate the tasking and observation information processing of non-traditional assets based on information-theoretic approaches. These confirmed tasking schedules and the resulting data can then be used to "inform" the SSN tasking process. The "Heimdall Web" system comprises core tasking optimization components and accompanying Web interfaces within a secure, split architecture that will for the first time allow non-traditional sensors to support SSA and improve SSN tasking. Heimdall Web application components score and prioritize space catalog objects based on covariance, priority, observability, expected information gain, and probability of detection, then coordinate an efficient sensor observation schedule for non-SSN sensors contributing to the overall SSA picture maintained by the Joint Space Operations Center (JSpOC). The Heimdall Web ops concept supports sensor participation levels of "Scheduled", "Tasked", and "Contributing". Scheduled and Tasked sensors are provided optimized observation schedules or object tracking lists from central algorithms, while Contributing sensors review and select from a list of "desired track objects". All sensors are "Web enabled" for tasking and feedback, supplying observation schedules, confirmed observations, and related data back to Heimdall Web to complete the feedback loop for the next scheduling iteration.
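The abstract does not spell out how these scoring factors are combined; a minimal sketch of one plausible weighted-sum prioritization, with invented field names and weights, might look like this in Python:

```python
# Illustrative multi-factor scoring; field names and weights are invented,
# not taken from the Heimdall Web paper.
def score_object(obj, weights=None):
    """Combine per-object factors into a single tasking priority."""
    w = weights or {"covariance": 0.30, "priority": 0.25, "observability": 0.20,
                    "info_gain": 0.15, "p_detect": 0.10}
    return sum(w[k] * obj[k] for k in w)

catalog = [
    {"id": 25544, "covariance": 0.9, "priority": 0.8, "observability": 0.7,
     "info_gain": 0.6, "p_detect": 0.95},
    {"id": 43013, "covariance": 0.4, "priority": 0.3, "observability": 0.9,
     "info_gain": 0.2, "p_detect": 0.80},
]
# Highest-scoring objects would be offered to the sensor scheduler first.
for obj in sorted(catalog, key=score_object, reverse=True):
    print(obj["id"], round(score_object(obj), 3))
```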
Distriblets: Java-Based Distributed Computing on the Web.
ERIC Educational Resources Information Center
Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris
1999-01-01
Describes a system, written in the Java programming language, for using the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)
Knowledge-driven enhancements for task composition in bioinformatics.
Sutherland, Karen; McLeod, Kenneth; Ferguson, Gus; Burger, Albert
2009-10-01
A key application area of semantic technologies is the fast-developing field of bioinformatics. Sealife was a project within this field with the aim of creating semantics-based web browsing capabilities for the Life Sciences. This includes meaningfully linking significant terms from the text of a web page to executable web services. It also involves the semantic mark-up of biological terms, linking them to biomedical ontologies, then discovering and executing services based on terms that interest the user. A system was produced which allows a user to identify terms of interest on a web page and subsequently connects these to a choice of web services which can make use of these inputs. Elements of Artificial Intelligence Planning build on this to present a choice of higher level goals, which can then be broken down to construct a workflow. An Argumentation System was implemented to evaluate the results produced by three different gene expression databases. An evaluation of these modules was carried out on users from a variety of backgrounds. Users with little knowledge of web services were able to achieve tasks that used several services in much less time than they would have taken to do this manually. The Argumentation System was also considered a useful resource and feedback was collected on the best way to present results. Overall the system represents a move forward in helping users to both construct workflows and analyse results by incorporating specific domain knowledge into the software. It also provides a mechanism by which web pages can be linked to web services. However, this work covers a specific domain and much co-ordinated effort is needed to make all web services available for use in such a way, i.e. the integration of underlying knowledge is a difficult but essential task.
Large area sheet task: Advanced Dendritic Web Growth Development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.
1981-01-01
A melt level control system was implemented to provide stepless silicon feed rates from zero up to rates exactly matching the silicon consumed during web growth. Bench tests of the unit were successfully completed and the system was mounted in a web furnace for operational verification. Tests of long-term temperature drift correction techniques were made; web width monitoring seems most appropriate for feedback purposes. A system to program the initiation of the web growth cycle was successfully tested. A low-cost temperature controller was tested which functions as well as units four times more expensive.
FOCIH: Form-Based Ontology Creation and Information Harvesting
NASA Astrophysics Data System (ADS)
Tao, Cui; Embley, David W.; Liddle, Stephen W.
Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data—which some see as Web 3.0—is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.
Web-based expert system for foundry pollution prevention
NASA Astrophysics Data System (ADS)
Moynihan, Gary P.
2004-02-01
Pollution prevention is a complex task. Many small foundries lack the in-house expertise to perform these tasks. Expert systems are a type of computer information system that incorporates artificial intelligence. As noted in the literature, they provide a means of automating specialized expertise. This approach may be further leveraged by implementing the expert system on the internet (or world-wide web). This will allow distribution of the expertise to a variety of geographically-dispersed foundries. The purpose of this research is to develop a prototype web-based expert system to support pollution prevention for the foundry industry. The prototype system identifies potential emissions for a specified process, and also provides recommendations for the prevention of these contaminants. The system is viewed as an initial step toward assisting the foundry industry in better meeting government pollution regulations, as well as improving operating efficiencies within these companies.
Nondestructive web thickness measurement of micro-drills with an integrated laser inspection system
NASA Astrophysics Data System (ADS)
Chuang, Shui-Fa; Chen, Yen-Chung; Chang, Wen-Tung; Lin, Ching-Chih; Tarng, Yeong-Shin
2010-09-01
Nowadays, the electric and semiconductor industries use numerous micro-drills to machine micro-holes in printed circuit boards. The measurement of the web thickness of micro-drills, a key parameter of micro-drill geometry influencing drill rigidity and chip-removal ability, is quite important for quality control. Traditionally, an inefficient, destructive measuring method has been adopted by inspectors. To improve the quality and efficiency of web thickness measurement, a nondestructive measuring method is required. In this paper, a nondestructive measuring principle for the web thickness of micro-drills, based on the laser micro-gauge (LMG) and laser confocal displacement meter (LCDM) techniques, is introduced. An integrated laser inspection system, mainly consisting of an LMG, an LCDM, and a two-axis-driven micro-drill fixture device, was developed. Experiments inspecting the web thickness of micro-drill samples with a nominal diameter of 0.25 mm were conducted to test the feasibility of the developed laser inspection system. The experimental results showed that the web thickness measurement could achieve an estimated repeatability of ±1.6 μm and a worst-case repeatability of ±7.5 μm. The developed laser inspection system, combined with the nondestructive measuring principle, was able to undertake the web thickness measuring tasks for certain micro-drills.
The BioExtract Server: a web-based bioinformatic workflow platform
Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.
2011-01-01
The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552
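As a rough illustration of the record-and-replay idea behind BioExtract workflows (not the actual implementation), a recorded task list can be re-executed with modified inputs; all class and function names below are invented:

```python
# Invented names; illustrates record-once, replay-with-new-inputs.
class Workflow:
    def __init__(self, name):
        self.name = name
        self.steps = []                  # recorded (tool, params) pairs

    def record(self, tool, **params):
        self.steps.append((tool, params))
        return tool(**params)            # execute while recording

    def replay(self, overrides=None):
        """Re-execute recorded steps, optionally with modified parameters."""
        out = []
        for tool, params in self.steps:
            merged = {**params, **(overrides or {}).get(tool.__name__, {})}
            out.append(tool(**merged))
        return out

def query_uniprot(accession):            # stand-in for a real data source
    return f"record for {accession}"

wf = Workflow("demo")
wf.record(query_uniprot, accession="P69905")
print(wf.replay(overrides={"query_uniprot": {"accession": "P68871"}}))
```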
Patterns of usage for a Web-based clinical information system.
Chen, Elizabeth S; Cimino, James J
2004-01-01
Understanding how clinicians are using clinical information systems to assist with their everyday tasks is valuable to the system design and development process. Developers of such systems are interested in monitoring usage in order to make enhancements. System log files are rich resources for gaining knowledge about how the system is being used. We have analyzed the log files of our Web-based clinical information system (WebCIS) to obtain various usage statistics including which WebCIS features are frequently being used. We have also identified usage patterns, which convey how the user is traversing the system. We present our method and these results as well as describe how the results can be used to customize menus, shortcut lists, and patient reports in WebCIS and similar systems.
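A minimal sketch of this kind of log analysis, assuming a hypothetical tab-separated log of (timestamp, user, feature) events rather than the actual WebCIS log format:

```python
# Counts feature usage and feature-to-feature navigation patterns.
from collections import Counter

def usage_stats(lines):
    feature_counts = Counter()
    transitions = Counter()          # feature-to-feature navigation pairs
    last_by_user = {}
    for line in lines:
        ts, user, feature = line.rstrip("\n").split("\t")
        feature_counts[feature] += 1
        if user in last_by_user:
            transitions[(last_by_user[user], feature)] += 1
        last_by_user[user] = feature
    return feature_counts, transitions

log = ["2004-01-05T09:12\tdr_smith\tlab_results",
       "2004-01-05T09:13\tdr_smith\tmedications",
       "2004-01-05T09:14\tdr_jones\tlab_results"]
counts, paths = usage_stats(log)
print(counts.most_common(3))   # most used features
print(paths.most_common(3))    # most common navigation patterns
```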
Studying Different Tasks of Implicit Learning across Multiple Test Sessions Conducted on the Web
Sævland, Werner; Norman, Elisabeth
2016-01-01
Implicit learning is usually studied through individual performance on a single task, with the most common tasks being the Serial Reaction Time (SRT) task, the Dynamic System Control (DSC) task, and Artificial Grammar Learning (AGL). Few attempts have been made to compare performance across different implicit learning tasks within the same study. The current study was designed to explore the relationship between performance on the DSC Sugar factory task and the Alternating Serial Reaction Time (ASRT) task. We also addressed another limitation of traditional implicit learning experiments, namely that implicit learning is usually studied in laboratory settings over a restricted time span lasting for less than an hour. In everyday situations, implicit learning is assumed to involve a gradual accumulation of knowledge across several learning episodes over a longer time span. One way to increase the ecological validity of implicit learning experiments could be to present the learning material repeatedly across shorter test sessions. This can most easily be done by using a web-based setup in which participants can access the material from home. We therefore created an online web-based system for measuring implicit learning that could be administered in either single or multiple sessions. Participants (n = 66) were assigned to either a single session or a multiple session condition. Learning occurred on both tasks, and awareness measures suggested that acquired knowledge was not fully conscious on either of the tasks. Learning and the degree of conscious awareness of the learned regularities were compared across conditions and tasks. On the DSC task, performance was not affected by whether learning had taken place in one or over multiple sessions. On the ASRT task, RT improvement across blocks was larger in the multiple-session condition. Learning in the two tasks was not related. PMID:27375512
An investigation of multitasking information behavior and the influence of working memory and flow
NASA Astrophysics Data System (ADS)
Alexopoulou, Peggy; Hepworth, Mark; Morris, Anne
2015-02-01
This study explored the multitasking information behaviour of Web users and how it is influenced by working memory, flow, and the Personal, Artefact and Task characteristics described in the PAT model. The research was exploratory, using a pragmatic, mixed-method approach. Thirty university students participated: 10 psychologists, 10 accountants, and 10 mechanical engineers. The data collection tools used were: pre- and post-questionnaires, a working memory test, a flow state scale test, audio-visual data, web search logs, think-aloud data, observation, and the critical decision method. All participants searched the Web for information on four topics: two for which they had prior knowledge and two without. Perception of task complexity was found to be related to working memory. People with low working memory reported a significant increase in task complexity after completing information searching tasks for which they had no prior knowledge; this was not the case for tasks with prior knowledge. Regarding flow and task complexity, the results confirmed the suggestion of the PAT model (Finneran and Zhang, 2003) that a complex task can lead to anxiety and low flow levels as well as to perceived challenge and high flow levels. However, the results did not confirm the model's suggestion regarding the characteristics of web search systems, especially perceived vividness. All participants experienced high vividness, whereas according to the PAT model only people with high flow should experience high levels of vividness. Flow affected the degree of change in the participants' knowledge: people with high flow gained more knowledge on tasks without prior knowledge than people with low flow. Furthermore, accountants felt that tasks without prior knowledge were less complex at the end of the web seeking procedure than psychologists and mechanical engineers did. Finally, the three disciplines appeared to differ in multitasking information behaviour characteristics such as queries, web search sessions, and opened tabs/windows.
Research on SaaS and Web Service Based Order Tracking
NASA Astrophysics Data System (ADS)
Jiang, Jianhua; Sheng, Buyun; Gong, Lixiong; Yang, Mingzhong
To solve the problem of order tracking across enterprises in a Dynamic Virtual Enterprise (DVE), a SaaS- and web-service-based order tracking solution was designed by analyzing the order management process in a DVE. To implement the system, a SaaS-based architecture for managing data on the manufacturing states of order tasks was constructed, and an encapsulation method for transforming an application system into a web service was investigated. The process of order tracking in the system is then presented. Finally, the feasibility of the approach was verified by the development of a prototype system.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is all about robots: robots can perform tasks where humans cannot, and they have huge applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times, where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering and can do tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this "robot vision" helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web interface to control the robot to move left, right, forward, and backward while streaming video. As we move to the smart environment, or IoT (Internet of Things), of smart devices, the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B board acts as the heart of the robot; the motors and a Raspberry Pi camera for surveillance are connected to the Raspberry Pi.
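A minimal sketch of such a web-controlled robot, assuming Flask and RPi.GPIO on the Raspberry Pi; the GPIO pin assignments and route names are illustrative, not those of the paper's system:

```python
# Minimal web control of motor driver pins; wiring below is assumed.
from flask import Flask
import RPi.GPIO as GPIO

PINS = {"left": 17, "right": 18, "front": 22, "back": 23}  # assumed wiring

app = Flask(__name__)
GPIO.setmode(GPIO.BCM)
for pin in PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

@app.route("/move/<direction>")
def move(direction):
    if direction not in PINS:
        return "unknown direction", 404
    for name, pin in PINS.items():          # drive one pin, stop the rest
        GPIO.output(pin, GPIO.HIGH if name == direction else GPIO.LOW)
    return f"moving {direction}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)      # reachable from a phone browser
```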
Brain-controlled applications using dynamic P300 speller matrices.
Halder, Sebastian; Pinegger, Andreas; Käthner, Ivo; Wriessnegger, Selina C; Faller, Josef; Pires Antunes, João B; Müller-Putz, Gernot R; Kübler, Andrea
2015-01-01
Access to the world wide web and multimedia content is an important aspect of life. We present a web browser and a multimedia user interface adapted for control with a brain-computer interface (BCI), which can be used by severely motor-impaired persons. The web browser dynamically determines the most efficient P300 BCI matrix size for selecting the links on the current website. This enables control of the web browser with fewer commands and smaller matrices. The multimedia player was based on existing software. Both applications were evaluated with a sample of ten healthy participants and three end-users. All participants used a visual P300 BCI with face stimuli for control. The healthy participants completed the multimedia player task with 90% accuracy and the web browsing task with 85% accuracy. The end-users completed the tasks with 62% and 58% accuracy. All healthy participants and two out of three end-users reported that they felt in control of the system. In this study we presented a multimedia application and an efficient web browser implemented for control with a BCI. Both applications provide access to important areas of modern information retrieval and entertainment. Copyright © 2014 Elsevier B.V. All rights reserved.
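The abstract does not give the exact matrix-sizing rule; one plausible criterion is to cover all selectable links while minimizing the number of row/column flash groups per selection sequence, sketched here:

```python
# One plausible sizing rule (assumed, not the paper's): cover n links
# while minimizing flash groups (rows + cols) per stimulation sequence.
def best_matrix(n_links):
    best = None
    for rows in range(1, n_links + 1):
        cols = -(-n_links // rows)       # ceiling division
        cost = rows + cols               # flash groups per sequence
        if best is None or cost < best[0]:
            best = (cost, rows, cols)
    return best[1], best[2]

print(best_matrix(7))   # -> (2, 4): 7 links fit a 2x4 grid with 6 flash groups
```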
Web mapping system for complex processing and visualization of environmental geospatial datasets
NASA Astrophysics Data System (ADS)
Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor
2016-04-01
Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, big dataset volume, complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services as well as client applications turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and displaying map legends and corresponding metadata information. It should be noted that modern web mapping systems as integrated geoportal applications are developed based on the SOA and might be considered as complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive is presented. There are three basic tiers of the GIS web client:
1. A tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
2. A tier of JavaScript objects implementing methods handling: NetCDF metadata; the task XML object for configuring user calculations and input and output formats; and OGC WMS/WFS cartographical services.
3. A graphical user interface (GUI) tier of JavaScript objects realizing the web application business logic.
The metadata tier consists of a number of JSON objects containing technical information describing geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects, implementing methods for handling geospatial metadata, the task XML object, and WMS/WFS cartographical services, interconnects the metadata and GUI tiers. The methods include such procedures as JSON metadata downloading and update, launching and tracking of calculation tasks running on remote servers, and working with WMS/WFS cartographical services, including: obtaining the list of available layers, visualizing layers on the map, and exporting layers in graphical (PNG, JPG, GeoTIFF), vector (KML, GML, Shape) and digital (NetCDF) formats. The graphical user interface tier is based on a bundle of JavaScript libraries (OpenLayers, GeoExt and ExtJS) and represents a set of software components implementing the web mapping application business logic (complex menus, toolbars, wizards, event handlers, etc.). The GUI provides two basic capabilities for the end user: configuring the task XML object and visualizing cartographical information. The web interface developed is similar to the interfaces of such popular desktop GIS applications as uDig, QuantumGIS, etc. The web mapping system developed has shown its effectiveness in solving real climate change research problems and disseminating investigation results in cartographical form. The work is supported by SB RAS Basic Program Projects VIII.80.2.1 and IV.38.1.7.
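As an illustration of the WMS interaction described (listing available layers and exporting one as PNG), here is a client-side sketch using the OWSLib library; the service URL and bounding box are placeholders:

```python
# Hedged sketch of a WMS client round-trip with OWSLib; URL is a placeholder.
from owslib.wms import WebMapService

wms = WebMapService("http://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents))                    # available layers

img = wms.getmap(layers=[list(wms.contents)[0]],
                 srs="EPSG:4326",
                 bbox=(60.0, 50.0, 90.0, 70.0),   # lon/lat extent (assumed)
                 size=(600, 400),
                 format="image/png")
with open("layer.png", "wb") as f:
    f.write(img.read())
```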
Creating an Internal Content Management System
ERIC Educational Resources Information Center
Sennema, Greg
2004-01-01
In this article, the author talks about an internal content management system that they have created at Calvin College. It is a hybrid of CMS and intranet that organizes Web site content and a variety of internal tools to help librarians complete their daily tasks. Hobbes is a Web-based tool that uses Common Gateway Interface (CGI) scripts written…
ERIC Educational Resources Information Center
She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua
2012-01-01
This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…
78 FR 26423 - Railroad Safety Advisory Committee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... Engineering and System Safety Task Forces. This agenda is subject to change, including the possible addition.... See the RSAC Web site for details on prior RSAC activities and pending tasks at: http://rsac.fra.dot...
ERIC Educational Resources Information Center
Shee, Daniel Y.; Wang, Yi-Shun
2008-01-01
The web-based e-learning system (WELS) has emerged as a new means of skill training and knowledge acquisition, encouraging both academia and industry to invest resources in the adoption of this system. Traditionally, most pre- and post-adoption tasks related to evaluation are carried out from the viewpoints of technology. Since users have been…
Query-Structure Based Web Page Indexing
2012-11-01
the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an...the ad-hoc and diversity task.
1 INTRODUCTION
The rapid growth and massive quantities of data on the Internet have increased the importance and...complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their
NASA Technical Reports Server (NTRS)
Gawadiak, Yuri; Wong, Alan; Maluf, David; Bell, David; Gurram, Mohana; Tran, Khai Peter; Hsu, Jennifer; Yagi, Kenji; Patel, Hemil
2007-01-01
The Program Management Tool (PMT) is a comprehensive, Web-enabled business intelligence software tool for assisting program and project managers within NASA enterprises in gathering, comprehending, and disseminating information on the progress of their programs and projects. The PMT provides planning and management support for implementing NASA programmatic and project management processes and requirements. It provides an online environment for program and line management to develop, communicate, and manage their programs, projects, and tasks in a comprehensive tool suite. The information managed by use of the PMT can include monthly reports as well as data on goals, deliverables, milestones, business processes, personnel, task plans, and budgetary allocations. The PMT provides an intuitive and enhanced Web interface to automate the tedious process of gathering and sharing monthly progress reports, task plans, financial data, and other information on project resources based on technical, schedule, budget, and management criteria and merits. The PMT is consistent with the latest Web standards and software practices, including the use of Extensible Markup Language (XML) for exchanging data and the WebDAV (Web Distributed Authoring and Versioning) protocol for collaborative management of documents. The PMT provides graphical displays of resource allocations in the form of bar and pie charts using Microsoft Excel Visual Basic for Applications (VBA) libraries. The PMT has an extensible architecture that enables integration of the PMT with other strategic-information software systems, including, for example, the Erasmus reporting system, now part of the NASA Integrated Enterprise Management Program (IEMP) tool suite, at NASA Marshall Space Flight Center (MSFC). The PMT data architecture provides automated and extensive software interfaces and reports to various strategic information systems to eliminate duplicative human entries and minimize data integrity issues among various NASA systems that impact schedules and planning.
Using task analysis to improve the requirements elicitation in health information system.
Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa
2007-01-01
This paper describes the application of task analysis within the design process of a Web-based information system for managing clinical information in hemophilia care, in order to improve the requirements elicitation and, consequently, to validate the domain model obtained in a previous phase of the design process (system analysis). The use of task analysis in this case proved to be a practical and efficient way to improve the requirements engineering process by involving users in the design process.
Marine Air Ground Task Force Distribution In The Battlespace
2016-09-01
benefit of this research is a proposed systemic structure with an associated web application that provides the MAGTF commander with critical information for supporting operations. ...web analytics in order to support the decision making process. The potential benefit of this research is a methodology with associated application
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.
1982-01-01
The thermal stress model was used to generate the design of a low stress lid and shield configuration, which was fabricated and tested experimentally. In preliminary tests, the New Experimental Web Growth Facility performed as designed, producing web on the first run. These experiments suggested desirable design modifications to the melt level sensing system to further improve its performance, and these are being implemented.
Bioinformatics workflows and web services in systems biology made easy for experimentalists.
Jimenez, Rafael C; Corpas, Manuel
2013-01-01
Workflows are useful to perform data analysis and integration in systems biology. Workflow management systems can help users create workflows without any previous knowledge of programming and web services. However, the computational skills required to build such workflows are usually above the level most biological experimentalists are comfortable with. In this chapter we introduce workflow management systems that reuse existing workflows instead of creating them, making it easier for experimentalists to perform computational tasks.
Lehto, Tuomas; Oinas-Kukkonen, Harri
2011-07-22
In the past decade, the use of technologies to persuade, motivate, and activate individuals' health behavior change has been a quickly expanding field of research. The use of the Web for delivering interventions has been especially relevant. Current research tends to reveal little about the persuasive features and mechanisms embedded in Web-based interventions targeting health behavior change. The purpose of this systematic review was to extract and analyze persuasive system features in Web-based interventions for substance use by applying the persuasive systems design (PSD) model. In more detail, the main objective was to provide an overview of the persuasive features within current Web-based interventions for substance use. We conducted electronic literature searches in various databases to identify randomized controlled trials of Web-based interventions for substance use published January 1, 2004, through December 31, 2009, in English. We extracted and analyzed persuasive system features of the included Web-based interventions using interpretive categorization. The primary task support components were utilized and reported relatively widely in the reviewed studies. Reduction, self-monitoring, simulation, and personalization seem to be the most used features to support accomplishing user's primary task. This is an encouraging finding since reduction and self-monitoring can be considered key elements for supporting users to carry out their primary tasks. The utilization of tailoring was at a surprisingly low level. The lack of tailoring may imply that the interventions are targeted for too broad an audience. Leveraging reminders was the most common way to enhance the user-system dialogue. Credibility issues are crucial in website engagement as users will bind with sites they perceive credible and navigate away from those they do not find credible. Based on the textual descriptions of the interventions, we cautiously suggest that most of them were credible. The prevalence of social support in the reviewed interventions was encouraging. Understanding the persuasive elements of systems supporting behavior change is important. This may help users to engage and keep motivated in their endeavors. Further research is needed to increase our understanding of how and under what conditions specific persuasive features (either in isolation or collectively) lead to positive health outcomes in Web-based health behavior change interventions across diverse health contexts and populations.
Designing learning management system interoperability in semantic web
NASA Astrophysics Data System (ADS)
Anistyasari, Y.; Sarno, R.; Rochmawati, N.
2018-01-01
The extensive adoption of learning management systems (LMS) has set the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in an LMS. The purpose of this study is to design learning management system interoperability in the semantic web, which has not yet been investigated deeply. Moodle is utilized to design the interoperability: several database tables of Moodle are enhanced and some features are added. Semantic web interoperability is provided by exploiting an ontology of the content materials. The ontology is further utilized as a search tool to match users' queries with available courses. It is concluded that LMS interoperability in the Semantic Web is feasible.
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
E-Learning System Overview Based on Semantic Web
ERIC Educational Resources Information Center
Alsultanny, Yas A.
2006-01-01
The challenge of the semantic web is the provision of distributed information with well-defined meaning, understandable to different parties. e-Learning is efficient, task-relevant, just-in-time learning grown from the learning requirements of the new, dynamically changing, distributed business world. In this paper we design an e-Learning system…
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.
1983-01-01
Modeling used in the development of low stress configurations for wide web growth is presented. Parametric sensitivity was studied to identify design features which can be used for dynamic trimming of the furnace element. Temperature measurements of experimental growth behavior led to modifications of the growth system to improve lateral temperature distributions.
Sensor Webs as Virtual Data Systems for Earth Science
NASA Astrophysics Data System (ADS)
Moe, K. L.; Sherwood, R.
2008-05-01
The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.
Designing Effective Web Forms for Older Web Users
ERIC Educational Resources Information Center
Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin
2012-01-01
This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…
Evaluation of expert system application based on usability aspects
NASA Astrophysics Data System (ADS)
Munaiseche, C. P. C.; Liando, O. E. S.
2016-04-01
Usability is usually defined as the degree of human acceptance of a product or system, based on understanding an interface and reacting correctly to it. The performance of a web application is influenced by the quality of its interface in supporting the information transfer process. Ideally, before expert system applications are installed in the operational environment, they should first be evaluated by usability testing. This research aimed to measure the usability of an expert system application using tasks as the interaction medium. The study uses an expert system application for diagnosing skin disease in humans and a questionnaire method that utilizes tasks as the interaction medium for measuring usability. Certain tasks were executed by the participants to observe the usability of the application. The usability aspects observed were learnability, efficiency, memorability, errors, and satisfaction, with each questionnaire question representing one aspect of usability. The results present the usability value for each aspect; the overall average across the five usability aspects was 4.28, indicating that the tested expert system application is in the "excellent" range of usability, so the application can be deployed for operational use. The main contribution of the study is that it is a first step in using a task model in usability evaluation for expert system application software.
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL); the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structuring Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
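A small sketch of querying such semantically enriched form data with SPARQL, here via Python's rdflib; the file name, namespace, and properties are hypothetical:

```python
# Query "semantically enriched" form data; vocabulary below is invented.
from rdflib import Graph

g = Graph()
g.parse("assessment_data.ttl", format="turtle")

results = g.query("""
    PREFIX ex: <http://example.org/assessment#>
    SELECT ?patient ?score
    WHERE {
        ?form ex:subject ?patient ;
              ex:motorFunctionScore ?score .
        FILTER (?score < 3)
    }
""")
for row in results:
    print(row.patient, row.score)
```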
Video control system for a drilling in furniture workpiece
NASA Astrophysics Data System (ADS)
Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.
2018-05-01
During the last 5 years, Russian industry has been becoming robotic, and scientific groups have accordingly received new tasks. One of the new tasks is machine vision systems, which should solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price prohibitive for regional small businesses. In this article, we describe the principle and algorithm of a cheap video control system that uses web cameras and a notebook or desktop computer as the computing unit.
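One plausible way to implement the camera-based check (not necessarily the authors' algorithm) is to detect the drilled hole as a circle with OpenCV and compare its position against the target coordinates:

```python
# Assumed approach: Hough circle detection on a webcam frame; target
# position and tolerance are illustrative values.
import cv2
import numpy as np

frame = cv2.imread("workpiece.jpg", cv2.IMREAD_GRAYSCALE)  # captured frame
frame = cv2.medianBlur(frame, 5)
circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=40)

TARGET, TOL = (320, 240), 10          # expected hole centre (px), tolerance
if circles is None:
    print("no hole detected")
else:
    x, y, r = np.round(circles[0][0]).astype(int)
    ok = abs(x - TARGET[0]) <= TOL and abs(y - TARGET[1]) <= TOL
    print(f"hole at ({x},{y}), radius {r}: {'OK' if ok else 'MISPLACED'}")
```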
SUMO: operation and maintenance management web tool for astronomical observatories
NASA Astrophysics Data System (ADS)
Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María. Luisa
2014-08-01
SUMO is an Operation and Maintenance Management web tool, which allows managing the operation and maintenance activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: information repository, assets and stock control, task scheduling, an archive of executed tasks, configuration and anomaly control and notification, and user management. The information needed to operate and maintain the system must be initially stored in the tool database. SUMO shall automatically schedule the periodic tasks and facilitate the searching and programming of the non-periodic tasks. Task planning can be visualized in different formats and dynamically edited to be adjusted to the available resources, anomalies, dates and other constraints that can arise during daily operation. SUMO shall provide warnings notifying users of potential conflicts related to the required personnel availability or the spare stock for the scheduled tasks. To conclude, SUMO has been designed as a tool to help in the operation management of a scientific facility, and in particular an astronomical observatory. This is done by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks and time constraints.
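As a toy illustration of the periodic-scheduling capability (not SUMO's actual code), expanding recurring maintenance tasks into a dated plan can be as simple as:

```python
# Illustrative expansion of periodic tasks; task names are invented.
from datetime import date, timedelta

def expand_periodic(tasks, horizon_days=90, today=None):
    today = today or date.today()
    plan = []
    for name, last_done, period_days in tasks:
        due = last_done + timedelta(days=period_days)
        while due <= today + timedelta(days=horizon_days):
            plan.append((due, name))
            due += timedelta(days=period_days)
    return sorted(plan)

tasks = [("clean primary mirror", date(2014, 6, 1), 30),
         ("verify dome drive", date(2014, 6, 15), 14)]
for due, name in expand_periodic(tasks, today=date(2014, 7, 1)):
    print(due, name)
```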
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
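A minimal sketch of the core schema idea, tasks linked to requirements and to externally stored products, using SQLite; all table and column names are invented for illustration:

```python
# Toy schema illustrating task-to-requirement-to-product linkage.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE requirement (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE task (
        id INTEGER PRIMARY KEY, title TEXT, team TEXT, due DATE,
        requirement_id INTEGER REFERENCES requirement(id));
    CREATE TABLE product (
        id INTEGER PRIMARY KEY, task_id INTEGER REFERENCES task(id),
        uri TEXT);  -- link to the external repository holding the artifact
""")
db.execute("INSERT INTO requirement VALUES (1, 'Define lander mass budget')")
db.execute("INSERT INTO task VALUES (1, 'Mass sensitivity study', 'Systems', '2007-03-01', 1)")
db.execute("INSERT INTO product VALUES (1, 1, 'https://repo.example/studies/42')")

# Filtered report: each task with its driving requirement and products
for row in db.execute("""SELECT t.title, r.text, p.uri FROM task t
                         JOIN requirement r ON r.id = t.requirement_id
                         JOIN product p ON p.task_id = t.id"""):
    print(row)
```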
Aryanto, K Y E; Broekema, A; Langenhuysen, R G A; Oudkerk, M; van Ooijen, P M A
2015-05-01
To develop and test a fast and easy rule-based web environment with optional de-identification of imaging data to facilitate data distribution within a hospital environment. A web interface was built using Hypertext Preprocessor (PHP), an open source scripting language for web development, and Java, with SQL Server to handle the database. The system allows for the selection of patient data and for de-identifying these when necessary. Using the services provided by the RSNA Clinical Trial Processor (CTP), the selected images were pushed to the appropriate services using a protocol based on the module created for the associated task. Five pipelines, each performing a different task, were set up on the server. In a 75-month period, more than 2,000,000 images were transferred and de-identified in a proper manner, while 20,000,000 images were moved from one node to another without de-identification. While maintaining a high level of security and stability, the proposed system is easy to set up, integrates well with our clinical and research practice, and provides a fast and accurate vendor-neutral process for transferring, de-identifying, and storing DICOM images. Its ability to run different de-identification processes in parallel pipelines is a major advantage in both clinical and research settings.
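A sketch of the kind of de-identification step such a pipeline performs, assuming pydicom; the tag list is illustrative and far from a complete DICOM de-identification profile:

```python
# Illustrative removal of a few identifying DICOM tags with pydicom.
import pydicom

def deidentify(path_in, path_out):
    ds = pydicom.dcmread(path_in)
    for tag in ("PatientName", "PatientID", "PatientBirthDate",
                "ReferringPhysicianName", "InstitutionName"):
        if tag in ds:
            setattr(ds, tag, "")
    ds.remove_private_tags()          # strip vendor-specific identifiers
    ds.save_as(path_out)

deidentify("study/img001.dcm", "export/img001.dcm")
```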
WebEAV: automatic metadata-driven generation of web interfaces to entity-attribute-value databases.
Nadkarni, P M; Brandt, C M; Marenco, L
2000-01-01
The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples.
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-custom search services, and a single-machine web crawler cannot handle such demanding tasks. In this paper, through study of the original Scrapy framework, it is improved by combining Scrapy and Redis: a distributed crawler system for Web information based on the Scrapy framework is designed and implemented, and a Bloom filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information captured from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
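The Bloom filter behind such a dupefilter can be sketched in a few lines; the sizes and hash scheme below are illustrative, not those of the described system:

```python
# Double-hashing Bloom filter for URL de-duplication; parameters assumed.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, n_hashes=5):
        self.size = size_bits
        self.n = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, url):
        digest = hashlib.md5(url.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:], "big")
        return [(h1 + i * h2) % self.size for i in range(self.n)]

    def add(self, url):
        for p in self._positions(url):
            self.bits[p // 8] |= 1 << (p % 8)

    def seen(self, url):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(url))

bf = BloomFilter()
bf.add("https://movie.douban.com/subject/1292052/")
print(bf.seen("https://movie.douban.com/subject/1292052/"))  # True
print(bf.seen("https://movie.douban.com/subject/1291546/"))  # False (w.h.p.)
```

A Bloom filter never misses a URL it has stored (no false negatives) but may occasionally flag an unseen URL as seen, which is an acceptable trade for the large memory savings in crawl de-duplication.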
QMachine: commodity supercomputing in web browsers.
Wilkinson, Sean R; Almeida, Jonas S
2014-06-09
Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
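The QM API is not detailed in the abstract; the general post-task/poll-result pattern over HTTP might look like the following, with entirely hypothetical endpoints and payloads:

```python
# Hypothetical post-task/poll-result loop; BASE and routes are invented.
import time
import requests

BASE = "https://qm.example.org/box/demo"      # placeholder service URL

task_id = requests.post(f"{BASE}/jobs",
                        json={"code": "suffix_scan.js",
                              "inputs": ["genome01.fa", "genome02.fa"]}
                        ).json()["id"]

while True:                                   # volunteer browsers do the work
    job = requests.get(f"{BASE}/jobs/{task_id}").json()
    if job["status"] == "done":
        print(job["result"])
        break
    time.sleep(5)
```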
ERIC Educational Resources Information Center
Stockwell, Esther
2016-01-01
This study adapted web-based exploratory tasks using WebQuests as a means of enabling students to understand and reflect on both the target and their own culture. Learners actively used various authentic resources selected to meet their linguistic and cognitive needs to complete the tasks. The aim of this study was to help Japanese university…
Cognitive and Task Influences on Web Searching Behavior.
ERIC Educational Resources Information Center
Kim, Kyung-Sun; Allen, Bryce
2002-01-01
Describes results from two independent investigations of college students that were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Topics include cognitive style; problem-solving; and implications for the design and use of the Web and Web search engines. (Author/LRW)
ERIC Educational Resources Information Center
Sengel, Erhan
2014-01-01
This study aims to investigate the usability level of a university web site by observing 10 participants who were required to complete 11 tasks, defined in advance by the researchers, to gather data about effectiveness, efficiency, and satisfaction. The System Usability Scale was used to collect data about satisfaction. The research…
NASA Astrophysics Data System (ADS)
Buszko, Marian L.; Buszko, Dominik; Wang, Daniel C.
1998-04-01
A custom-written Common Gateway Interface (CGI) program for remote control of an NMR spectrometer using a World Wide Web browser has been described. The program, running on a UNIX workstation, uses multiple processes to handle concurrent tasks of interacting with the user and with the spectrometer. The program's parent process communicates with the browser and sends out commands to the spectrometer; the child process is mainly responsible for data acquisition. Communication between the processes is via the shared memory mechanism. The WWW pages that have been developed for the system make use of the frames feature of web browsers. The CGI program provides an intuitive user interface to the NMR spectrometer, making, in effect, a complex system an easy-to-use Web appliance.
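The parent/child process layout described in this abstract translates naturally into a few lines of Python. The sketch below mirrors that architecture (a parent serving the user while a child acquires data into shared memory) purely as an illustration; the original program was a C CGI application.

```python
# Illustrative translation of the paper's process layout: the child "acquires"
# data into shared memory while the parent polls a shared status flag, standing
# in for the parent's browser/spectrometer communication duties.
import time
from multiprocessing import Process, Value, Array

def acquire(status, buffer):
    """Child: simulate data acquisition into the shared buffer."""
    for i in range(len(buffer)):
        buffer[i] = float(i)   # stand-in for spectrometer samples
        time.sleep(0.01)
    status.value = 1           # signal acquisition complete

if __name__ == "__main__":
    status = Value("i", 0)     # shared flag: 0 = acquiring, 1 = done
    buffer = Array("d", 16)    # shared data buffer
    child = Process(target=acquire, args=(status, buffer))
    child.start()
    while status.value == 0:   # parent: poll while serving the "browser"
        time.sleep(0.05)
    child.join()
    print("acquired:", list(buffer))
```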
One EPA Web: Purpose, Audiences, Top Tasks (Round 2 Sites, April 2012 – January 2013)
Examples of the top audiences and tasks identified for priority topics can help EICs identify their own audiences and tasks for new web areas, as an important part of the content transformation process.
Efficient Evaluation System for Learning Management Systems
ERIC Educational Resources Information Center
Cavus, Nadire
2009-01-01
A learning management system (LMS) provides the platform for a web-based learning environment by enabling the management, delivery, and tracking of learning, testing, communication, registration, and scheduling. Many LMSs on the market can be obtained free or through payment. It has now become an important task to choose…
The Role of Learning Tasks on Attitude Change Using Cognitive Flexibility Hypertext Systems
ERIC Educational Resources Information Center
Godshalk, Veronica M.; Harvey, Douglas M.; Moller, Leslie
2004-01-01
In this study, the authors examined the impact of task assignment on the effectiveness of a Web-based experiential exercise, based on cognitive flexibility theory, in shaping learner attitudes toward the ill-structured topic of sexual harassment. In the research study, we sought to shed light on the use of a cognitive flexibility approach when…
Designing a web site for high school geoscience teaching in Iceland
NASA Astrophysics Data System (ADS)
Douglas, George R.
1998-08-01
The need to construct an earth science teaching site on the web prompted a survey of existing sites which, in spite of containing much of value, revealed many weaknesses in basic design, particularly as regards the organisation of links to information resources. Few web sites take into consideration the particular pedagogic needs of the high school science student and there has, as yet, been little serious attempt to exploit and organise the more outstanding advantages offered by the internet to science teaching, such as accessing real-time data. A web site has been constructed which, through basic design, enables students to access relevant information resources over a wide range of subjects and topics easily and rapidly, while at the same time performing an instructional role in how to handle both on-line and off-line resources. Key elements in the design are selection and monitoring by the teacher, task oriented pages and the use of the Dewey decimal classification system. The intention is to increase gradually the extent to which most teaching tasks are carried out via the web pages, in the belief that they can become an efficient central point for all the earth science curriculum.
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images have important significance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on series images. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. To improve the scalability of the proposed system, business tasks and calculation operations were separated into two modules. The results showed that the proposed system could provide three-dimensional post-processing services for medical images to multiple clients at the same time, meeting the demand for accessing three-dimensional post-processing operations on volume data over the web.
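For context, the WADO access the system builds on can be exercised with a plain HTTP request. The sketch below issues a standard WADO-URI retrieval; the host and UIDs are placeholders, and the paper's 3-D post-processing operations are an extension not shown here.

```python
# Standard WADO-URI retrieval of a single DICOM object over HTTP. The
# parameter names follow the DICOM web access (WADO-URI) convention; the
# host and UIDs are placeholders.
import urllib.parse
import urllib.request

params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.113619.2.1.1",   # placeholder study instance UID
    "seriesUID": "1.2.840.113619.2.1.2",  # placeholder series instance UID
    "objectUID": "1.2.840.113619.2.1.3",  # placeholder SOP instance UID
    "contentType": "application/dicom",
}
url = "https://pacs.example.org/wado?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    dicom_bytes = resp.read()
print(len(dicom_bytes), "bytes retrieved")
```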
Modelling of Tethered Space-Web Structures
NASA Astrophysics Data System (ADS)
McKenzie, D. J.; Cartnell, M. P.
Large structures in space are an essential milestone in the path of many projects, from solar power collectors to space stations. In space, as on Earth, these large projects may be split up into more manageable sections, dividing the task into multiple replicable parts. Specially constructed spider robots could assemble these structures piece by piece over a membrane or space- web, giving a method for building a structure while on orbit. The modelling and applications of these space-webs are discussed, along with the derivation of the equations of motion of the structure. The presentation of some preliminary results from the solution of these equations will show that space-webs can take a variety of different forms, and give some guidelines for configuring the space-web system.
ERIC Educational Resources Information Center
Zimmerman, Don; Paschal, Dawn Bastian
2009-01-01
In an exploratory study, participants (n = 18) completed 11 usability tasks to assess the ease of use of two Web sites, and then completed a Web site perception questionnaire for each. Participants rated both Web sites positively, but 25% and 36%, respectively, could not complete all tasks; completing a task required more than a minute. (Contains 2 figures and 7 tables.)
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. An effective way for cloud users to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive, web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide end users with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system: massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker containers, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in IPython Notebook web pages through the browser to process data, and the scripts are submitted to the IPython kernel for execution. By comparing the performance of remote sensing analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in IPython Notebook. Users can write complex data-processing code on the web directly, so they can design their own data-processing algorithms.
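To make the container-per-task idea concrete, here is a minimal sketch using the Docker SDK for Python, assuming a local Docker daemon; the image and command are placeholders standing in for the platform's IPython/GDAL processing stack.

```python
# Minimal sketch: run one isolated processing task in a container with explicit
# CPU and memory limits, mirroring the resource-isolation argument above.
import docker

client = docker.from_env()
output = client.containers.run(
    image="osgeo/gdal:latest",            # placeholder processing image
    command=["gdalinfo", "--version"],    # stand-in for a user script
    mem_limit="512m",
    nano_cpus=1_000_000_000,              # 1 CPU
    remove=True,
)
print(output.decode())
```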
The OGC Sensor Web Enablement framework
NASA Astrophysics Data System (ADS)
Cox, S. J.; Botts, M.
2006-12-01
Sensor observations are at the core of the natural sciences. Improvements in data-sharing technologies offer the promise of much greater utilisation of observational data. A key to this is interoperable data standards. The Open Geospatial Consortium's (OGC) Sensor Web Enablement initiative (SWE) is developing open standards for web interfaces for the discovery, exchange and processing of sensor observations, and the tasking of sensor systems. The goal is to support the construction of complex sensor applications through real-time composition of service chains from standard components. The framework is based around a suite of standard interfaces and standard encodings for the messages transferred between services. The SWE interfaces include the Sensor Observation Service (SOS), for parameterized observation requests (by observation time, feature of interest, property, sensor); the Sensor Planning Service (SPS), for tasking a sensor system to undertake future observations; and the Sensor Alert Service (SAS), for subscription to an alert, usually triggered by a sensor result exceeding some value. The interface design generally follows the pattern established in the OGC Web Map Service (WMS) and Web Feature Service (WFS) interfaces, where the interaction between a client and service follows a standard sequence of requests and responses: the first obtains a general description of the service capabilities, the next obtains the detail required to formulate a data request, and the last requests a data instance or stream. These may be implemented in a stateless "REST" idiom, or using conventional "web-services" (SOAP) messaging. In a deployed system, the SWE interfaces are supplemented by Catalogue, data (WFS) and portrayal (WMS) services, as well as authentication and rights management. The standard SWE data formats are Observations and Measurements (O&M), which encodes observation metadata and results; Sensor Model Language (SensorML), which describes sensor systems; Transducer Model Language (TML), which covers low-level data streams; and domain-specific GML Application Schemas for definitions of the target feature types. The SWE framework has been demonstrated in several interoperability testbeds, based around emergency management, security, contamination and environmental monitoring scenarios.
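The capabilities-then-data request sequence can be sketched against a key-value-pair SOS endpoint. The service URL, offering, and observed property below are assumptions for illustration; the parameter names follow the OGC SOS key-value-pair convention.

```python
# Sketch of the standard SOS interaction: GetCapabilities first, then a
# parameterized GetObservation request against a placeholder endpoint.
import urllib.parse
import urllib.request

SOS = "https://sensors.example.org/sos"  # placeholder SOS endpoint

def kvp_get(params):
    url = SOS + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

capabilities = kvp_get({"service": "SOS", "request": "GetCapabilities"})
observations = kvp_get({
    "service": "SOS", "version": "2.0.0", "request": "GetObservation",
    "offering": "network-temperature",        # assumed offering name
    "observedProperty": "air_temperature",    # assumed property name
})
print(observations[:200])
```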
Exploring the Influence of Web-Based Portfolio Development on Learning To Teach Elementary Science.
ERIC Educational Resources Information Center
Avraamidou, Lucy; Zembal-Saul, Carla
This study examined how Web-based portfolio development supported reflective thinking and learning within a Professional Development School (PDS). It investigated the evidence-based philosophies developed by prospective teachers as a central part of the Web-based portfolio task, noting how technology contributed to the portfolio task. Participants…
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
ERIC Educational Resources Information Center
Coiro, Julie; Fogleman, Jay
2011-01-01
Online resources can deepen student learning--if teachers design the right tasks and learner supports. In this article, the authors look at instructional websites teachers will want to use with their students. They focus on three types of web-based learning environments--(1) informational reading systems; (2) interactive learning systems; and (3)…
Flipping Introduction to MIS for a Connected World
ERIC Educational Resources Information Center
Law, Wai K.
2014-01-01
It has been increasingly challenging to provide an introductory coverage of the rapidly expanding fields in Information Systems (IS). The task has been further complicated by the popularity of web resources and cloud services. A new generation of technically savvy learners, while recognizing the significance of information systems, expects…
Coherent visualization of spatial data adapted to roles, tasks, and hardware
NASA Astrophysics Data System (ADS)
Wagner, Boris; Peinsipp-Byma, Elisabeth
2012-06-01
Modern crisis management requires that users with different roles and computer environments deal with a high volume of varied data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task to be solved. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. To set the visualization rules, generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard, are used. SLDs allow specifying which data are shown, when, and how. The defined SLDs consider the users' roles and task requirements. In addition, it is possible to use different displays, and the visualization adapts to the individual resolution of the display, so that excessively high or low information density is avoided. The system also enables users with different roles to work together simultaneously using the same database. Every user is provided with appropriate and coherent spatial data depending on his current task. The refined spatial data are served via the OGC services Web Map Service (WMS: server-side rendered raster maps) or Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
Development Status of the Advanced Life Support On-Line Project Information System
NASA Technical Reports Server (NTRS)
Levri, Julie A.; Hogan, John A.; Cavazzoni, Jim; Brodbeck, Christina; Morrow, Rich; Ho, Michael; Kaehms, Bob; Whitaker, Dawn R.
2005-01-01
The Advanced Life Support Program has recently accelerated an effort to develop an On-line Project Information System (OPIS) for research project and technology development data centralization and sharing. The core functionality of OPIS will launch in October of 2005. This paper presents the current OPIS development status. OPIS core functionality involves a Web-based annual solicitation of project and technology data directly from ALS Principal Investigators (PIs) through customized data collection forms. Data provided by PIs will be reviewed by a Technical Task Monitor (TTM) before the information is posted to OPIS for ALS Community viewing via the Web. The data will be stored in an object-oriented relational database (created in MySQL(R)) located on a secure server at NASA ARC. Upon launch, OPIS can be utilized by Managers to identify research and technology development gaps and to assess task performance. Analysts can employ OPIS to obtain.
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D
2007-10-01
Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.
NASA Technical Reports Server (NTRS)
2000-01-01
Oak Grove Reactor, developed by Oak Grove Systems, is a new software program that allows users to integrate workflow processes. It can be used with portable communication devices. The software can join e-mail, calendar/scheduling, and legacy applications into one interactive system via the web. Priority tasks and due dates are organized and highlighted to keep the user up to date with developments. Reactor works with existing software, and few new skills are needed to use it. Using a web browser, a user can work on a procedure while other users work on the same procedure or view its status from another site. The software was developed by the Jet Propulsion Lab and originally put to use at Johnson Space Center.
Using Heuristic Task Analysis to Create Web-Based Instructional Design Theory
ERIC Educational Resources Information Center
Fiester, Herbert R.
2010-01-01
The first purpose of this study was to identify procedural and heuristic knowledge used when creating web-based instruction. The second purpose of this study was to develop suggestions for improving the Heuristic Task Analysis process, a technique for eliciting, analyzing, and representing expertise in cognitively complex tasks. Three expert…
Instructor Perceptions of Web Technology Feature and Instructional Task Fit
ERIC Educational Resources Information Center
Strader, Troy J.; Reed, Diana; Suh, Inchul; Njoroge, Joyce W.
2015-01-01
In this exploratory study, university faculty (instructor) perceptions of the extent to which eight unique features of Web technology are useful for various instructional tasks are identified. Task-technology fit propositions are developed and tested using data collected from a survey of instructors in business, pharmacy, and arts/humanities. It…
Collaborative Tasks in Web Conferencing: A Case Study on Chinese Online
ERIC Educational Resources Information Center
Guo, Sijia; Möllering, Martina
2017-01-01
This case study aimed to explore best practice in applying task-based language teaching (TBLT) via a Web-conferencing tool, Blackboard Collaborate, in a beginners' online Chinese course by evaluating the pedagogical values and limitations of the software and the tasks designed. Chapelle's (2001) criteria for computer-assisted language learning…
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
2017-06-24
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data, increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service, that automates integration of high throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
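Because the abstract points at a published WSDL definition, a SOAP client can be generated directly from it. A minimal sketch with the zeep library follows; it only introspects the service, since the operation names and argument layouts are not given here and should be read from the WSDL itself.

```python
# Minimal sketch, assuming the zeep SOAP client library; the WSDL URL is the
# one published above. dump() prints the bindings and operations the service
# actually exposes, the safe first step before calling anything.
from zeep import Client

client = Client("http://sbmlmod.uit.no/SBMLmod.wsdl")
client.wsdl.dump()
```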
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
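The headline comparison above is a plain Pearson correlation between the two scoring pipelines. A tiny sketch follows; the score arrays are made-up illustrative values, not the study's data.

```python
# Correlating novelty-preference scores from the 60 FPS eye tracker with
# manually scored 3 FPS web-camera values. Inputs are illustrative only.
from scipy.stats import pearsonr

tracker_60fps = [0.62, 0.71, 0.58, 0.66, 0.74, 0.69]  # illustrative scores
webcam_3fps   = [0.60, 0.73, 0.55, 0.68, 0.71, 0.70]

r, p = pearsonr(tracker_60fps, webcam_3fps)
print(f"r = {r:.2f}, p = {p:.3f}")
```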
ERIC Educational Resources Information Center
Jakovljevic, Maria; Ankiewicz, Piet; De swardt, Estelle; Gross, Elna
2004-01-01
Traditional instructional methodology in the Information System Design (ISD) environment lacks explicit strategies for promoting the cognitive skills of prospective system designers. This contributes to the fragmented knowledge and low motivational and creative involvement of learners in system design tasks. In addition, present ISD methodologies,…
Omics Metadata Management Software v. 1 (OMMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks to bioinformatics analyses, results management, and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, and is flexible, extensible, and easily installed and run by operators with general system administration and scripting language literacy.
Adaptive Semantic and Social Web-based learning and assessment environment for the STEM
NASA Astrophysics Data System (ADS)
Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar
2014-05-01
We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in the semi-automatic development and update of their common domain and task ontologies and the building of their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who have no knowledge of the domain. The Social Web component of the personal adaptive system will allow individual and group learners to interact with each other, discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and difficulty levels based on learners' differences, which can lead to different understandings of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences; allow authors to assemble remotely accessed learning material into courses; and provide facilities for instructors to assess (in real time) students' perception of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user-friendly Web interfaces. These include the learning ontology (modeling learning objects, educational resources, and learning goals); the context ontology (supporting the adaptive strategy by detecting the student's situation); the domain ontology (structuring concepts and context); the learner ontology (modeling the student's profile, preferences, and behavior); task ontologies; the technological ontology (defining the devices and places that surround the student); the pedagogy ontology; and the learner ontology (defining time constraints, comments, and profile).
IDEAS and App Development Internship in Hardware and Software Design
NASA Technical Reports Server (NTRS)
Alrayes, Rabab D.
2016-01-01
In this report, I will discuss the tasks and projects I have completed while working as an electrical engineering intern during the spring semester of 2016 at NASA Kennedy Space Center. In the field of software development, I completed tasks for the G-O Caching Mobile App and the Asbestos Management Information System (AMIS) Web App. The G-O Caching Mobile App was written in HTML, CSS, and JavaScript on the Cordova framework, while the AMIS Web App is written in HTML, CSS, JavaScript, and C# on the AngularJS framework. My goals and objectives on these two projects were to produce an app with an eye-catching and intuitive User Interface (UI), which will attract more employees to participate; to produce a fully-tested, fully functional app which supports workforce engagement and exploration; to produce a fully-tested, fully functional web app that assists technicians working in asbestos management. I also worked in hardware development on the Integrated Display and Environmental Awareness System (IDEAS) wearable technology project. My tasks on this project were focused in PCB design and camera integration. My goals and objectives for this project were to successfully integrate fully functioning custom hardware extenders on the wearable technology headset to minimize the size of hardware on the smart glasses headset for maximum user comfort; to successfully integrate fully functioning camera onto the headset. By the end of this semester, I was able to successfully develop four extender boards to minimize hardware on the headset, and assisted in integrating a fully-functioning camera into the system.
Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track
2015-11-20
Paul N. Bennett, Microsoft Research, Redmond, USA. ...anchor text graph has proven useful in the general realm of query reformulation [2], we sought to quantify the value of extracting key phrases from... anchor text in the broader setting of the task understanding track. Given a query, our approach considers a simple method for identifying a relevant...
Identification and Illustration of Insecure Direct Object References and their Countermeasures
NASA Astrophysics Data System (ADS)
KumarShrestha, Ajay; Singh Maharjan, Pradip; Paudel, Santosh
2015-03-01
An insecure direct object reference represents a flaw in system design: the absence of a full protection mechanism for sensitive system resources or data. It occurs when a web application developer provides direct access to objects based on user input. An attacker can then exploit this web vulnerability and gain access to privileged information by bypassing authorization. The main aim of this paper is to demonstrate the real effect and the identification of insecure direct object references, and then to provide feasible preventive solutions so that web applications do not allow direct object references to be manipulated by attackers. The experiment on insecure direct object referencing is carried out using the deliberately insecure J2EE web application WebGoat, and its security testing is performed using another Java-based tool, Burp Suite. The experimental results show that the access control check for gaining access to privileged information is a very simple problem, but at the same time its correct implementation is a tricky task. The paper finally presents some ways to overcome this web vulnerability.
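As a concrete illustration of the countermeasure the paper calls for (an explicit access control check on every object reference), here is a minimal Flask sketch; the route, data, and session handling are illustrative assumptions, not the paper's WebGoat exercise.

```python
# Minimal IDOR countermeasure in Flask: the handler verifies that the
# authenticated user owns the requested object instead of trusting the id
# supplied in the URL. All names and data here are illustrative.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only"  # required for sessions; placeholder value

DOCS = {
    1: {"owner": "alice", "body": "salary data"},
    2: {"owner": "bob", "body": "patient record"},
}

@app.route("/docs/<int:doc_id>")
def get_doc(doc_id):
    doc = DOCS.get(doc_id)
    if doc is None:
        abort(404)
    # The access control check: without it, any logged-in user could walk the
    # id space (/docs/1, /docs/2, ...) and read other users' objects.
    if doc["owner"] != session.get("user"):
        abort(403)
    return doc["body"]
```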
Evaluation of a Web Conferencing Tool and Collaborative Tasks in an Online Chinese Course
ERIC Educational Resources Information Center
Guo, Sijia
2014-01-01
This case study aims to explore the best practice of applying task-based language teaching (TBLT) via the web conferencing tool Blackboard Collaborate in a beginners' online Chinese course by evaluating the technical capacity of the software and the pedagogical values and limitations of the tasks designed. In this paper, Chapelle's (2001) criteria…
Using a web-based system for the continuous distance education in cytopathology.
Stergiou, Nikolaos; Georgoulakis, Giannis; Margari, Niki; Aninos, Dionisios; Stamataki, Melina; Stergiou, Efi; Pouliakis, Abraam; Karakitsos, Petros
2009-12-01
The evolution of information technologies and telecommunications has made the World Wide Web a low-cost and easily accessible tool for the dissemination of information and knowledge. Continuing Medical Education (CME) sites dedicated to the field of cytopathology are rather scarce; they do not keep pace with constant change, and they lack the ability to provide cytopathologists with a dynamic learning environment adaptable to the development of cytopathology. Learning methods that develop skills such as decision making, reasoning, and problem solving are critical in building such a learning environment. The objectives of this study are (1) to demonstrate, on the basis of a web-based training system, the successful application of traditional learning theories and methods, and (2) to effectively evaluate users' perceptions of the educational program, using a combination of observers, theories, and methods. Trainees are given the opportunity to browse the educational material, collaborate in synchronous and asynchronous modes, practice their skills through problems and tasks, and test their knowledge using the self-evaluation tool. The trainers, in turn, are responsible for editing learning material, following students' progress, and organizing the problem-based and task-based scenarios. The implementation of the web-based training system is based on a three-tier architecture and uses an Apache Tomcat web server and a MySQL database server. As of December 2008, CytoTrainer's learning environment contained two courses in cytopathology, Gynaecological Cytology and Thyroid Cytology, offering about 2000 digital images and 20 case sessions. Our evaluation method combined qualitative and quantitative approaches to explore how the various parts of the system and students' attitudes work together. Trainees approved of the course's content, methodology, and learning activities. The triangulation of evaluation methods revealed that the training program is suitable for continuous distance education in cytopathology and that it has improved trainees' skills in diagnostic cytopathology. The web-based training system can be successfully used for continuous distance education in cytopathology. It provides the opportunity to access learning material from any place at any time and supports the acquisition of diagnostic knowledge.
PathCase-SB architecture and database design
2011-01-01
Background Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
The Creative task Creator: a tool for the generation of customized, Web-based creativity tasks.
Pretz, Jean E; Link, John A
2008-11-01
This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
Protecting Database Centric Web Services against SQL/XPath Injection Attacks
NASA Astrophysics Data System (ADS)
Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique
Web services represent a powerful interface for back-end database systems and are increasingly being used in business critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks, by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, showing to be 100% effective in stopping attacks, non-intrusive and very easy to use.
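The paper's mechanism works transparently at the service layer; as background, the sketch below shows the underlying failure mode and the standard parameterized-query defence, using Python's sqlite3 with an illustrative table and payload. This shows the general idea, not the authors' implementation.

```python
# A parameterized query keeps attacker-controlled input out of the SQL parse
# tree: the driver binds the value as data, so the payload matches no row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload
# Vulnerable: concatenation lets the payload rewrite the WHERE clause.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"
# Safe: the placeholder binds the whole string as a single literal value.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload is treated as data, not SQL
```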
Development of wide area environment accelerator operation and diagnostics method
NASA Astrophysics Data System (ADS)
Uchiyama, Akito; Furukawa, Kazuro
2015-08-01
Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
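A hedged sketch of the WebSocket client idea follows: a remote OPI subscribes to a process variable and receives pushed updates from a gateway bridging to EPICS Channel Access. The URI, message schema, and PV name are assumptions, not the authors' protocol.

```python
# Browser-style WebSocket client: subscribe to a process variable and print
# updates pushed by an assumed gateway that bridges to Channel Access.
import asyncio
import json

import websockets

async def monitor_pv(uri, pv):
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"type": "monitor", "pv": pv}))
        async for message in ws:               # server pushes value updates
            update = json.loads(message)
            print(update.get("pv"), update.get("value"))

asyncio.run(monitor_pv("wss://gateway.example.org/ca", "BEAM:CURRENT"))
```

The appeal over plain HTTP polling is that the gateway pushes each value change over one long-lived connection, which suits monitoring displays on a WAN.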
A Novel Architecture for E-Learning Knowledge Assessment Systems
ERIC Educational Resources Information Center
Gierlowski, Krzysztof; Nowicki, Krzysztof
2009-01-01
In this article we propose a novel e-learning system, dedicated strictly to knowledge assessment tasks. In its functioning it utilizes web-based technologies, but its design differs radically from currently popular e-learning solutions which rely mostly on thin-client architecture. Our research proved that such architecture, while well suited for…
CSHM: Web-based safety and health monitoring system for construction management.
Cheung, Sai On; Cheung, Kevin K W; Suen, Henry C H
2004-01-01
This paper describes a web-based system for monitoring and assessing construction safety and health performance, entitled the Construction Safety and Health Monitoring (CSHM) system. The design and development of CSHM integrate internet and database systems, with the intent of creating a fully automated safety and health management tool. A list of safety and health performance parameters was devised for the management of safety and health in construction. A conceptual framework of the four key components of CSHM is presented: (a) Web-based Interface (templates); (b) Knowledge Base; (c) Output Data; and (d) Benchmark Group. The combined effect of these components is a system that enables speedy assessment of safety and health activities on construction sites. With CSHM's built-in functions, important management decisions can theoretically be made and corrective actions taken before potential hazards turn into fatal or injurious occupational accidents. As such, the CSHM system will accelerate the monitoring and assessment of safety and health management tasks.
Hybrid Exploration Agent Platform and Sensor Web System
NASA Technical Reports Server (NTRS)
Stoffel, A. William; VanSteenberg, Michael E.
2004-01-01
A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW), like the ANTS concept, is a sensor web concept, but it is conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low cost, high return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP-SW consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DUs). HEAP-SW as a whole refers to any group of mobile agents or robots where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; and 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.
Web Based Information System for Job Training Activities Using Personal Extreme Programming (PXP)
NASA Astrophysics Data System (ADS)
Asri, S. A.; Sunaya, I. G. A. M.; Rudiastari, E.; Setiawan, W.
2018-01-01
Job training is a subject in universities and polytechnics that involves many users and reporting activities. Time and distance become problems for users who must report and complete obligatory tasks during job training, owing to the location where the training takes place. This research developed a web-based job training information system to overcome these problems. The system was developed using Personal Extreme Programming (PXP). PXP is an agile method that combines Extreme Programming (XP) and the Personal Software Process (PSP). The information system was developed and tested: 24% of users strongly agree, 74% agree, 1% disagree, and 0% strongly disagree about the system's functionality.
Shared Values as Anchors of a Learning Community: A Case Study in Information Systems Design
ERIC Educational Resources Information Center
Giordano, Daniela
2004-01-01
This paper examines the role in both individual and organizational learning of the system of values sustained by a community undertaking a design task. The discussion is based on the results of a longitudinal study of a community of novice information system designers supported by a Web-based shared design memory which allows reuse of design…
NASA Astrophysics Data System (ADS)
Duan, Zhongyan
Under the "3-using" principle in the philosophy of caliber-oriented education to success (CETS), this paper makes a tentative qualitative study of the application of the task-based approach to the web-based teaching of English-Chinese translation. Translation teaching is characterized by its practicality. The task-based approach can therefore be employed to guide web-based content collection and the process of English translation teaching. In this way, the prospect for enhancing students' translation ability is quite encouraging, as has been verified by one year of teaching.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
One EPA Web Guidances and Checklists
These One EPA Web resources are available to editors with Web Guide access. Learn about content development, web council and EIC responsibilities, audiences and top tasks, website format and structure, and site review and approval.
Alvarez-Bermejo, J A; Hernández-Capel, D M; Belmonte-Ureña, L J; Roca-Piera, J
2009-01-01
To ensure the quality of services provided in centres where dependent persons are seen by specialist services, by improving how information (salaries, control of tasks, patients' records, etc.) is shared between staff and carers, a web information system has been developed and experimentally deployed. The accuracy of the system was evaluated by assessing how confident the employees were with it, rather than by relying on statistical data. It has been experimentally deployed since January 2009 at Asociación de Personas con Discapacidad "El Saliente", which manages several day centres in Almeria for dependent persons over 65 years old, particularly those affected by Alzheimer's disease. Incidence data were collected during the experimental period. A total of 84% of the employees thought that the system helped to manage documents, administrative duties, etc., and 92.4% said they could attend to really important tasks because the system was responsible for alerting them to every task, such as medication timetables and checking that all patients were present (to prevent a person affected by Alzheimer's from leaving the centre). During this period the reported incidences were reduced by about 30%, although the data are still only partially representative. As the life expectancy of the population gets longer, the number of these centres will increase. Providing systems such as the one presented here would be of great help for administrative duties (sensitive data protection, etc.) as well as for ensuring high-quality care and attention.
Usability Evaluation of Public Web Mapping Sites
NASA Astrophysics Data System (ADS)
Wang, C.
2014-04-01
Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites are no longer foreign to people. Nowadays, people use these sites for various reasons, as more and more maps and related map services are freely available to end users. The growing user base has in turn led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites through examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively, and provide guidelines for future design based on the test results. First, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD) and Human-Computer Interaction (HCI) are briefly introduced. The method and procedure of the usability test are then presented in detail. Four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people of different GIS skill levels (novice users or experts), genders, ages and nationalities participated, completing several test tasks in different teams. The test comprised three parts: a pretest background information questionnaire, a set of test tasks for quantitative statistics and progress analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining qualitative verbal explanations of participants' actions, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. The results from the test part were then analyzed: the success rates of the different public web mapping sites were calculated, compared, and displayed in diagrams, and the questionnaire answers were classified and organized. Based on this analysis, the paper expands the discussion to layout, map visualization, map tools, search logic, etc. Finally, the paper closes with some valuable guidelines and suggestions for the design of public web mapping sites, and states the limitations of the research.
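The success-rate comparison described above reduces to simple arithmetic over pass/fail observations; a tiny sketch follows, with made-up counts for illustration.

```python
# Per-site task success rates aggregated from pass/fail outcomes.
# The outcome lists are illustrative, not the study's data.
results = {
    "Google Maps": [1, 1, 0, 1, 1],   # 1 = task completed, 0 = failed
    "Bing Maps":   [1, 0, 0, 1, 1],
}
for site, outcomes in results.items():
    rate = 100 * sum(outcomes) / len(outcomes)
    print(f"{site}: {rate:.0f}% success")
```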
Using Web GIS "Climate" for Adaptation to Climate Change
NASA Astrophysics Data System (ADS)
Gordova, Yulia; Martynova, Yulia; Shulgina, Tamara
2015-04-01
This work is devoted to the application of the information-computational Web GIS "Climate", developed by a joint team from the Institute of Monitoring of Climatic and Ecological Systems SB RAS and Tomsk State University, to raise awareness about current and future climate change as a basis for further adaptation. Web GIS "Climate" (http://climate.scert.ru/), based on modern Web 2.0 concepts, provides opportunities to study regional climate change and its consequences by providing access to climate and weather models, a large set of geophysical data, and means of processing and visualization. The system is also used for the joint development of software applications by distributed research teams, for research based on these applications, and for training undergraduate and graduate students. In addition, the system's capabilities allow the creation of information resources to raise public awareness about climate change, its causes and consequences, which is a necessary step for subsequent adaptation to these changes. A basic information course on climate change is available in the public domain and is aimed at the local population. Basic concepts and problems of modern climate change and its possible consequences are set out and illustrated in accessible language, with particular attention paid to regional climate changes. In addition to the informational part, the course includes a selection of links to popular-science network resources on current issues in the Earth sciences and a number of practical tasks to consolidate the material. These tasks are performed for a particular territory: users analyze map layers prepared within "Climate" and answer questions of direct interest to the public, such as "How did the minimum winter temperature change in your area?" and "What are the dynamics of maximum summer temperatures?". Analyzing the dynamics of climate change contributes to a better understanding of climate processes and further adaptation. Passing this course raises the awareness of the general public and prepares the user for subsequent registration in the system and work with its tools in independent research. This work is partially supported by SB RAS project VIII.80.2.1 and RFBR grants 13-05-12034 and 14-05-00502.
NOAO observing proposal processing system
NASA Astrophysics Data System (ADS)
Bell, David J.; Gasson, David; Hartman, Mia
2002-12-01
Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.
Usability Evaluation of a Web-Based Symptom Monitoring Application for Heart Failure.
Wakefield, Bonnie; Pham, Kassie; Scherubel, Melody
2015-07-01
Symptom recognition and reporting by patients with heart failure are critical to avoid hospitalization. This project evaluated a patient symptom tracking application. Fourteen end users (nine patients, five clinicians) from a Midwestern Veterans Affairs Medical Center evaluated the website using a think aloud protocol. A structured observation protocol was used to assess success or failure for each task. Measures included task time, success, and satisfaction. Patients had a mean age of 70 years; clinicians averaged 42 years in age. Patients took 9.3 min and clinicians took less than 3 min per scenario. Most patients needed some assistance, but few patients were completely unable to complete some tasks. Clinicians demonstrated few problems navigating the site. Patient System Usability Scale item scores ranged from 2.0 to 3.6; clinician item scores ranged from 1.8 to 4.0. Further work is needed to determine whether using the web-based tool improves symptom recognition and reporting. © The Author(s) 2015.
Web Exclusive--Is the Sky the Limit to Educational Improvement?
ERIC Educational Resources Information Center
Schleicher, Andreas
2012-01-01
Today, education systems need to enable people to become lifelong learners, to manage complex ways of thinking and complex ways of working that computers can't take over easily. The task for educators and policy makers is to ensure that countries rise to this challenge. High performing education systems like Finland's and Singapore's tend to…
Write, read and answer emails with a dry 'n' wireless brain-computer interface system.
Pinegger, Andreas; Deckert, Lisa; Halder, Sebastian; Barry, Norbert; Faller, Josef; Käthner, Ivo; Hintermüller, Christoph; Wriessnegger, Selina C; Kübler, Andrea; Müller-Putz, Gernot R
2014-01-01
Brain-computer interface (BCI) users can control very complex applications such as multimedia players or even web browsers. To this end, different biosignal acquisition systems are available to noninvasively measure the electrical activity of the brain, the electroencephalogram (EEG). To make BCIs more practical, hardware and software are nowadays designed to be more user-centered and user-friendly. In this paper we evaluated one of the latest innovations in the area of BCI: a wireless EEG amplifier with dry electrode technology combined with a web browser that enables BCI users to use standard webmail. With this system, ten volunteers performed a daily life task: writing, reading and answering an email. Experimental results of this study demonstrate the power of the introduced BCI system.
ICTNET at Web Track 2009 Diversity task
2009-11-01
performance. On the World Wide Web, there exist many documents which represent several implicit subtopics. We used commercial search engines to gather those...documents. In this task, our work can be divided into five steps. First, we collect documents returned by commercial search engines, and considered
Arab, Lenore; Hahn, Harry; Henry, Judith; Chacko, Sara; Winter, Ashley; Cambou, Mary C
2010-03-01
Screening and tracking subjects and data management in clinical trials require significant investments in manpower that can be reduced through the use of web-based systems. To support a validation trial of various dietary assessment tools that required multiple clinic visits and eight repeats of online assessments, we developed an interactive web-based system to automate all levels of management of a biomarker-based clinical trial. The "Energetics System" was developed to support 1) the work of the study coordinator in recruiting, screening and tracking subject flow, 2) the need of the principal investigator to review study progress, and 3) continuous data analysis. The system was designed to automate web-based self-screening into the trial. It supported scheduling tasks and triggered tailored messaging for late and non-responders. For the investigators, it provided real-time status overviews on all subjects, created electronic case reports, supported data queries and prepared analytic data files. Encryption and multi-level password protection were used to ensure data privacy. The system was programmed iteratively and required six months of a web programmer's time along with active team engagement. In this study, the enhancements in speed and efficiency of recruitment and in quality of data collection resulting from this system outweighed the initial investment. Web-based systems have the potential to streamline the process of recruitment and day-to-day management of clinical trials in addition to improving efficiency and quality. Because of their added value they should be considered for trials of moderate size or complexity. Copyright 2009 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson Khosah
2007-07-31
Advanced Technology Systems, Inc. (ATS) was contracted by the U.S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.
ERIC Educational Resources Information Center
Silva, Mary Lourdes; Adams Delaney, Susan; Cochran, Jolene; Jackson, Ruth; Olivares, Cory
2015-01-01
The majority of research on the implementation of ePortfolios focuses on curriculum, faculty development, or student buy-in. When ePortfolio systems have been described in technical terms, the focus has been on the functionality, affordances, and limitations of ePortfolio systems (e.g., TaskStream, LiveText), free web tools (e.g., Google Docs),…
Relevance of Web Documents:Ghosts Consensus Method.
ERIC Educational Resources Information Center
Gorbunov, Andrey L.
2002-01-01
Discusses how to improve the quality of Internet search systems and introduces the Ghosts Consensus Method which is free from the drawbacks of digital democracy algorithms and is based on linear programming tasks. Highlights include vector space models; determining relevant documents; and enriching query terms. (LRW)
Seamless Management of Paper and Electronic Documents for Task Knowledge Sharing
NASA Astrophysics Data System (ADS)
Kojima, Hiroyuki; Iwata, Ken
Due to the progress of Internet technology and the increase of distributed information on networks, present-day knowledge management is based more and more on the experience of various users. Moreover, despite the increase of electronic documents, the use of paper documents has not declined, because of their convenience. This paper describes a method of tracking paper document locations and contents using radio frequency identification (RFID) technology. The research also focuses on the expression of a task process and the seamless structuring of related electronic and paper documents that results from formalizing task knowledge through information organizing. A system is proposed that implements information organization for both Web documents and paper documents using the task model description and RFID technology. Examples from a prototype system are also presented.
Large area sheet task: Advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.
1981-01-01
The growth of silicon dendritic web for photovoltaic applications was investigated. The application of a thermal model for calculating buckling stresses as a function of temperature profile in the web is discussed. Lid and shield concepts were evaluated to provide the data base for enhancing growth velocity. An experimental web growth machine which embodies in one unit the mechanical and electronic features developed in previous work was developed. In addition, evaluation of a melt level control system was begun, along with preliminary tests of an elongated crucible design. The economic analysis was also updated to incorporate some minor cost changes. The initial applications of the thermal model to a specific configuration gave results consistent with experimental observation in terms of the initiation of buckling vs. width for a given crystal thickness.
webpic: A flexible web application for collecting distance and count measurements from images
2018-01-01
Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows for more high quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application, webpic, implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software and highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592
NASA Astrophysics Data System (ADS)
Tong, Rong
As a primary digital library portal for astrophysics researchers, the SAO/NASA ADS (Astrophysics Data System) 2.0 interface features several visualization tools such as the Author Network and Metrics. This research study involved 20 long-term ADS users who participated in a usability and eye-tracking research session. Participants first completed a cognitive test and then performed five tasks in ADS 2.0 in which they explored its multiple visualization tools. Results show that over half of the participants were Imagers and half were Analytic. Cognitive styles were found to have significant impacts on several efficiency-based measures: Analytic-oriented participants were observed to spend less time on web pages and apps and to make fewer web page changes than less Analytic participants when performing common tasks, while AI (Analytic-Imagery) participants completed their five tasks faster than non-AI participants. Meanwhile, self-identified Imagery participants were found to be more efficient in their task completion across multiple measures, including total time on task, number of mouse clicks, and number of query revisions made. Imagery scores were negatively associated with the frequency of confusion and the observed counts of being surprised, and compared to those who did not claim to be visual persons, self-identified Imagery participants showed significantly less frustration and hesitation during task performance. Both demographic variables and past user experience were found to correlate with task performance. Query revisions, considered an indicator of efficiency, correlated negatively with the rate of completing tasks with ease, and positively with several time-based efficiency measures, the rate of completing tasks with some difficulty, and the frequency of frustration. These results provide rich insights into the cognitive styles of ADS's core users; into the impact of those styles and demographic attributes on task performance, affective and cognitive experiences, and interaction behaviors while using the visualization components of ADS 2.0; and can subsequently contribute to the design of bibliographic retrieval systems for scientists.
A Database-Based and Web-Based Meta-CASE System
NASA Astrophysics Data System (ADS)
Eessaar, Erki; Sgirka, Rünno
Each Computer Aided Software Engineering (CASE) system provides support to a software process or to specific tasks or activities that are part of a software process. Each meta-CASE system allows us to create new CASE systems. The creators of a new CASE system have to specify the abstract syntax of the language that is used in the system, as well as the functionality and non-functional properties of the new system. Many meta-CASE systems record their data directly in files. In this paper, we introduce a meta-CASE system whose enabling technology is an object-relational database management system (ORDBMS). The system allows users to manage specifications of languages and to create models by using these languages. The system has a web-based, form-based user interface. We have created a proof-of-concept prototype of the system by using the PostgreSQL ORDBMS and the PHP scripting language.
Understanding and Improving Knowledge Transactions in Command and Control
2003-06-01
implications for the development of tools to facilitate efficient and effective knowledge exchange. Cognitive task analysis (CTA) in support...makers]?" *quotes taken from K-Web cognitive task analysis, Global 2000 and Global 2001 War Games, interviews with Carl Vinson K-Web users following
EST-PAC a web package for EST annotation and protein sequence prediction
Strahm, Yvan; Powell, David; Lefèvre, Christophe
2006-01-01
With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented, multi-platform software package for EST annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: 1) searching local or remote biological databases for sequence similarities using Blast services, 2) predicting protein coding sequence from EST data and, 3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics. PMID:17147782
A web-based platform for virtual screening.
Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J
2003-09-01
A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed, and virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
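A minimal sketch of the property-based subsetting step described above, using SQLite as a stand-in for the ORACLE back end; the table and column names (compounds, mw, clogp) and the filter thresholds are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect("compounds.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS compounds (id TEXT, smiles TEXT, mw REAL, clogp REAL)"
)

def select_subset(max_mw=500.0, max_clogp=5.0):
    """Return ids of compounds passing simple physicochemical filters."""
    cur = conn.execute(
        "SELECT id FROM compounds WHERE mw <= ? AND clogp <= ?",
        (max_mw, max_clogp),
    )
    return [row[0] for row in cur.fetchall()]
```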
Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen
2016-06-01
We examined the consequences of implementing Web accessibility guidelines for nondisabled users. Although there are Web accessibility guidelines for people with disabilities available, they are rarely used in practice, partly due to the fact that practitioners believe that such guidelines provide no benefits, or even have negative consequences, for nondisabled people, who represent the main user group of Web sites. Despite these concerns, there is a lack of empirical research on the effects of current Web accessibility guidelines on nondisabled users. Sixty-one nondisabled participants used one of three Web sites differing in levels of accessibility (high, low, and very low). Accessibility levels were determined by following established Web accessibility guidelines (WCAG 2.0). A broad methodological approach was used, including performance measures (e.g., task completion time) and user ratings (e.g., perceived usability). A high level of Web accessibility led to better performance (i.e., task completion time and task completion rate) than low or very low accessibility. Likewise, high Web accessibility improved user ratings (i.e., perceived usability, aesthetics, workload, and trustworthiness) compared to low or very low Web accessibility. There was no difference between the very low and low Web accessibility conditions for any of the outcome measures. Contrary to some concerns in the literature and among practitioners, high conformance with Web accessibility guidelines may provide benefits to users without disabilities. The findings may encourage more practitioners to implement WCAG 2.0 for the benefit of users with disabilities and nondisabled users. © 2016, Human Factors and Ergonomics Society.
Developing an electronic system to manage and track emergency medications.
Hamm, Mark W; Calabrese, Samuel V; Knoer, Scott J; Duty, Ashley M
2018-03-01
The development of a Web-based program to track and manage emergency medications with radio frequency identification (RFID) is described. At the Cleveland Clinic, medication kit restocking records and dispense locations were historically documented using a paper record-keeping system. The Cleveland Clinic investigated options to replace the paper-based tracking logs with a Web-based program that could track the real-time location and inventory of emergency medication kits. Vendor collaboration with a board of pharmacy (BOP) compliance inspector and pharmacy personnel resulted in the creation of a dual barcoding system using medication and pocket labels. The Web-based program was integrated with a Cleveland Clinic-developed asset tracking system using active RFID tags to give the real-time location of the medication kit. The Web-based program and the asset tracking system allowed identification of kits nearing expiration or containing recalled medications. Conversion from a paper-based system to a Web-based program began in October 2013. After 119 days, data were evaluated to assess the success of the conversion. Pharmacists spent an average of 27 minutes per day approving medication kits during the postimplementation period versus 102 minutes daily using the paper-based system, representing a 74% decrease in pharmacist time spent on this task. Prospective reports are generated monthly to allow the manager to assess the expected workload and adjust staffing for the next month. Implementation of a BOP-approved Web-based system for managing and tracking emergency medications with RFID integration decreased pharmacist review time, minimized compliance risk, and increased access to real-time data. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
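A hedged sketch of the expiry and recall checks such a tracking program performs; the data layout and the 30-day horizon are hypothetical, and the real system additionally resolves each kit's location from active RFID tags.

```python
from datetime import date, timedelta

recalled_lots = {"LOT17"}  # hypothetical recall list

def kits_needing_attention(kits, horizon_days=30, today=None):
    """Flag kits that are near expiration or contain recalled lots."""
    today = today or date.today()
    cutoff = today + timedelta(days=horizon_days)
    flagged = []
    for kit in kits:
        expiring = [lot for lot, exp in kit["lots"].items() if exp <= cutoff]
        recalled = [lot for lot in kit["lots"] if lot in recalled_lots]
        if expiring or recalled:
            flagged.append((kit["kit_id"], kit["location"], expiring, recalled))
    return flagged

kits = [{"kit_id": "K-101", "location": "ED-3",
         "lots": {"LOT42": date(2018, 4, 1), "LOT17": date(2019, 1, 1)}}]
print(kits_needing_attention(kits, today=date(2018, 3, 15)))
```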
A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.
Huang, Chih-Yuan; Wu, Cheng-Hung
2016-08-31
The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.
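A sketch of what such a uniform tasking interface could look like from the client side, loosely following the SensorThings tasking entities (TaskingCapability, Task, taskingParameters); the service root and parameter names are assumptions, not the standard's exact schema.

```python
import requests

BASE = "http://example.org/v1.0"  # hypothetical service root

def create_task(tasking_capability_id, parameters):
    """POST a task against a device's advertised tasking capability."""
    payload = {
        "TaskingCapability": {"@iot.id": tasking_capability_id},
        "taskingParameters": parameters,
    }
    r = requests.post(f"{BASE}/Tasks", json=payload, timeout=10)
    r.raise_for_status()
    return r.json()

# The call shape stays the same regardless of manufacturer, e.g.:
# create_task(1, {"power": "on"})      # a lamp
# create_task(2, {"setpoint_c": 21})   # a thermostat
```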
Psychological Dynamics of Adolescent Satanism.
ERIC Educational Resources Information Center
Moriarty, Anthony R.; Story, Donald W.
1990-01-01
Attempts to describe the psychological processes that predispose an individual to adopt a Satanic belief system. Describes processes in terms of child-parent relationships and the developmental tasks of adolescence. Proposes a model called the web of psychic tension to represent the process of Satanic cult adoption. Describes techniques for…
New tools and methods for direct programmatic access to the dbSNP relational database.
Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
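A minimal sketch of a task-oriented query against a local MySQL copy of dbSNP such as the one these tools install; dbSNP table and column names vary by build, so the names below (SNP, snp_id) and the credentials are assumptions for illustration.

```python
import pymysql

conn = pymysql.connect(host="localhost", user="dbsnp", password="secret",
                       database="dbsnp_human")  # hypothetical local install

def fetch_snp(snp_id):
    """Look up one SNP record by its numeric rs identifier."""
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM SNP WHERE snp_id = %s", (snp_id,))
        return cur.fetchone()

print(fetch_snp(334))  # rs334, as an example query
```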
A wirelessly programmable actuation and sensing system for structural health monitoring
NASA Astrophysics Data System (ADS)
Long, James; Büyüköztürk, Oral
2016-04-01
Wireless sensor networks promise to deliver low-cost, low-power and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high-level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
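A hedged sketch of submitting a Python-specified acquisition task to the network through the gateway server; the gateway URL, the request format and the node-side acquire() built-in are assumptions, since the paper defines its own interfaces.

```python
import requests

TASK_SOURCE = """
samples = acquire(channel=0, rate_hz=100, seconds=10)  # node-side built-in (assumed)
result = sum(s * s for s in samples) / len(samples)    # in-network mean-square energy
"""

def submit_task(node_id, source, sleep_s=60):
    """Send a Python task to one node via the gateway's web application."""
    payload = {"node": node_id, "code": source, "sleep_s": sleep_s}
    r = requests.post("http://gateway.local/api/tasks", json=payload, timeout=5)
    r.raise_for_status()
    return r.json()["task_id"]
```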
Vishnyakova, Dina; Pasche, Emilie; Ruch, Patrick
2012-01-01
We report on the integration of an automatic text categorization pipeline, called ToxiCat (Toxicogenomic Categorizer), that we developed to perform biomedical document classification and prioritization in order to speed up the curation of the Comparative Toxicogenomics Database (CTD). The task can basically be described as a binary classification task, where a scoring function is used to rank a selected set of articles; components of a question-answering system are then used to extract CTD-specific annotations from the ranked list. The ranking function is generated using a Support Vector Machine that combines three main modules: an information retrieval engine for MEDLINE (EAGLi), a gene normalization service (NormaGene) developed for a previous BioCreative campaign, and a set of answering components and entity recognizers for diseases and chemicals. The main components of the pipeline are publicly available both as a web application and as web services. The specific integration performed for the BioCreative competition is available via a web user interface at http://pingu.unige.ch:8080/Toxicat.
PolarHub: A Global Hub for Polar Data Discovery
NASA Astrophysics Data System (ADS)
Li, W.
2014-12-01
This paper reports the outcome of an NSF project developing PolarHub, a large-scale web crawler that automatically discovers distributed polar datasets exposed as OGC web services (OWS) in cyberspace. PolarHub is a machine robot; its goal is to visit as many webpages as possible to find those containing information about polar OWS, extract this information, and store it in the backend data repository. This is a very challenging task given the huge volume of webpages on the Web. Four unique features make PolarHub distinctive from earlier crawler solutions: (1) multi-task, multi-user, multi-thread support for the crawling tasks; (2) extensive use of the thread pool and Data Access Object (DAO) design patterns to separate persistent data storage from business logic and achieve high extensibility of the crawler tool; (3) a pattern-matching-based, customizable crawling algorithm supporting the discovery of multiple types of geospatial web services; and (4) a universal and portable client-server communication mechanism combining server-push and client-pull strategies for enhanced asynchronous processing. A series of experiments was conducted to identify the impact of crawling parameters on overall system performance, and the geographical distribution pattern of all PolarHub-identified services is demonstrated. We expect this work to make a major contribution to the field of geospatial information retrieval and geospatial interoperability, to bridge the gap between data providers and data consumers, and to accelerate polar science by enhancing the accessibility and reusability of polar data.
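A much-simplified sketch of the pattern-matching idea: scan a fetched page for links that look like OGC web service endpoints. The regular expression is illustrative only; the real crawler is multi-task, multi-user and multi-threaded.

```python
import re
import urllib.request

# Match URLs that advertise an OGC capabilities document or service type.
OWS_PATTERN = re.compile(
    r'https?://[^\s"\'<>]+(?:request=getcapabilities|service=(?:wms|wfs|wcs|csw))[^\s"\'<>]*',
    re.IGNORECASE,
)

def find_ows_endpoints(page_url):
    """Return the unique OGC-looking service URLs found in one web page."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    return sorted({m.group(0) for m in OWS_PATTERN.finditer(html)})
```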
BIRCH: a user-oriented, locally-customizable, bioinformatics system.
Fristensky, Brian
2007-02-09
Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
NASA Astrophysics Data System (ADS)
Cao, Y. B.; Hua, Y. X.; Zhao, J. X.; Guo, S. M.
2013-11-01
With China's rapid economic development and growing comprehensive national strength, border work has become a long-term and important task in China's diplomatic work. How to implement rapid plotting, real-time sharing and mapping of surrounding affairs is of great significance for government policy makers and diplomatic staff. At present, however, existing boundary information systems suffer from several problems: updating geospatial data requires a heavy workload, plotting tools are seriously lacking, and geographic events are difficult to share. These problems have seriously hampered the smooth development of border tasks. The development and progress of Geographic Information System technology, and especially of Web GIS, offer the possibility of solving the above problems. This paper adopts a four-layer B/S architecture which, supported by the Google Maps service and its free API, with its openness, ease of use, sharing characteristics and high-resolution imagery, is used to design and implement a surrounding-transaction plotting and management system based on the web development technologies ASP.NET, C# and Ajax. The system can provide decision support for government policy makers, as well as real-time plotting and sharing of surrounding information for diplomatic staff. Practice has proved that the system has good usability and strong real-time performance.
NASA Astrophysics Data System (ADS)
Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.
2017-12-01
The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
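As a flavor of the declarative workflow described above, the following sketch renders a large point dataset as a Datashader-aggregated layer over a web map tile source; the input file and its lon/lat column names are assumptions about the data.

```python
import pandas as pd
import holoviews as hv
import geoviews as gv
from holoviews.operation.datashader import datashade

hv.extension("bokeh")

df = pd.read_csv("points.csv")                 # millions of rows are fine
points = gv.Points(df, kdims=["lon", "lat"])   # declare, don't plot
# Tiles plus an aggregated raster: the browser only ever receives a
# fixed-size image, however many points sit behind it.
overlay = gv.tile_sources.CartoLight * datashade(points)
hv.save(overlay, "map.html")                   # shareable interactive page
```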
New Directions in the NOAO Observing Proposal System
NASA Astrophysics Data System (ADS)
Gasson, David; Bell, Dave
For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.
Chalil Madathil, Kapil; Greenstein, Joel S
2017-11-01
Collaborative virtual reality-based systems have integrated high fidelity voice-based communication, immersive audio and screen-sharing tools into virtual environments. Such three-dimensional collaborative virtual environments can mirror the collaboration among usability test participants and facilitators when they are physically collocated, potentially enabling moderated usability tests to be conducted effectively when the facilitator and participant are located in different places. We developed a virtual collaborative three-dimensional remote moderated usability testing laboratory and employed it in a controlled study to evaluate the effectiveness of moderated usability testing in a collaborative virtual reality-based environment with two other moderated usability testing methods: the traditional lab approach and Cisco WebEx, a web-based conferencing and screen sharing approach. Using a mixed methods experimental design, 36 test participants and 12 test facilitators were asked to complete representative tasks on a simulated online shopping website. The dependent variables included the time taken to complete the tasks; the usability defects identified and their severity; and the subjective ratings on the workload index, presence and satisfaction questionnaires. Remote moderated usability testing methodology using a collaborative virtual reality system performed similarly in terms of the total number of defects identified, the number of high severity defects identified and the time taken to complete the tasks with the other two methodologies. The overall workload experienced by the test participants and facilitators was the least with the traditional lab condition. No significant differences were identified for the workload experienced with the virtual reality and the WebEx conditions. However, test participants experienced greater involvement and a more immersive experience in the virtual environment than in the WebEx condition. The ratings for the virtual environment condition were not significantly different from those for the traditional lab condition. The results of this study suggest that participants were productive and enjoyed the virtual lab condition, indicating the potential of a virtual world based approach as an alternative to conventional approaches for synchronous usability testing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Differences and Similarities in Information Seeking: Children and Adults as Web Users.
ERIC Educational Resources Information Center
Bilal, Dania; Kirby, Joe
2002-01-01
Analyzed and compared the success and information seeking behaviors of seventh grade science students and graduate students in using the Yahooligans! Web search engine. Discusses cognitive, affective, and physical behaviors during a fact-finding task, including searching, browsing, and time to complete the task; navigational styles; and focus on…
Conesa, David; López-Quílez, Antonio; Martínez-Beneito, Miguel Angel; Miralles, María Teresa; Verdejo, Francisco
2009-07-29
The early identification of influenza outbreaks has become a priority in public health practice. A large variety of statistical algorithms for the automated monitoring of influenza surveillance have been proposed, but most of them require not only a lot of computational effort but also operation of sometimes not-so-friendly software. In this paper, we introduce FluDetWeb, an implementation of a prospective influenza surveillance methodology based on a client-server architecture with a thin (web-based) client application design. Users can introduce and edit their own data consisting of a series of weekly influenza incidence rates. The system returns the probability of being in an epidemic phase (via e-mail if desired). When the probability is greater than 0.5, it also returns the probability of an increase in the incidence rate during the following week. The system also provides two complementary graphs. This system has been implemented using free statistical software (R and WinBUGS), a web server environment for Java code (Tomcat) and a software module created by us (Rdp) responsible for managing internal tasks; the software package MySQL has been used to construct the database management system. The implementation is available on-line from: http://www.geeitema.org/meviepi/fludetweb/. The ease of use of FluDetWeb and its on-line availability can make it a valuable tool for public health practitioners who want to obtain information about the probability that their system is in an epidemic phase. Moreover, the architecture described can also be useful for developers of systems based on computationally intensive methods.
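The reporting rule stated above is simple enough to pin down in a few lines; a sketch follows, with the caveat that the probabilities themselves come from the Bayesian model run in R/WinBUGS, not from this stub.

```python
def weekly_report(p_epidemic, p_increase_next_week):
    """Apply FluDetWeb's stated rule: report the epidemic-phase probability,
    and add the probability of next week's increase only when it exceeds 0.5."""
    report = {"p_epidemic": p_epidemic}
    if p_epidemic > 0.5:
        report["p_increase_next_week"] = p_increase_next_week
    return report

print(weekly_report(0.72, 0.61))
# {'p_epidemic': 0.72, 'p_increase_next_week': 0.61}
```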
Software Project Management and Measurement on the World-Wide-Web (WWW)
NASA Technical Reports Server (NTRS)
Callahan, John; Ramakrishnan, Sudhaka
1996-01-01
We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.
Harvey, Eric; Séguin, Annie; Nozais, Christian; Archambault, Philippe; Gravel, Dominique
2013-01-01
Understanding the impacts of species extinctions on the functioning of food webs is a challenging task because of the complexity of ecological interactions. We report the impacts of experimental species extinctions on the functioning of two food webs of freshwater and marine systems. We used a linear model to partition the variance among the multiple components of the diversity effect (linear group richness, nonlinear group richness, and identity). The identity of each functional group was the best explaining variable of ecosystem functioning for both systems. We assessed the contribution of each functional group in multifunctional space and found that, although the effect of functional group varied across ecosystem functions, some functional groups shared common effects on functions. This study is the first experimental demonstration that functional identity dominates the effects of extinctions on ecosystem functioning, suggesting that generalizations are possible despite the inherent complexity of interactions.
Secure web book to store structural genomics research data.
Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo
2003-01-01
Recently established collaborative structural genomics programs aim at significantly accelerating the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and the subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system powered by MySQL as the database engine and the Apache HTTP server as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and the subsequent protein structure analysis, using BCLIMS.
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
Design work for a photovoltaic module, fabricated using single-crystal silicon dendritic web sheet material, resulted in the identification of a surface treatment for the module glass superstrate which improved module efficiencies. A final solar module environmental test, a simulated hailstone impact test, was conducted on full-size module superstrates to verify that the module's tempered glass superstrate can withstand specified hailstone impacts near the corners and edges of the module. Process sequence design work on the metallization process, selective liquid dopant investigation, dry processing, and antireflective/photoresist application technique tasks, as well as the optimum thickness for Ti/Pd, are discussed. A noncontact cleaning method for raw web cleaning was identified, and antireflective and photoresist coatings for the dendritic webs were selected. The design of a cell string conveyor, an interconnect feed system, and a rolling ultrasonic spot bonding head, along with the identification of the optimal commercially available programmable control system, are also discussed. An economic analysis to assess cost goals of the process sequence is also given.
Design of a RESTful web information system for drug prescription and administration.
Bianchi, Lorenzo; Paganelli, Federica; Pettenati, Maria Chiara; Turchi, Stefano; Ciofi, Lucia; Iadanza, Ernesto; Giuli, Dino
2014-05-01
Drug prescription and administration processes strongly impact on the occurrence of risks in medical settings for they can be sources of adverse drug events (ADEs). A properly engineered use of information and communication technologies has proven to be a promising approach to reduce these risks. In this study, we propose PHARMA, a web information system which supports healthcare staff in the secure cooperative execution of drug prescription, transcription and registration tasks. PHARMA allows the easy sharing and management of documents containing drug-related information (i.e., drug prescriptions, medical reports, screening), which is often inconsistent and scattered across different information systems and heterogeneous organization domains (e.g., departments, other hospital facilities). PHARMA enables users to access such information in a consistent and secure way, through the adoption of REST and web-oriented design paradigms and protocols. We describe the implementation of the PHARMA prototype, and we discuss the results of the usability evaluation that we carried out with the staff of a hospital in Florence, Italy.
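A hedged sketch of what a REST-style resource layout for prescriptions can look like under the design paradigm the paper adopts; the routes, fields and in-memory store are illustrative, not the actual PHARMA interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
prescriptions = {}  # in-memory stand-in for the shared document store

@app.post("/patients/<patient_id>/prescriptions")
def create_prescription(patient_id):
    """Prescribe: add a drug-prescription document for one patient."""
    rx = request.get_json()
    rx_id = str(len(prescriptions) + 1)
    prescriptions[rx_id] = {"patient": patient_id, **rx, "status": "prescribed"}
    return jsonify(id=rx_id), 201

@app.get("/prescriptions/<rx_id>")
def read_prescription(rx_id):
    """Transcription and registration steps read the same addressable resource."""
    return jsonify(prescriptions[rx_id])
```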
Online Hydrologic Impact Assessment Decision Support System using Internet and Web-GIS Capability
NASA Astrophysics Data System (ADS)
Choi, J.; Engel, B. A.; Harbor, J.
2002-05-01
Urban sprawl and the corresponding land use change from lower-intensity uses, such as agriculture and forests, to higher-intensity uses, including high-density residential and commercial, have various long- and short-term environmental impacts on ground water recharge, water pollution, and storm water drainage. A web-based Spatial Decision Support System (SDSS) for long-term hydrologic impact modeling and analysis was developed. The system combines a hydrologic model, databases, web-GIS capability and HTML user interfaces to create a comprehensive hydrologic analysis system. The hydrologic model estimates daily direct runoff using the NRCS Curve Number technique and annual nonpoint source pollution loading by an event mean concentration approach. This is supported by a rainfall database with over 30 years of daily rainfall for the continental US. A web-GIS interface and a robust web-based watershed delineation capability were developed to simplify the spatial data preparation task that is often a barrier to hydrologic model operation. The web-GIS supports browsing of map layers including hydrologic soil groups, roads, counties, streams, lakes and railroads, as well as on-line watershed delineation for any geographic point the user selects with a simple mouse click. The watershed delineation results can also be used to generate data for the hydrologic and water quality models available in the DSS. The system is already being used by city and local government planners for hydrologic impact evaluation of land use change from urbanization, and can be found at http://pasture.ecn.purdue.edu/~watergen/hymaps. It can assist local community, city and watershed planners, and even professionals, when they are examining the impacts of land use change on water resources: they can estimate the hydrologic impact of possible land use changes using readily available data supported through the Internet. This system provides a cost-effective approach to serve potential users who require easy-to-use tools.
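For reference, a worked sketch of the NRCS Curve Number runoff calculation the model uses, in US customary units (rainfall and runoff in inches); the 0.2 initial-abstraction ratio is the conventional choice.

```python
def nrcs_runoff(p_in, cn):
    """Daily direct runoff (inches) from rainfall p_in (inches) and curve number cn."""
    s = 1000.0 / cn - 10.0   # potential maximum retention after runoff begins
    ia = 0.2 * s             # initial abstraction (conventional ratio)
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# 2 inches of rain on a developed area with CN = 85:
print(round(nrcs_runoff(2.0, 85), 2))  # about 0.8 inches of direct runoff
```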
Virtual Reality Training System for Anytime/Anywhere Acquisition of Surgical Skills: A Pilot Study.
Zahiri, Mohsen; Booton, Ryan; Nelson, Carl A; Oleynikov, Dmitry; Siu, Ka-Chun
2018-03-01
This article presents a hardware/software simulation environment suitable for anytime/anywhere surgical skills training. It blends the advantages of physical hardware and task analogs with the flexibility of virtual environments. This is further enhanced by a web-based implementation of training feedback accessible to both trainees and trainers. Our training system provides a self-paced and interactive means to attain proficiency in basic tasks that could potentially be applied across a spectrum of trainees from first responder field medical personnel to physicians. This results in a powerful training tool for surgical skills acquisition relevant to helping injured warfighters.
Socio-contextual Network Mining for User Assistance in Web-based Knowledge Gathering Tasks
NASA Astrophysics Data System (ADS)
Rajendran, Balaji; Kombiah, Iyakutti
Web-based Knowledge Gathering (WKG) is a specialized and complex information-seeking task carried out by many users on the web for their various learning and decision-making requirements. We construct a contextual semantic structure by observing the actions of users engaged in a WKG task, in order to understand their task and requirements. We also build a knowledge warehouse in the form of a master Semantic Link Network (SLX) that accommodates and assimilates all the contextual semantic structures. This master SLX, which is a socio-contextual network, is then mined to provide contextual inputs to current users through their agents. We validated our approach through experiments and analyzed the benefits to users in terms of resource exploration and time saved. The results are positive enough to motivate us to implement the approach at a larger scale.
High-performance web viewer for cardiac images
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sergio S.
2004-04-01
With the advent of digital devices for medical diagnosis, the use of conventional film in radiology has decreased, so the management and handling of medical images in digital format has become an important and critical task. In cardiology, for example, the main difficulty is displaying dynamic images with the color palette and frame rate used during acquisition by cath, angio and echo systems. Another difficulty is handling large images in the memory of any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. The application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and offers a set of resources for viewing still and dynamic images. It can access image files from local disks or a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, export of DICOM images to other image formats, and many others, all within a pleasant, user-friendly interface delivered inside a web browser by means of a Java application. This approach offers advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.
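As a point of reference for the data such a viewer must handle, the sketch below reads a multi-frame (cine) DICOM file with the pydicom library (assuming pydicom and NumPy are installed) and recovers the frame count and playback rate; the file path is hypothetical, and the frame-rate attribute may be absent in practice, so a fallback is used.

```python
import pydicom  # assumes a multi-frame (cine) DICOM file; the path is hypothetical

ds = pydicom.dcmread("angio_run.dcm")

frames = ds.pixel_array                 # shape (n_frames, rows, cols) for cine runs
n_frames = int(getattr(ds, "NumberOfFrames", 1))

# The acquisition frame rate drives playback speed; the attribute may be
# absent, so fall back to a conservative default.
fps = float(getattr(ds, "RecommendedDisplayFrameRate", 15))

print(f"{n_frames} frames at {fps} fps, matrix {frames.shape[-2:]}")
```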
NASA Astrophysics Data System (ADS)
Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.
2017-01-01
Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but many problems remain. This final project proposes a summarization method based on a document index graph. The method adapts the PageRank and HITS formulas, originally used to score web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by using a document index graph with TextRank and HITS to improve the quality of the automatically generated summary.
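For the PageRank half of that idea, a minimal TextRank-style sketch is shown below: sentences become graph nodes, a similarity function supplies edge weights, and PageRank selects the summary sentences. The Jaccard word-overlap similarity here is a simple stand-in for the paper's document-index-graph scoring.

```python
import networkx as nx

def textrank_summary(sentences, similarity, k=2):
    """Rank sentences by PageRank over a sentence-similarity graph and
    return the top k in original order (a minimal TextRank sketch)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            w = similarity(sentences[i], sentences[j])
            if w > 0:
                g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:k])
    return [sentences[i] for i in top]

def word_overlap(a, b):
    """Jaccard overlap of word sets: a simple stand-in for the paper's
    document-index-graph similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

doc = ["Automatic summarization extracts core information.",
       "Graph ranking scores sentences like web pages.",
       "PageRank and HITS were designed to rank web pages."]
print(textrank_summary(doc, word_overlap))
```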
ERIC Educational Resources Information Center
Akayuure, Peter; Apawu, Jones
2015-01-01
The study was designed to engage prospective mathematics teachers in creating web learning modules. The aim was to examine the mathematical task and perceived pedagogical usability of the modules for mathematics instructions in Ghana. The study took place at University of Education, Winneba. Classes of 172 prospective mathematics teachers working…
Web-Based Seamless Migration for Task-Oriented Mobile Distance Learning
ERIC Educational Resources Information Center
Zhang, Degan; Li, Yuan-chao; Zhang, Huaiyu; Zhang, Xinshang; Zeng, Guangping
2006-01-01
As a new computing paradigm, pervasive computing will meet the requirement that anybody may obtain services anywhere and at any time; task-oriented seamless migration is one of its applications. The function of seamless mobility is clearly suitable for mobile services, such as mobile Web-based learning. In this…
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Unger, Stephen; Ames, Troy; Frye, Stuart; Chien, Steve; Cappelaere, Pat; Tran, Danny; Derezinski, Linda; Paules, Granville
2007-01-01
This paper will describe the progress of a 3-year research award from the NASA Earth Science Technology Office (ESTO) that began October 1, 2006, in response to a NASA Announcement of Research Opportunity on the topic of sensor webs. The key goal of this research is to prototype an interoperable sensor architecture that will enable interoperability between a heterogeneous set of space-based, Unmanned Aerial System (UAS)-based and ground-based sensors. Among the key capabilities being pursued are the ability to automatically discover and task the sensors via the Internet and to automatically discover and assemble the necessary science processing algorithms into workflows in order to transform the sensor data into valuable science products. Our first set of sensor web demonstrations will prototype science products useful in managing wildfires and will use such assets as the Earth Observing 1 spacecraft, managed out of NASA/GSFC, a UAS-based instrument, managed out of Ames, and automated ground weather stations, managed by the Forest Service. We are also collaborating with some of the other ESTO awardees to expand this demonstration and create synergy between our research efforts. Finally, we are making use of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite of standards and some Web 2.0 capabilities to leverage emerging technologies and standards. This research will demonstrate and validate a path for rapid, low-cost sensor integration that is not tied to a particular system and can thus absorb new assets in an easily evolvable, coordinated manner. This in turn will help facilitate the United States' contribution to the Global Earth Observation System of Systems (GEOSS), as agreed by the U.S. and 60 other countries at the third Earth Observation Summit held in February 2005.
Hahn, Harry; Henry, Judith; Chacko, Sara; Winter, Ashley; Cambou, Mary C
2010-01-01
Screening and tracking subjects and data management in clinical trials require significant investments in manpower that can be reduced through the use of web-based systems. To support a validation trial of various dietary assessment tools that required multiple clinic visits and eight repeats of online assessments, we developed an interactive web-based system to automate all levels of management of a biomarker-based clinical trial. The "Energetics System" was developed to support 1) the work of the study coordinator in recruiting, screening and tracking subject flow, 2) the need of the principal investigator to review study progress, and 3) continuous data analysis. The system was designed to automate web-based self-screening into the trial. It supported scheduling tasks and triggered tailored messaging for late and non-responders. For the investigators, it provided real-time status overviews of all subjects, created electronic case reports, supported data queries and prepared analytic data files. Encryption and multi-level password protection were used to ensure data privacy. The system was programmed iteratively and required six months of a web programmer's time along with active team engagement. In this study, the system's enhancements in recruitment speed and efficiency and in data collection quality outweighed the initial investment. Web-based systems have the potential to streamline recruitment and the day-to-day management of clinical trials in addition to improving efficiency and quality. Because of their added value, they should be considered for trials of moderate size or complexity. Grant support: NIH funded R01CA105048. PMID:19925884
Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J
2014-01-01
The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions of a second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ © The Author(s) 2014. Published by Oxford University Press.
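The integration pattern the track tested is easy to picture. In the hedged sketch below, a pipeline POSTs a BioC-encoded document to a remote NER service and receives annotated BioC back; the endpoint URL and document content are hypothetical, and each participating group's actual service address and behavior differed.

```python
import requests  # the URL is hypothetical; real services were registered per group

# The pipeline POSTs a BioC-encoded article and receives BioC back with
# gene/chemical/disease annotations added by the remote NER service.
bioc_doc = """<?xml version="1.0"?>
<collection><document><id>PMID-1</id>
  <passage><offset>0</offset>
    <text>Aspirin reduces TNF expression in liver disease.</text>
  </passage>
</document></collection>"""

resp = requests.post(
    "https://ner.example.org/bioc/annotate",   # hypothetical endpoint
    data=bioc_doc.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text)  # BioC XML with <annotation> elements for recognized terms
```

Because both request and response are plain BioC over HTTP, the caller needs no knowledge of the tool behind the service, which is the interoperability point the track set out to demonstrate.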
Biowep: a workflow enactment portal for bioinformatics applications.
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-03-08
The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows; therefore, they are not viable for the majority of researchers, who lack these skills. A portal enabling such researchers to profit from the new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve the automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.
An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.
Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya
2015-01-01
Web service has become the technology of choice for service-oriented computing to meet the interoperability demands in web applications. In the Internet era, the exponential addition of web services nominates "quality of service" as an essential parameter for discriminating among web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time to generate composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
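A generic weighted-sum sketch of preference-based QoS ranking is shown below; the attribute names, weights, and min-max normalization are illustrative assumptions, not the paper's exact UPWSR formulation.

```python
# A minimal weighted-sum sketch of preference-based QoS ranking; the
# attribute names, weights, and normalization are illustrative.
candidates = {
    "svcA": {"response_ms": 120, "availability": 0.99,  "cost": 0.05},
    "svcB": {"response_ms": 300, "availability": 0.999, "cost": 0.01},
}
weights = {"response_ms": 0.5, "availability": 0.3, "cost": 0.2}
higher_is_better = {"response_ms": False, "availability": True, "cost": False}

def normalize(attr, value):
    """Min-max normalize one QoS attribute, flipping cost-type attributes."""
    vals = [c[attr] for c in candidates.values()]
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return 1.0
    x = (value - lo) / (hi - lo)
    return x if higher_is_better[attr] else 1.0 - x

def score(svc):
    """User-preference-weighted sum of normalized QoS attributes."""
    return sum(w * normalize(a, candidates[svc][a]) for a, w in weights.items())

best = max(candidates, key=score)
print(best, round(score(best), 3))
```

With these weights, the fast-but-pricier service wins; shifting weight from response time toward cost flips the ranking, which is how user preferences discriminate among functionally equivalent services.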
Modeling the customer in electronic commerce.
Helander, M G; Khalid, H M
2000-12-01
This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandise. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandise, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.
HCLS 2.0/3.0: health care and life sciences data mashup using Web 2.0/3.0.
Cheung, Kei-Hoi; Yip, Kevin Y; Townsend, Jeffrey P; Scotch, Matthew
2008-10-01
We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies.
The Organizational Role of Web Services
ERIC Educational Resources Information Center
Mitchell, Erik
2011-01-01
The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…
Web-Based Inquiry Learning: Facilitating Thoughtful Literacy with WebQuests
ERIC Educational Resources Information Center
Ikpeze, Chinwe H.; Boyd, Fenice B.
2007-01-01
An action research study investigated how the multiple tasks found in WebQuests facilitate fifth-grade students' literacy skills and higher order thinking. Findings indicate that WebQuests are most successful when activities are carefully selected and systematically delivered. Implications for teaching include the necessity for adequate planning,…
Spaceport Command and Control System Support Software Development
NASA Technical Reports Server (NTRS)
Brunotte, Leonard
2016-01-01
The Spaceport Command and Control System (SCCS) is a project developed and used by NASA at Kennedy Space Center to control and monitor the Space Launch System (SLS) at the time of its launch. One integral subteam under SCCS is assigned to the development of a data set building application to be used both on the launch pad and in the Launch Control Center (LCC) at the time of launch. This web application was developed in Ruby on Rails, a web framework using the Ruby object-oriented programming language, by a team of approximately 15 employees. Because this application is such a huge undertaking with many facets and iterations, there were a few areas in which work could be more easily organized and expedited. As an intern working with this team, I was charged with the task of writing web applications that fulfilled this need, creating a virtual and highly customizable whiteboard to allow engineers to keep track of build iterations and their status. Additionally, I developed a knowledge capture web application wherein any engineer or contractor within SCCS could ask a question, answer an existing question, or leave a comment on any question or answer, similar to Stack Overflow.
Reliable file sharing in distributed operating system using web RTC
NASA Astrophysics Data System (ADS)
Dukiya, Rajesh
2017-12-01
Since the evolution of the distributed operating system, the distributed file system has come to be an important part of the operating system. P2P is a reliable approach to file sharing in a distributed operating system. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other load-related tasks. A P2P network can be a temporary connection, where a group of computers connected by USB (Universal Serial Bus) ports transfer files or enable disk sharing. Currently, P2P requires a special network designed in a P2P way. Nowadays, browsers have a big influence on our lives. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.
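Whatever the transport (a WebRTC data channel via a library such as aiortc, or a browser-side channel), large files must be streamed as bounded messages because data channels cap individual message sizes. Below is a transport-agnostic sketch assuming only a `send(bytes)` callable; the chunk size and framing are illustrative choices, not a WebRTC requirement.

```python
# Library-agnostic sketch: a file is streamed as fixed-size chunks through
# any `send(bytes)` callable (e.g., a WebRTC data channel's send method).
CHUNK = 16 * 1024  # 16 KiB per message (illustrative)

def send_file(path, send):
    with open(path, "rb") as f:
        seq = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            # A tiny sequence-number header lets the receiver reorder/ack chunks.
            send(seq.to_bytes(4, "big") + chunk)
            seq += 1
    send(b"EOF")  # sentinel marking end of transfer

# Example with a stand-in transport that just collects messages:
sent = []
send_file(__file__, sent.append)
print(len(sent), "messages")
```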
UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies
NASA Astrophysics Data System (ADS)
Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.
2007-12-01
Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources, including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and formatting inconsistencies make it difficult for users to take full advantage of this important information resource, so the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies, like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA-Echo, Global Change Master Directory, etc.) are providing the basis for a new approach to data management and processing, where web services are increasingly designed to serve computer-to-computer communication without human interaction and complex analysis can be carried out over distributed computer resources interconnected via cyber infrastructure. The UNH Earth System Data Collaborative is designed to utilize these emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data, ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, the underlying data architecture shares common components for data mining and data dissemination via web services. Perhaps the most unique element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services. While the complexity of the IT infrastructure needed to perform complex computations is continuously increasing, scientists are often forced to spend a considerable amount of time solving basic data management and preprocessing tasks and dealing with low-level computational design problems like the parallelization of model codes. Our modeling infrastructure is designed to take care of the bulk of the common tasks found in complex earth system models, such as I/O handling, computational domain and time management, and parallel execution of the modeling tasks. The modeling infrastructure allows scientists to focus on the numerical implementation of the physical processes on single computational objects (typically grid cells), while the framework takes care of the preprocessing of input data, establishing the data exchange between computational objects, and executing the science code. In our presentation, we will discuss the key concepts of our modeling infrastructure. We will demonstrate the integration of our modeling framework with data services offered by the UNH Earth System Data Collaborative via web interfaces. We will lay out the road map to turn our prototype modeling environment into a truly community framework for a wide range of earth system scientists and environmental managers.
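The computer-to-computer access pattern mentioned above is concrete in OPeNDAP: a client opens a remote dataset URL as if it were a local file, and only the requested subset crosses the network. A minimal sketch with the netCDF4 Python library follows; the endpoint URL, variable name, and indices are hypothetical, and the library must be built with DAP support.

```python
from netCDF4 import Dataset  # requires a netCDF4 build with OPeNDAP (DAP) support

# Hypothetical OPeNDAP endpoint: remote subsetting means only the
# requested slice crosses the network, not the whole archive.
url = "http://data.example.edu/opendap/climate/precip_daily.nc"
ds = Dataset(url)

precip = ds.variables["precip"]      # e.g., dimensions (time, lat, lon)
january = precip[0:31, 100, 200]     # server-side subset of one grid cell
print(january.mean())
ds.close()
```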
Project Assessment Skills Web Application
NASA Technical Reports Server (NTRS)
Goff, Samuel J.
2013-01-01
The purpose of this project is to utilize Ruby on Rails to create a web application that will replace a spreadsheet used to keep track of training courses and tasks. The goal is a fast and easy-to-use web application that allows users to track progress on training courses. The application will allow users to update and keep track of all of the training required of them, with training courses organized by group and by user for easier readability. This will also allow group leads and administrators to get a sense of how everyone is progressing in training. Currently, updating and finding information in the spreadsheet is a long and tedious task. By upgrading to a web application, finding and updating information, as well as adding new training courses and tasks, will be easier than ever. Accessing the data will also be much easier, in that users just have to go to a website and log in with NDC credentials rather than request the relevant spreadsheet from its holder. In addition to Ruby on Rails, I will be using JavaScript, CSS, and jQuery to add functionality and ease of use to the web application. The application will include a number of features that help update and track training progress. For example, one feature will track the progress of a whole group of users, to show how the group as a whole is progressing. Another feature will assign tasks to either a user or a group of users. Together these will create a user-friendly and functional web application.
Persuasive system design does matter: a systematic review of adherence to web-based interventions.
Kelders, Saskia M; Kok, Robin N; Ossebaard, Hans C; Van Gemert-Pijnen, Julia E W C
2012-11-14
Although web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions that helps explain and understand adherence. This study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention. We conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence. We included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to it. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p=.004), setup (p<.001), updates (p<.001), frequency of interaction with a counselor (p<.001), the system (p=.003) and peers (p=.017), duration (F=6.068, p=.004), adherence (F=4.833, p=.010) and the number of primary task support elements (F=5.631, p=.005). Our final regression model explained 55% of the variance in adherence. In this model, an RCT study as opposed to an observational study, increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence. Using intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere.
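As an illustration of the final model's form, the sketch below fits an ordinary least squares regression of adherence on four of the reported predictors. The data are simulated (83 rows, one per intervention, matching the review's unit of analysis), so the coefficients and R² are illustrative only, not a re-analysis of the study's data.

```python
import numpy as np
import statsmodels.api as sm  # illustrative sketch; the data below are simulated

rng = np.random.default_rng(0)
n = 83  # one row per intervention, mirroring the review's unit of analysis
X = np.column_stack([
    rng.integers(0, 2, n),    # RCT (1) vs observational (0)
    rng.integers(0, 11, n),   # counselor interactions per intervention
    rng.integers(1, 8, n),    # intended usage frequency (times per week)
    rng.integers(0, 8, n),    # dialogue-support elements employed (0-7)
])
beta = np.array([10.0, 2.0, 3.0, 2.5])
adherence = 20 + X @ beta + rng.normal(0, 10, n)   # percent adhering

model = sm.OLS(adherence, sm.add_constant(X)).fit()
print(model.rsquared)   # share of variance explained, cf. the reported 55%
```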
Development of intelligent semantic search system for rubber research data in Thailand
NASA Astrophysics Data System (ADS)
Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut
2017-10-01
Rubber production in Thailand has increased not only because of strong demand from the world market but also through the stimulus of the Thai Government's replanting program from 1961 onwards. With the continuous growth of rubber research data on the Web, searching for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analyzing domain knowledge, 2) developing ontologies, and 3) developing an intelligent semantic search system, so that research data curated in trusted digital repositories may be shared among the wider Thailand rubber research community.
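A minimal sketch of ontology-backed retrieval follows, using the rdflib library with a toy vocabulary; the namespace, classes, and properties are invented for illustration and are not the project's actual rubber ontology.

```python
import rdflib  # toy ontology inline; the real system's vocabulary is assumed

g = rdflib.Graph()
g.parse(data="""
@prefix rub: <http://example.org/rubber#> .
rub:RRIM600 a rub:RubberClone ; rub:resistantTo rub:LeafBlight .
rub:BPM24  a rub:RubberClone ; rub:resistantTo rub:RootDisease .
""", format="turtle")

# Semantic search: retrieve clones by the disease they resist, rather
# than by keyword match on the page text.
q = """
PREFIX rub: <http://example.org/rubber#>
SELECT ?clone WHERE { ?clone rub:resistantTo rub:LeafBlight . }
"""
for row in g.query(q):
    print(row.clone)   # -> http://example.org/rubber#RRIM600
```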
The Power of Portals: Personalizing the Web To Build Community.
ERIC Educational Resources Information Center
Page, Dan
2001-01-01
Describes how the director of information systems for the computing and communications department and a team of software developers embarked on the task of creating and refining portal technology for a broad community of users with various relationships to the University of Washington. Discusses focus on individual needs; authentication, the…
An Ontology Infrastructure for an E-Learning Scenario
ERIC Educational Resources Information Center
Guo, Wen-Ying; Chen, De-Ren
2007-01-01
Selecting appropriate learning services for a learner from a large number of heterogeneous knowledge sources is a complex and challenging task. This article illustrates and discusses how Semantic Web technologies such as RDF [resource description framework] and ontology can be applied to e-learning systems to help the learner in selecting an…
One EPA Web Principles that Guide Content Development
The principles of One EPA Web can be applied to better meet the needs and expectations of our audiences, fit their information-seeking behavior, and help them accomplish tasks. Learn about the five paths forward for transforming web content.
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this period focused on transforming the technology behind the document's delivery system. The web application was transformed from a flat-file retrieval system to a dynamic database-access approach. The new methodology utilizes JavaServer Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers gain easier access to data sheets and lab and field guides. The new online guide also includes updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application, including instructions for modifying the JSP templates and managing database content; it was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website: 333 study site photo images were added to the GLOBE database and posted on the web during this same period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
New tools and methods for direct programmatic access to the dbSNP relational database
Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
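Once the local MySQL copy is installed, task-oriented queries reduce to ordinary parameterized SQL. The sketch below uses the pymysql client; the database, table, and column names are hypothetical stand-ins for the custom task tables described above.

```python
import pymysql  # assumes the locally installed dbSNP MySQL instance; names are illustrative

conn = pymysql.connect(host="localhost", user="dbsnp", password="...",
                       database="dbsnp_human")

# Query one of the small task-oriented custom tables described above
# (table/column names here are hypothetical stand-ins).
with conn.cursor() as cur:
    cur.execute(
        "SELECT snp_id, chrom, position FROM task_snp_location "
        "WHERE chrom=%s AND position BETWEEN %s AND %s",
        ("1", 1_000_000, 1_100_000),
    )
    for snp_id, chrom, pos in cur.fetchall():
        print(f"rs{snp_id}\t{chrom}:{pos}")

conn.close()
```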
Academic Library Web Sites: Current Practice and Future Directions
ERIC Educational Resources Information Center
Detlor, Brian; Lewis, Vivian
2006-01-01
To address competitive threats, academic libraries are encouraged to build robust Web sites personalized to learning and research tasks. Through an evaluation of Association of Research Libraries (ARL)-member Web sites, we suggest how library Web sites should evolve and reflect upon the impacts such recommendations may have on academic libraries…
NASA Technical Reports Server (NTRS)
1981-01-01
Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.
Task Force on the Future of Military Health Care
2007-12-01
Service programs are supported by the Military Health System Population Health Portal (MHSPHP), a centralized, secure, web-based population health tool. HEDIS metrics are tracked using the MHS Population Health Portal and reported in the service systems and the Tri-Service Business Planning tool.
Software Engineering Improvement Plan
NASA Technical Reports Server (NTRS)
2006-01-01
In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Frank T. Alex
2007-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U.S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
2006-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U.S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-second month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
Advanced Technology Systems, Inc. (ATS) was contracted by the U.S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its eleventh month of Phase 1 development activities.
Web-based data delivery services in support of disaster-relief applications
Jones, Brenda K.; Risty, Ron R.; Buswell, M.
2003-01-01
The U.S. Geological Survey Earth Resources Observation Systems Data Center responds to emergencies in support of various government agencies for human-induced and natural disasters. This response consists of satellite tasking and acquisitions, satellite image registration, disaster-extent map analysis and creation, base image provision and support, Web-based mapping services for product delivery, and predisaster and postdisaster data archiving. The emergency response staff are on call 24 hours a day, 7 days a week, and have access to many commercial and government satellite and aerial photography tasking authorities. They have access to value-added data processing and photographic laboratory services for off-hour emergency requests. They work with various Federal agencies for preparedness planning, which includes providing base imagery. These data may include digital elevation models, hydrographic models, base satellite images, vector data layers such as roads, aerial photographs, and other predisaster data. These layers are incorporated into a Web-based browser and data delivery service that is accessible either to the general public or to select customers. As usage declines, the data are moved to a postdisaster nearline archive that is still accessible, but not in real time.
Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy
2011-10-01
Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support, with many commercial, open-source, and proprietary systems in use. New web-based tools are available that can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. Our goal was to demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five-site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials, and a major goal was to develop a database that participants could access from computers in their home communities for direct data entry. We discuss the selection process that led to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, a template-based web portal creation application, and a web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system had successfully been used by 125 participants in 5 communities (who completed 536 sets of assessment questionnaires), 8 community therapists, and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools, as each is only loosely affiliated with the others, creating some inefficiency; the system is best suited to investigations with a single data source, e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small, distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.
Web-based routing assistance tool to reduce pavement damage by overweight and oversize vehicles.
DOT National Transportation Integrated Search
2016-10-30
This report documents the results of a completed project titled Web-Based Routing Assistance Tool to Reduce Pavement Damage by Overweight and Oversize Vehicles. The tasks involved developing a Web-based GIS routing assistance tool and evaluating ...
Macy, Jonathan T; Chassin, Laurie; Presson, Clark C; Sherman, Jeffrey W
2015-02-01
Implicit attitudes have been shown to predict smoking behaviors. Therefore, an important goal is the development of interventions to change these attitudes. This study assessed the effects of a web-based intervention on implicit attitudes toward smoking and receptivity to smoking-related information. Smokers (N = 284) were recruited to a two-session web-based study. In the first session, baseline data were collected. Session two contained the intervention, which consisted of assignment to the experimental or control version of an approach-avoidance task and assignment to an anti-smoking or control public service announcement (PSA), and post-intervention measures. Among smokers with less education and with plans to quit, implicit attitudes were more negative for those who completed the approach-avoidance task. Smokers with more education who viewed the anti-smoking PSA and completed the approach-avoidance task spent more time reading smoking-related information. An approach-avoidance task is a potentially feasible strategy for changing implicit attitudes toward smoking and increasing receptivity to smoking-related information.
Bouzguenda, Lotfi; Turki, Manel
2014-04-01
This paper shows how the combined use of agent and web services technologies can help design an architectural style for a dynamic medical Cross-Organizational Workflow (COW) management system. Medical COW aims at supporting collaboration between several autonomous and possibly heterogeneous medical processes distributed over different organizations (hospitals, clinics, or laboratories). Dynamic medical COW refers to occasional cooperation between these health organizations, free of structural constraints, where the medical partners involved and their number are not pre-defined. More precisely, this paper proposes a new architectural style based on agent and web services technologies to deal with two key coordination issues of dynamic COW: finding medical partners and negotiating between them. It also shows how the proposed architecture for a dynamic medical COW management system can connect to a multi-agent system coupling a Clinical Decision Support System (CDSS) with Computerized Prescriber Order Entry (CPOE). The idea is to assist health professionals such as doctors, nurses, and pharmacists with decision-making tasks, such as determining diagnoses or analyzing patient data, without interrupting their clinical processes, so that they can act in a coherent way and give care to the patient.
Strategies for Adapting WebQuests for Students with Learning Disabilities
ERIC Educational Resources Information Center
Skylar, Ashley A.; Higgins, Kyle; Boone, Randall
2007-01-01
WebQuests are gaining popularity as teachers explore using the Internet for guided learning activities. A WebQuest involves students working on a task that is broken down into clearly defined steps. Students often work in groups to actively conduct the research. This article suggests a variety of methods for adapting WebQuests for students with…
A Web Browser Interface to Manage the Searching and Organizing of Information on the Web by Learners
ERIC Educational Resources Information Center
Li, Liang-Yi; Chen, Gwo-Dong
2010-01-01
Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…
At-sea demonstration of RF sensor tasking using XML over a worldwide network
NASA Astrophysics Data System (ADS)
Kellogg, Robert L.; Lee, Tom; Dumas, Diane; Raggo, Barbara
2003-07-01
As part of an At-Sea Demonstration for Space and Naval Warfare Command (SPAWAR, PMW-189), a prototype RF sensor for signal acquisition and direction finding queried and received tasking via a secure worldwide Automated Data Network System (ADNS). Using Extensible Markup Language (XML) constructs, both mission and signal tasking were available for push and pull Battlespace management. XML tasking was received by the USS Cape St George (CG-71) during an exercise along the Gulf Coast of the US from a test facility at SPAWAR, San Diego, CA. Although only one ship was used in the demonstration, the intent of the software initiative was to show that a network of different RF sensors on different platforms with different capabilities could be tasked by a common web agent. A sensor software agent interpreted the XML task to match the sensor's capability. Future improvements will focus on enlarging the domain of mission tasking and incorporating report management.
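To make the tasking flow concrete, the following is a minimal, hypothetical Python sketch of composing an XML tasking message and of a sensor agent matching it against its own capabilities, as the abstract describes. The element names are illustrative assumptions, not the actual ADNS/SPAWAR message schema.

    import xml.etree.ElementTree as ET

    def build_task(mission_id, signal_band, priority):
        # Compose a tasking message; tag names are hypothetical.
        task = ET.Element("SensorTask", attrib={"missionId": mission_id})
        ET.SubElement(task, "SignalBand").text = signal_band
        ET.SubElement(task, "Priority").text = str(priority)
        return ET.tostring(task, encoding="unicode")

    def can_accept(xml_text, sensor_bands):
        # A sensor software agent interprets the task and checks
        # whether the requested band matches its capability.
        task = ET.fromstring(xml_text)
        return task.findtext("SignalBand") in sensor_bands

    msg = build_task("GULF-EX-01", "HF", priority=2)
    print(msg)
    print(can_accept(msg, sensor_bands={"HF", "VHF"}))  # True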
ERIC Educational Resources Information Center
Hayashi, Yugo
2015-01-01
The present study investigates web-based learning activities of undergraduate students who generate explanations about a key concept taught in a large-scale classroom. The study used an online system with a Pedagogical Conversational Agent (PCA) that asked students to explain the key concept from different points of view and provided suggestions and…
Utilizing Peer Interactions to Promote Learning through a Web-Based Peer Assessment System
ERIC Educational Resources Information Center
Li, Lan; Steckelberg, Allen L.; Srinivasan, Sribhagyam
2008-01-01
Peer assessment is an instructional strategy in which students evaluate each other's performance for the purpose of improving learning. Despite its accepted use in higher education, researchers and educators have reported concerns such as students' time on task, the impact of peer pressure on the accuracy of marking, and students' lack of ability…
Suicide Note Sentiment Classification: A Supervised Approach Augmented by Web Data
Xu, Yan; Wang, Yue; Liu, Jiahua; Tu, Zhuowen; Sun, Jian-Tao; Tsujii, Junichi; Chang, Eric
2012-01-01
Objective: To create a sentiment classification system for the Fifth i2b2/VA Challenge Track 2, which can identify thirteen subjective categories and two objective categories. Design: We developed a hybrid system using Support Vector Machine (SVM) classifiers with augmented training data from the Internet. Our system consists of three types of classification-based subsystems: the first uses spanning n-gram features for subjective categories, the second uses bag-of-n-gram features for objective categories, and the third uses pattern matching for infrequent or subtle emotion categories. The spanning n-gram features are selected by a feature selection algorithm that leverages an emotional corpus from weblogs. Special normalization of objective sentences is generalized with shallow parsing and external web knowledge. We utilize three sources of web data: the weblog of LiveJournal, which helps to improve the feature selection; the eBay List, which assists in special normalization of the information and instructions categories; and The Suicide Project website, which provides unlabeled data with properties similar to suicide notes. Measurements: The performance is evaluated by the overall micro-averaged precision, recall, and F-measure. Result: Our system achieved an overall micro-averaged F-measure of 0.59. Happiness_peacefulness had the highest F-measure of 0.81. We were ranked second best out of 26 competing teams. Conclusion: Our results indicated that classifying fine-grained sentiments at sentence level is a non-trivial task. It is effective to divide categories into different groups according to their semantic properties. In addition, our system performance benefits from external knowledge extracted from publicly available web data gathered for other purposes; performance can be further enhanced when more training data are available. PMID:22879758
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
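As an illustration of the "relational algebra extended with iteration" model, here is a small Python sketch that computes a transitive closure by iterating a join to a fixpoint, the kind of cyclic dataflow MyriaL expresses natively. This is a sketch of the computational model only, not Myria's actual API or syntax.

    def transitive_closure(edges):
        # edges: a set of (src, dst) tuples, i.e., a binary relation
        reachable = set(edges)
        frontier = set(edges)
        while frontier:  # iterate to a fixpoint, like a recursive query
            # join the frontier with the base relation:
            # (a, b) joined with (b, c) derives (a, c)
            derived = {(a, c)
                       for (a, b) in frontier
                       for (b2, c) in edges if b == b2}
            frontier = derived - reachable  # keep only newly derived facts
            reachable |= frontier
        return reachable

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))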
Learn how EPA's three web user personas (Information Consumer, Information Intermediary, and Information Interpreter) can help you identify appropriate top audiences and top tasks for a topic or web area.
Pietrobon, Ricardo; Shah, Anand; Kuo, Paul; Harker, Matthew; McCready, Mariana; Butler, Christeen; Martins, Henrique; Moorman, C T; Jacobs, Danny O
2006-07-27
Although regulatory compliance in academic research is enforced by law to ensure high quality and safety to participants, its implementation is frequently hindered by cost and logistical barriers. In order to decrease these barriers, we have developed a Web-based application, Duke Surgery Research Central (DSRC), to monitor and streamline the regulatory research process. The main objective of DSRC is to streamline regulatory research processes. The application was built using a combination of paper prototyping for system requirements and Java as the primary language for the application, in conjunction with the Model-View-Controller design model. The researcher interface was designed for simplicity so that it could be used by individuals with different computer literacy levels. Analogously, the administrator interface was designed with functionality as its primary goal. DSRC facilitates the exchange of regulatory documents between researchers and research administrators, allowing for tasks to be tracked and documents to be stored in a Web environment accessible from an Intranet. Usability was evaluated using formal usability tests and field observations. Formal usability results demonstrated that DSRC presented good speed, was easy to learn and use, had a functionality that was easily understandable, and a navigation that was intuitive. Additional features implemented upon request by initial users included: extensive variable categorization (in contrast with data capture using free text), searching capabilities to improve how research administrators could search an extensive number of researcher names, warning messages before critical tasks were performed (such as deleting a task), and confirmatory e-mails for critical tasks (such as completing a regulatory task). The current version of DSRC was shown to have excellent overall usability properties in handling research regulatory issues. It is hoped that its release as an open-source application will promote improved and streamlined regulatory processes for individual academic centers as well as larger research networks.
Semantically Enriching the Search System of a Music Digital Library
NASA Astrophysics Data System (ADS)
de Juan, Paloma; Iglesias, Carlos
Traditional search systems are usually based on keywords, a very simple and convenient mechanism to express a need for information. This is the most popular way of searching the Web, although it is not always an easy task to accurately summarize a natural language query in a few keywords. Working with keywords means losing the context, which is the only thing that can help us deal with ambiguity. This is the biggest problem of keyword-based systems. Semantic Web technologies seem a perfect solution to this problem, since they make it possible to represent the semantics of a given domain. In this chapter, we present three projects, Harmos, Semusici and Cantiga, whose aim is to provide access to a music digital library. We will describe two search systems, a traditional one and a semantic one, developed in the context of these projects and compare them in terms of usability and effectiveness.
An Architecture for Autonomic Web Service Process Planning
NASA Astrophysics Data System (ADS)
Moore, Colm; Xue Wang, Ming; Pahl, Claus
Web service composition is a technology that has received considerable attention in recent years. Languages and tools to aid in the process of creating composite Web services have received particular attention. Web service composition is the process of linking single Web services together in order to accomplish more complex tasks. One area of Web service composition that has not received as much attention is dynamic error handling and re-planning, enabling autonomic composition. Given a repository of service descriptions and a task to complete, it is possible for AI planners to automatically create a plan that will achieve this goal. If, however, a service in the plan is unavailable or erroneous, the plan will fail. Motivated by this problem, this paper suggests autonomous re-planning as a means to overcome dynamic problems. Our solution involves automatically recovering from faults and creating a context-dependent alternate plan. We present an architecture that serves as a basis for the central activities of autonomous composition, monitoring, and fault handling.
Judging nursing information on the world wide web.
Cader, Raffik
2013-02-01
The World Wide Web is increasingly becoming an important source of information for healthcare professionals. However, identifying reliable information among Web sites of uncertain authority can pose a challenge to nurses seeking to inform their practice. A study, using grounded theory, was undertaken in two phases to understand how qualified nurses judge the quality of Web nursing information. Data were collected using semistructured interviews and focus groups. An explanatory framework that emerged from the data showed that the judgment process involved the application of forms of knowing and modes of cognition to a range of evaluative tasks and depended on the nurses' critical skills, the time available, and the level of Web information cues. This article mainly focuses on the six evaluative tasks relating to assessing user-friendliness, outlook, and authority of Web pages, and relationship to nursing practice; appraising the nature of evidence; and applying cross-checking strategies. The implications of these findings for nurse practitioners and publishers of nursing information are significant.
Resnick, Marc L; Sanchez, Julian
2004-01-01
As companies increase the quantity of information they provide through their Web sites, it is critical that content is structured with an appropriate architecture. However, resource constraints often limit the ability of companies to apply all Web design principles completely. This study quantifies the effects of two major information architecture principles in a controlled study that isolates the incremental effects of organizational scheme and labeling on user performance and satisfaction. Sixty participants with a wide range of Internet and on-line shopping experience were recruited to complete a series of shopping tasks on a prototype retail shopping Web site. User-centered labels provided a significant benefit in performance and satisfaction over labels obtained through company-centered methods. User-centered organization did not result in improved performance except when the label quality was poor. Significant interactions suggest specific guidelines for allocating resources in Web site design. Applications of this research include the design of Web sites for any commercial application, particularly E-commerce.
NASA Astrophysics Data System (ADS)
Golick, Douglas A.; Heng-Moss, Tiffany M.; Steckelberg, Allen L.; Brooks, David. W.; Higley, Leon G.; Fowler, David
2013-08-01
The purpose of the study was to determine whether undergraduate students receiving web-based instruction based on traditional, key-character, or classification instruction differed in their performance of insect identification tasks. All groups showed a significant improvement in insect identification on pre- and post-two-dimensional picture specimen quizzes. The study also determined that student performance was poorer on family-level identification tasks than on broader insect order and arthropod classification tasks. Finally, students erred significantly more often by misidentifying than by misspelling specimen names on prepared specimen quizzes. Results of this study support the conclusion that short web-based insect identification exercises can improve insect identification performance. Also included is a discussion of how these results can be used in teaching and in future research on biological identification.
de Souza, Edson Rufino; de Freitas, Sydney Fernandes
2012-01-01
At present, it is recognized that the Internet plays a key role in the universalization of opportunities in the society in which we live. For people with disabilities, content must be accessible on all websites, and the assistive technologies used must be adequate to the specific needs of people with disabilities. Dosvox is a free system developed at the Universidade Federal do Rio de Janeiro (UFRJ), specially designed for blind people and used by them to perform their tasks with computers. Previously, through exploratory research based on observation of the interaction of blind students with the Web, usability problems were identified in the Dosvox interface and in Webvox, the Web browser included in the system; these problems were related to the fact that the interface is designed in accordance with the mental model of the Information Technology professionals who built it. This study consolidates the problems identified in earlier phases of the research, links the results with Nielsen's usability heuristics, and proposes several improvements to Dosvox and its development process.
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research
Crump, Matthew J. C.; McDonnell, John V.; Gureckis, Todd M.
2013-01-01
Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiments differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, using participants recruited via AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory results and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers. PMID:23516406
Space webs based on rotating tethered formations
NASA Astrophysics Data System (ADS)
Palmerini, Giovanni B.; Sgubini, Silvano; Sabatini, Marco
2009-07-01
Several on-going studies indicate the interest in large, light orbiting structures, shaped as fish nets or webs: along the ropes of the web, small spacecraft can move like spiders to position and re-locate, at will, pieces of hardware devoted to specific missions. The concept can be considered an intermediate solution between the large monolithic structure, heavy and expensive to realize but easy to control, and the formation of satellites, where all system members are completely free and must manoeuvre in order to acquire a desired configuration. Instead, the advantage of having a "hard-but-light" link among the different grids lies in the partition of tasks among system components and in a possible overall reduction of control system complexity and cost. Unfortunately, there is no stable configuration for an orbiting, two-dimensional web made of light, flexible tethers, which cannot support compression forces. A possible solution is to make use of centrifugal forces to pull the net, with a reduced number of simple thrusters located at the tips of the tethers to initially acquire the required spin. In this paper a dynamic analysis of a simplified rotating web is performed, in order to evaluate the spinning velocity able to satisfy the requirement for the stability of the system. The adopted model superimposes simpler elements, each of them given by a tether (made up of a number of linear finite elements) connecting two end bodies accommodating the spinning thrusters. The combination of these "diameter-like" elements provides the web, shaped according to the specific requirements. The net is primarily considered as subject to Keplerian attraction and J2 and drag perturbations only, but its behaviour under thermal inputs is also investigated.
Twitter web-service for soft agent reporting in persistent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2010-04-01
Persistent surveillance is an intricate process requiring monitoring, gathering, processing, tracking, and characterization of many spatiotemporal events occurring concurrently. Data associated with events can be readily attained by networking of hard (physical) sensors. Sensors may have homogeneous or heterogeneous (hybrid) sensing modalities with different communication bandwidth requirements. Complementary to hard sensors are human observers, or "soft sensors," who can report occurrences of evolving events via different communication devices (e.g., texting, cell phones, emails, instant messaging, etc.) to the command and control center. However, networking human observers in an ad hoc way is a rather difficult task. In this paper, we present a Twitter web-service for soft agent reporting in persistent surveillance systems (called Web-STARS). The objective of this web-service is to rapidly aggregate multi-source human observations in hybrid sensor networks. With the availability of the Twitter social network, such a human networking concept can not only be realized for large-scale persistent surveillance systems (PSS), but can also be employed, with proper interfaces, to expedite rapid event reporting by human observers. The proposed technique is particularly suitable for large-scale persistent surveillance systems with distributed soft and hard sensor networks. The efficiency and effectiveness of the proposed technique are measured experimentally by conducting several simulated persistent surveillance scenarios. It is demonstrated that fusing information from hard and soft agents improves understanding of the common operating picture and enhances situational awareness.
Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-Based Interventions
Kok, Robin N; Ossebaard, Hans C; Van Gemert-Pijnen, Julia EWC
2012-01-01
Background Although web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black-box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions to help explain and understand adherence. Objective This study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention. Methods We conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence. Results We included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to the intervention. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p = .004), setup (p < .001), updates (p < .001), frequency of interaction with a counselor (p < .001), the system (p = .003) and peers (p = .017), duration (F = 6.068, p = .004), adherence (F = 4.833, p = .010) and the number of primary task support elements (F = 5.631, p = .005). Our final regression model explained 55% of the variance in adherence. In this model, a RCT study as opposed to an observational study, increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence. Conclusions Using intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere. PMID:23151820
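For readers who want the shape of the analysis, the following is a hedged Python sketch of the kind of multiple regression reported above: adherence predicted from study type, counselor interaction, intended usage, updates, and dialogue support. The variable names and toy data are illustrative stand-ins, not the authors' dataset.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-intervention data; columns mirror the predictors
    # named in the abstract.
    df = pd.DataFrame({
        "adherence":        [0.42, 0.55, 0.61, 0.38, 0.70, 0.50, 0.45, 0.66],
        "is_rct":           [1, 1, 0, 0, 1, 0, 1, 1],
        "counselor_freq":   [1, 2, 0, 0, 3, 1, 1, 2],    # contacts per week
        "intended_usage":   [1, 2, 1, 0.5, 2, 1, 1, 2],  # uses per week
        "update_freq":      [1, 1, 0.5, 0.25, 2, 1, 0.5, 1],
        "dialogue_support": [2, 3, 1, 0, 4, 2, 1, 3],    # element count
    })
    model = smf.ols("adherence ~ is_rct + counselor_freq + intended_usage"
                    " + update_freq + dialogue_support", data=df).fit()
    print(model.rsquared)  # cf. the 55% of variance explained in the paper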
Using NetCloak to develop server-side Web-based experiments without writing CGI programs.
Wolfe, Christopher R; Reyna, Valerie F
2002-05-01
Server-side experiments use the Web server, rather than the participant's browser, to handle tasks such as random assignment, eliminating inconsistencies with Java and other client-side applications. Heretofore, experimenters wishing to create server-side experiments have had to write programs to create common gateway interface (CGI) scripts in programming languages such as Perl and C++. NetCloak uses simple, HTML-like commands to create CGIs. We used NetCloak to implement an experiment on probability estimation. Measurements of time on task and participants' IP addresses assisted quality control. Without prior training, in less than 1 month, we were able to use NetCloak to design and create a Web-based experiment and to help graduate students create three Web-based experiments of their own.
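Since NetCloak's commands are proprietary, here is an equivalent server-side random-assignment step written as a plain Python CGI script, to show the logic that NetCloak's HTML-like commands encapsulate. The condition names are hypothetical.

    #!/usr/bin/env python3
    import random
    import time

    def assign_condition():
        # Random assignment happens on the server, so every browser
        # receives the same, consistent behavior.
        return random.choice(["high_anchor", "low_anchor"])

    def render_page(condition):
        t0 = int(time.time())  # server-side timestamp for time on task
        return ("Content-Type: text/html\r\n\r\n"
                f"<html><body data-cond='{condition}' data-t0='{t0}'>"
                f"<p>Instructions for the {condition} condition...</p>"
                "</body></html>")

    if __name__ == "__main__":
        print(render_page(assign_condition()), end="")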
Pastoral hermeneutics and the challenge of a global economy: care to the living human Web.
Louw, D J
2002-01-01
The author discusses the relationship between a pastoral hermeneutics and the current social context as determined by international communication and globalization. He explores the influence of telecommunications on the human quest for meaning and the implication of this for pastoral care and counseling. A paradigm shift is proposed in terms of care to the living human web. A pastoral assessment which interprets the undergirding philosophy and belief system of globalization and its influence on human dignity is suggested; and a pastoral ministry which takes up its prophetic task and voices the needs of people in terms of a "globalization from below" is explicated.
Dewan, Shaveta; Sibal, Anupam; Uberoi, R S; Kaur, Ishneet; Nayak, Yogamaya; Kar, Sujoy; Loria, Gaurav; Yatheesh, G; Balaji, V
2014-01-01
Creating and implementing processes to deliver quality care in compliance with accreditation standards is a challenging task, but even more daunting is sustaining these processes and systems. There is a need for frequent monitoring of the gap between the expected level of care and the level of care actually delivered, so as to achieve a consistent level of care. The Apollo Accreditation Program (AAP) was implemented as a web-based, single measurable dashboard to display, measure, and compare compliance levels for established standards of care in JCI-accredited hospitals every quarter, and resulted in an overall 15.5% improvement in compliance levels over one year.
2015-01-01
Methods, Assumptions, and Procedures: A search of the internet looking at web sites specializing in graphics, graphics engines, web browser applications, and games was conducted to …
Route Advising in a Dynamic Environment - A High-Tech Approach
NASA Astrophysics Data System (ADS)
Firdhous, M. F. M.; Basnayake, D. L.; Kodithuwakku, K. H. L.; Hatthalla, N. K.; Charlin, N. W.; Bandara, P. M. R. I. K.
Finding the optimal path between two locations in the Colombo city is not a straightforward task, because of the complex road system, heavy traffic jams, and similar factors. This paper presents a system to find the optimal driving direction between two locations within the Colombo city, considering road rules (one way, two way, or fully closed in both directions). The system contains three main modules: a core module, a web module, and a mobile module. Additionally, there are two user interfaces, one for normal users and the other for administrative users; both interfaces can be accessed using a web browser or a GPRS-enabled mobile phone. The system is developed based on Geographic Information System (GIS) technology. GIS is considered the best option to integrate hardware, software, and data for capturing, managing, analyzing, and displaying all forms of geographically referenced information. The core of the system is MapServer (MS4W), used along with other supporting technologies such as PostGIS, PostgreSQL, pgRouting, ASP.NET, and C#.
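A shortest-path request in this stack would typically go through pgRouting; the Python sketch below (using psycopg2) shows the general form of such a query. The table and column names are assumptions, though pgr_dijkstra itself is the standard pgRouting routine, and reverse_cost is the usual way one-way streets are modeled.

    import psycopg2

    def optimal_route(conn, source_node, target_node):
        sql = """
            SELECT seq, node, edge, cost
            FROM pgr_dijkstra(
                'SELECT gid AS id, source, target, cost, reverse_cost
                 FROM colombo_roads',  -- reverse_cost encodes one-way rules
                %s, %s, directed := true);
        """
        with conn.cursor() as cur:
            cur.execute(sql, (source_node, target_node))
            return cur.fetchall()

    conn = psycopg2.connect("dbname=routing user=gis")
    for step in optimal_route(conn, 101, 205):
        print(step)  # (sequence, node id, edge id, cost) per hop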
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
Optimized autonomous space in-situ sensor web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.
2010-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ network was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite.
Design of the Resources and Environment Monitoring Website in Kashgar
NASA Astrophysics Data System (ADS)
Huang, Z.; Lin, Q. Z.; Wang, Q. J.
2014-03-01
Despite the development of web geographical information systems (web GIS), many useful spatial analysis functions are ignored in system implementations. As Kashgar is rich in natural resources, it is of great significance to monitor the natural resource and environmental situation in the region. Therefore, making extensive use of spatial analysis, a resources and environment monitoring website for Kashgar was built. Functions for water, vegetation, and ice and snow extraction, task management, change assessment, and thematic mapping and reports based on TM remote sensing images were implemented in the website. The design of the website is presented in terms of the database management tier, the business logic tier, and the top-level presentation tier. The vital operations of the website are introduced and its general performance is evaluated.
Social customer relationship management: taking advantage of Web 2.0 and Big Data technologies.
Orenga-Roglá, Sergio; Chalmeta, Ricardo
2016-01-01
The emergence of Web 2.0 and Big Data technologies has allowed a new customer relationship strategy based on interactivity and collaboration called Social Customer Relationship Management (Social CRM) to be created. This enhances customer engagement and satisfaction. The implementation of Social CRM is a complex task that involves different organisational, human and technological aspects. However, there is a lack of methodologies to assist companies in these processes. This paper shows a novel methodology that helps companies to implement Social CRM, taking into account different aspects such as social customer strategy, the Social CRM performance measurement system, the Social CRM business processes, or the Social CRM computer system. The methodology was applied to one company in order to validate and refine it.
A Framework for the Systematic Collection of Open Source Intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D
2009-01-01
Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.
NASA Astrophysics Data System (ADS)
Fume, Kosei; Ishitani, Yasuto
2008-01-01
We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with their similarity to the model. The main feature of the proposed method consists of two aspects of semantics extraction from an input document: the semantics of terms are extracted by semantic pattern analysis, and implicit meanings of the document substructure are identified by a bottom-up text clustering technique focusing on the similarity of text-line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.
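As a generic illustration of categorizing a document by its similarity to externally defined category models, here is a minimal Python sketch using TF-IDF cosine similarity. It stands in for, and is much simpler than, the semantic pattern analysis and text-line clustering the authors describe; the category descriptions are hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    category_models = {  # externally defined, per-task category models
        "invoice": "invoice payment amount due billing total",
        "manual":  "installation guide steps configure usage settings",
        "news":    "announcement release company quarterly report",
    }
    names = list(category_models)
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(category_models[n] for n in names)

    def categorize(document):
        # Score the document against each category model and pick the best.
        sims = cosine_similarity(vec.transform([document]), matrix)[0]
        return names[sims.argmax()], float(sims.max())

    print(categorize("follow the guide to configure the settings"))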
STINGRAY: system for integrated genomic resources and analysis.
Wagner, Glauber; Jardim, Rodrigo; Tschoeke, Diogo A; Loureiro, Daniel R; Ocaña, Kary A C S; Ribeiro, Antonio C B; Emmel, Vanessa E; Probst, Christian M; Pitaluga, André N; Grisard, Edmundo C; Cavalcanti, Maria C; Campos, Maria L M; Mattoso, Marta; Dávila, Alberto M R
2014-03-07
The STINGRAY system has been conceived to ease the tasks of integrating, analyzing, annotating and presenting genomic and expression data from Sanger and Next Generation Sequencing (NGS) platforms. STINGRAY includes: (a) a complete and integrated workflow (more than 20 bioinformatics tools) ranging from functional annotation to phylogeny; (b) a MySQL database schema, suitable for data integration and user access control; and (c) a user-friendly graphical web-based interface that makes the system intuitive, facilitating the tasks of data analysis and annotation. STINGRAY proved to be an easy-to-use and complete system for analyzing sequencing data. While both Sanger and NGS platforms are supported, the system can be faster with Sanger data, since large NGS datasets could potentially slow down the MySQL database usage. STINGRAY is available at http://stingray.biowebdb.org and the open source code at http://sourceforge.net/projects/stingray-biowebdb/.
An Exploratory Study of School-Age Children's Use of a Heterogeneous Resource Site
ERIC Educational Resources Information Center
Holmes, Jason; Robins, David; Zhang, Yin; Salaba, Athena
2008-01-01
In this study, students' use of a new educational Web portal was evaluated with particular emphasis on searching and browsing strategies. The effects and implications of a federated search system are also discussed. Fifty-four students, ranging from 5th to 12th grade, were given five tasks to complete using the SchoolRooms interface. The tasks…
Assemble Collocation and Colligation in Chinese Writing Web Tools for New Immigrants in Taiwan
ERIC Educational Resources Information Center
Lu, Meg; Lin, Chien Hui; Chuang, Tsung Yen; Ku, Tsun; Tsai, Chia Min
2011-01-01
CSL (Chinese as a second language) learning is an urgent task in Taiwan, especially as more and more new immigrants arrive in Taiwan. However, only a few new immigrants manage to finish all the language courses. For this reason, this research intends to provide a training system to assist new immigrants in observing and learning the phrase…
An Analysis of Botnet Vulnerabilities
2007-06-01
Currently, the primary defense against botnets is prompt patching of vulnerable systems and antivirus software. Network monitoring can identify… IRCd software; none were identified during this effort… Bots are software agents designed to automatically perform tasks. Examples include web-spiders that catalog the Internet and bots found in popular online…
A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.
Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei
2013-10-08
Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the following three considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, a clinical data generator was implemented to automatically generate clinical data in the repository, and the generated data were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated, based on the clinical data generator, in experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine successfully retrieved the clinical data based on the query tasks formatted using GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (i.e., "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting," and "treatments for liver cancer") was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution times. The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks.
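To show what "flowchart-based querying" can look like operationally, here is a hedged Python sketch in which decision nodes route each patient record to a leaf label. The field names and thresholds are invented for illustration; they are not the GLIF3.5 encoding or the clinical criteria used in the study.

    def run_flowchart(node, record):
        # Descend through decision nodes until reaching a leaf label.
        while "label" not in node:
            node = node["yes"] if node["test"](record) else node["no"]
        return node["label"]

    # Hypothetical "degree of liver damage" flowchart.
    liver_damage = {
        "test": lambda r: r["bilirubin"] > 2.0,
        "yes": {"test": lambda r: r["albumin"] < 2.8,
                "yes": {"label": "severe"},
                "no":  {"label": "moderate"}},
        "no":  {"label": "mild"},
    }

    patients = [{"id": 1, "bilirubin": 3.1, "albumin": 2.5},
                {"id": 2, "bilirubin": 1.2, "albumin": 4.0}]
    for p in patients:
        print(p["id"], run_flowchart(liver_damage, p))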
GDSCalc: A Web-Based Application for Evaluating Discrete Graph Dynamical Systems
Elmeligy Abdelhamid, Sherif H.; Kuhlman, Chris J.; Marathe, Madhav V.; Mortveit, Henning S.; Ravi, S. S.
2015-01-01
Discrete dynamical systems are used to model various realistic systems in network science, from social unrest in human populations to regulation in biological networks. A common approach is to model the agents of a system as vertices of a graph, and the pairwise interactions between agents as edges. Agents are in one of a finite set of states at each discrete time step and are assigned functions that describe how their states change based on neighborhood relations. Full characterization of state transitions of one system can give insights into fundamental behaviors of other dynamical systems. In this paper, we describe a discrete graph dynamical systems (GDSs) application called GDSCalc for computing and characterizing system dynamics. It is an open access system that is used through a web interface. We provide an overview of GDS theory. This theory is the basis of the web application; i.e., an understanding of GDS provides an understanding of the software features, while abstracting away implementation details. We present a set of illustrative examples to demonstrate its use in education and research. Finally, we compare GDSCalc with other discrete dynamical system software tools. Our perspective is that no single software tool will perform all computations that may be required by all users; tools typically have particular features that are more suitable for some tasks. We situate GDSCalc within this space of software tools. PMID:26263006
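The underlying model is easy to state in code. Below is a minimal Python sketch of a synchronous graph dynamical system: binary vertex states updated by a local rule over neighbor states. The graph and the threshold rule are arbitrary examples, not anything specific to GDSCalc.

    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # adjacency lists

    def threshold_rule(own_state, neighbor_states, k=2):
        # Vertex is 1 at the next step when at least k of
        # {itself} union {neighbors} are currently 1.
        return 1 if own_state + sum(neighbor_states) >= k else 0

    def step(states):
        # Synchronous update: all vertices apply their rule at once.
        return {v: threshold_rule(states[v], [states[u] for u in graph[v]])
                for v in graph}

    states = {0: 1, 1: 0, 2: 1, 3: 0}
    for t in range(4):  # iterate the system map and watch the dynamics
        print(t, states)
        states = step(states)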
Creating Task-Centered Instruction for Web-Based Instruction: Obstacles and Solutions
ERIC Educational Resources Information Center
Gardner, Joel; Jeon, Tae
2010-01-01
Merrill proposes First Principles of Instruction, including a problem- or task-centered strategy for designing instruction. However, when the tasks or problems are ill-defined or complex, task-centered instruction can be difficult to design. We describe an online task-centered training at a land-grant university designed to train employees to use…
Drexel at TREC 2014 Federated Web Search Track
2014-11-01
… of its input RS results. Federated Web Search is the task of searching multiple search engines simultaneously and combining their results… The goal of resource selection (RS) is then, for a given query, to select only the most promising search engines from all those available… The sample set is built from the result pages of 149 search engines, using 4000 queries. As a part of the Vertical Selection task, search engines are…
Biotool2Web: creating simple Web interfaces for bioinformatics applications.
Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg
2006-01-01
Currently, many bioinformatics applications are being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).
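The core idea, generating a web form from an XML parameter description, is small enough to sketch. The Python below mimics it with hypothetical element names; Biotool2Web's actual XML schema (and its Perl implementation) will differ.

    import xml.etree.ElementTree as ET

    TOOL_XML = """
    <tool name="gc_content" script="gc.cgi">
      <param name="sequence" type="textarea" label="DNA sequence"/>
      <param name="window" type="text" label="Window size"/>
    </tool>
    """

    def xml_to_form(xml_text):
        # Read the tool description and emit an HTML form for its parameters.
        tool = ET.fromstring(xml_text)
        rows = []
        for p in tool.findall("param"):
            if p.get("type") == "textarea":
                field = f'<textarea name="{p.get("name")}"></textarea>'
            else:
                field = f'<input type="text" name="{p.get("name")}">'
            rows.append(f'<label>{p.get("label")}: {field}</label><br>')
        return (f'<form action="{tool.get("script")}" method="post">'
                + "".join(rows) + '<input type="submit"></form>')

    print(xml_to_form(TOOL_XML))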
Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites
NASA Technical Reports Server (NTRS)
Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng
2007-01-01
This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.
The Best of Two Worlds: Combining ITV and Web Quests To Strengthen Distance Learning.
ERIC Educational Resources Information Center
Mosby, Charmaine
This presentation describes an English graduate seminar in Local Color and Regionalism in American Literature at Western Kentucky University that was set up as an experimental hybrid course, i.e., roughly 60% face-to-face and 40% Web course (Web quest format). The focus is on the four tasks that comprised the Web quest segment of the course: (1) a…
Online versus offline: The Web as a medium for response time data collection.
Chetverikov, Andrey; Upravitelev, Philipp
2016-09-01
The Internet provides a convenient environment for data collection in psychology. Modern Web programming languages, such as JavaScript or Flash (ActionScript), facilitate complex experiments without the necessity of experimenter presence. Yet there is always a question of how much noise is added due to the differences between the setups used by participants and whether it is compensated for by increased ecological validity and larger sample sizes. This is especially a problem for experiments that measure response times (RTs), because they are more sensitive (and hence more susceptible to noise) than, for example, choices per se. We used a simple visual search task with different set sizes to compare laboratory performance with Web performance. The results suggest that although the locations (means) of RT distributions are different, other distribution parameters are not. Furthermore, the effect of experiment setting does not depend on set size, suggesting that task difficulty is not important in the choice of a data collection method. We also collected an additional online sample to investigate the effects of hardware and software diversity on the accuracy of RT data. We found that the high diversity of browsers, operating systems, and CPU performance may have a detrimental effect, though it can partly be compensated for by increased sample sizes and trial numbers. In sum, the findings show that Web-based experiments are an acceptable source of RT data, comparable to a common keyboard-based setup in the laboratory.
Identifying interactions between chemical entities in biomedical text.
Lamurias, Andre; Ferreira, João D; Couto, Francisco M
2014-10-23
Interactions between chemical compounds described in biomedical text can be of great importance to drug discovery and design, as well as pharmacovigilance. We developed a novel system, "Identifying Interactions between Chemical Entities" (IICE), to identify chemical interactions described in text. Kernel-based Support Vector Machines first identify the interactions and then an ensemble classifier validates and classifies the type of each interaction. This relation extraction module was evaluated with the corpus released for the DDI Extraction task of SemEval 2013, obtaining results comparable to state-of-the-art methods for this type of task. We integrated this module with our chemical named entity recognition module and made the whole system available as a web tool at www.lasige.di.fc.ul.pt/webtools/iice.
Identifying interactions between chemical entities in biomedical text.
Lamurias, Andre; Ferreira, João D; Couto, Francisco M
2014-12-01
Interactions between chemical compounds described in biomedical text can be of great importance to drug discovery and design, as well as pharmacovigilance. We developed a novel system, "Identifying Interactions between Chemical Entities" (IICE), to identify chemical interactions described in text. Kernel-based Support Vector Machines first identify the interactions and then an ensemble classifier validates and classifies the type of each interaction. This relation extraction module was evaluated with the corpus released for the DDI Extraction task of SemEval 2013, obtaining results comparable to state-of-the-art methods for this type of task. We integrated this module with our chemical named entity recognition module and made the whole system available as a web tool at www.lasige.di.fc.ul.pt/webtools/iice.
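As a rough illustration of the kernel-SVM step described in the two records above, the following sketch classifies sentences as describing a chemical interaction or not. The TF-IDF bag-of-words features and the toy training set are stand-ins; they are not IICE's actual kernels, features, or data.

    # Generic sentence-level interaction classifier (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    sentences = [
        "Aspirin increases the anticoagulant effect of warfarin.",
        "Ibuprofen may reduce the antiplatelet effect of aspirin.",
        "The patient was given paracetamol for fever.",
        "Vitamin C was administered daily.",
    ]
    labels = [1, 1, 0, 0]  # 1 = interaction described, 0 = no interaction

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVC(kernel="rbf"))
    clf.fit(sentences, labels)
    print(clf.predict(["Warfarin toxicity is potentiated by aspirin."]))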
Gil, Yolanda; Michel, Felix; Ratnakar, Varun; Read, Jordan S.; Hauder, Matheus; Duffy, Christopher; Hanson, Paul C.; Dugan, Hilary
2015-01-01
The Web was originally developed to support collaboration in science. Although scientists benefit from many forms of collaboration on the Web (e.g., blogs, wikis, forums, code sharing, etc.), most collaborative projects are coordinated over email, phone calls, and in-person meetings. Our goal is to develop a collaborative infrastructure for scientists to work on complex science questions that require multi-disciplinary contributions to gather and analyze data, that cannot be answered without significant coordination to synthesize findings, and that grow organically to accommodate new contributors as needed as the work evolves over time. Our approach is to develop an organic data science framework that is based on a task-centered organization of the collaboration, incorporates principles from the social sciences for successful on-line communities, and exposes an open science process. Our approach is implemented as an extension of a semantic wiki platform, and captures formal representations of task decomposition structures, relations between tasks and users, and other properties of tasks, data, and other relevant science objects. All these entities are captured through the semantic wiki user interface, represented as semantic web objects, and exported as linked data.
Phase 2 of the array automated assembly task for the low cost solar array project
NASA Technical Reports Server (NTRS)
Campbell, R. B.; Davis, J. R.; Ostroski, J. W.; Rai-Choudhury, P.; Rohatgi, A.; Seman, E. J.; Stapleton, R. E.
1979-01-01
The process sequence for the fabrication of dendritic web silicon into solar panels was modified to include aluminum back surface field formation. Plasma etching was found to be a feasible technique for pre-diffusion cleaning of the web. Several contacting systems were studied. The total plated Pd-Ni system was not compatible with the process sequence; however, the evaporated TiPd-electroplated Cu system was shown stable under life testing. Ultrasonic bonding parameters were determined for various interconnect and contact metals but the yield of the process was not sufficiently high to use for module fabrication at this time. Over 400 solar cells were fabricated according to the modified sequence. No sub-process incompatibility was seen. These cells were used to fabricate four demonstration modules. A cost analysis of the modified process sequence resulted in a selling price of $0.75/peak watt.
Clinician search behaviors may be influenced by search engine design.
Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul
2010-06-30
Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query; that is, these clinicians exhibited a "breadth-first" search behavior. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. This study provides evidence that different search engine designs are associated with different user search behaviors.
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: a crawler, a database and a user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between system and user. We have implemented this system as a pilot on an application server simulating the Web. Our system, acting as a spatial search engine, provides search capability over GML documents, and thus takes an important step toward improving the efficiency of search engines.
Dynamic user data analysis and web composition technique using big data
NASA Astrophysics Data System (ADS)
Soundarya, P.; Vanitha, M.; Sumaiya Thaseen, I.
2017-11-01
Building a reliable service-oriented system is more important than building a traditional standalone system given the unpredictability of Internet services, and it is also a challenging task. In the proposed system, fault tolerance is provided by a heuristic algorithm with two kinds of strategies: active and passive. User requirements are formulated as local and global constraints. Different services are deployed in the modification process: two bus reservation and two train reservation services are deployed along with a hotel reservation service. Users can choose either bus reservation service and specify their destination. If the destination is not available, an automatic backup to the other bus reservation service is carried out. If that service is also unavailable, the parallel train reservation service is initiated. Automatic hotel reservation is also initiated based on the mode and type of travel of the user.
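The failover behavior described above can be sketched as follows; the service stubs and the exception-based error signaling are invented placeholders rather than the paper's heuristic algorithm.

    # Sketch of the described failover: primary bus -> backup bus -> train.
    def reserve(primary, backup, parallel, destination):
        for service in (primary, backup):
            try:
                return service(destination)
            except RuntimeError:
                continue  # service failed or destination unavailable
        return parallel(destination)  # parallel train service as last resort

    def bus_a(dest): raise RuntimeError("destination not served")
    def bus_b(dest): raise RuntimeError("service unavailable")
    def train(dest): return f"train ticket to {dest}"

    print(reserve(bus_a, bus_b, train, "Chennai"))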
The WWW and Our Digital Heritage--The New Preservation Tasks of the Library Community.
ERIC Educational Resources Information Center
Mannerheim, Johan
This paper discusses the role of libraries in the preservation of World Wide Web publications. Topics addressed include: (1) the scope of Web preservation, including examples of projects that illustrate comprehensive and selective approaches; (2) the responsibility of Web preservation, including placing the responsibility on publishers and other…
Effect of font size, italics, and colour count on web usability.
Bhatia, Sanjiv K; Samal, Ashok; Rajan, Nithin; Kiviniemi, Marc T
2011-04-01
Web usability measures the ease of use of a website. This study attempts to find the effect of three factors - font size, italics, and colour count - on web usability. The study was performed using a set of tasks and developing a survey questionnaire. We performed the study using a set of human subjects, selected from the undergraduate students taking courses in psychology. The data computed from the tasks and survey questionnaire were statistically analysed to find if there was any effect of font size, italics, and colour count on the three web usability dimensions. We found that for the student population considered, there was no significant effect of font size on usability. However, the manipulation of italics and colour count did influence some aspects of usability. The subjects performed better for pages with no italics and high italics compared to moderate italics. The subjects rated the pages that contained only one colour higher than the web pages with four or six colours. This research will help web developers better understand the effect of font size, italics, and colour count on web usability in general, and for young adults, in particular.
Effect of font size, italics, and colour count on web usability
Samal, Ashok; Rajan, Nithin; Kiviniemi, Marc T.
2013-01-01
Web usability measures the ease of use of a website. This study attempts to find the effect of three factors – font size, italics, and colour count – on web usability. The study was performed using a set of tasks and developing a survey questionnaire. We performed the study using a set of human subjects, selected from the undergraduate students taking courses in psychology. The data computed from the tasks and survey questionnaire were statistically analysed to find if there was any effect of font size, italics, and colour count on the three web usability dimensions. We found that for the student population considered, there was no significant effect of font size on usability. However, the manipulation of italics and colour count did influence some aspects of usability. The subjects performed better for pages with no italics and high italics compared to moderate italics. The subjects rated the pages that contained only one colour higher than the web pages with four or six colours. This research will help web developers better understand the effect of font size, italics, and colour count on web usability in general, and for young adults, in particular. PMID:24358055
Geoinformation web-system for processing and visualization of large archives of geo-referenced data
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.
2010-12-01
A working model of an information-computational system aimed at scientific research in the area of climate change is presented. The system will allow processing and analysis of large archives of geophysical data obtained both from observations and from modeling. Accumulated experience in developing information-computational web systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al., 2007; Okladnikov et al., 2008; Titov et al., 2009). The functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present five archives of data are available for processing: the 1st and 2nd editions of the NCEP/NCAR Reanalysis, the ECMWF ERA-40 Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, and the NOAA-CIRES XX Century Global Reanalysis Version I. To provide data processing functionality, a modular computational kernel and a class library providing data access for computational modules were developed. Currently a set of computational modules for the climate change indices approved by WMO is available. A special module was also developed for visualizing results and writing them to Encapsulated PostScript, GeoTIFF and ESRI shapefiles. As a technological basis for the representation of cartographical information on the Internet, the GeoServer software conforming to OpenGIS standards is used. GIS functionality is integrated with web-portal software to provide a basis for developing the web portal as part of the geoinformation web system. Such a geoinformation web system is the next step in the development of applied information-telecommunication systems, offering specialists from various scientific fields unique opportunities to perform reliable analysis of heterogeneous geophysical data using approved computational algorithms. It will allow a wide range of researchers to work with geophysical data without specific programming knowledge and to concentrate on solving their specific tasks. The system would be of special importance for education in the climate change domain. This work is partially supported by RFBR grant #10-07-00547, SB RAS Basic Program Projects 4.31.1.5 and 4.31.2.7, and SB RAS Integration Projects 4 and 9.
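As an illustration of the kind of server-side computation such a system performs, here is a minimal numpy sketch of one WMO-approved climate change index, frost days (the annual count of days with minimum temperature below 0 °C), computed over a synthetic gridded archive; the real system reads reanalysis archives instead of random data.

    # Frost-days index over a fake daily-minimum-temperature grid.
    import numpy as np

    rng = np.random.default_rng(0)
    tmin = rng.normal(loc=5.0, scale=10.0, size=(365, 3, 4))  # degC, (day, lat, lon)

    frost_days = (tmin < 0.0).sum(axis=0)  # annual count per grid cell
    print(frost_days)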
Advances on Sensor Web for Internet of Things
NASA Astrophysics Data System (ADS)
Liang, S.; Bermudez, L. E.; Huang, C.; Jazayeri, M.; Khalafbeigi, T.
2013-12-01
'In much the same way that HTML and HTTP enabled the WWW, the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE), envisioned in 2001, will allow sensor webs to become a reality' [1]. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not a simple task. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. SWE standardizes web service interfaces, sensor descriptions and data encodings as building blocks for a Sensor Web. The SWE standards are now mature specifications (version 2.0) with approved OGC compliance test suites and tens of independent implementations. Many earth and space science organizations and government agencies are using the SWE standards to publish and share their sensors and observations. While SWE has been demonstrated to be very effective for scientific sensors, its complexity and computational overhead may not be suitable for resource-constrained tiny sensors. In June 2012, a new OGC Standards Working Group (SWG) was formed, called the Sensor Web Interface for Internet of Things (SWE-IoT) SWG. This SWG focuses on developing one or more OGC standards for resource-constrained sensors and actuators (e.g., Internet of Things devices) while leveraging the existing OGC SWE standards. In the near future, billions to trillions of small sensors and actuators will be embedded in real-world objects and connected to the Internet, facilitating a concept called the Internet of Things (IoT). By populating our environment with real-world sensor-based devices, the IoT is opening the door to exciting possibilities for a variety of application domains, such as environmental monitoring, transportation and logistics, urban informatics, smart cities, as well as personal and social applications. The current SWE-IoT development aims at modeling the IoT components and defining a standard web service that makes the observations captured by IoT devices easily accessible and allows users to task the actuators on the IoT devices. The SWE-IoT model links things with sensors and reuses the OGC Observations and Measurements (O&M) model to link sensors with features of interest and observed properties. Unlike most SWE standards, SWE-IoT defines a RESTful web interface for users to perform CRUD (i.e., create, read, update, and delete) functions on resources, including Things, Sensors, Actuators, Observations, Tasks, etc. Inspired by the OASIS Open Data Protocol (OData), the SWE-IoT web service provides multi-faceted queries, which means that users can query different entity collections and link from one entity to other related entities. This presentation will introduce the latest development of the OGC SWE-IoT standards. Potential applications and implications in Earth and Space science will also be discussed. [1] Mike Botts, Sensor Web Enablement White Paper, Open GIS Consortium, Inc., 2002.
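A hedged sketch of the REST/OData-style access pattern described above, written in the style of the OGC SensorThings API that grew out of the SWE-IoT work; the service root URL is a placeholder and the response layout assumes a conformant service.

    # Multi-faceted query: start from Things and expand related Datastreams.
    import requests

    BASE = "https://example.org/v1.0"  # placeholder SensorThings service root

    resp = requests.get(f"{BASE}/Things", params={
        "$expand": "Datastreams",
        "$filter": "substringof('weather', tolower(name))",
    })
    for thing in resp.json().get("value", []):
        print(thing["name"], [d["name"] for d in thing.get("Datastreams", [])])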
DHLAS: A web-based information system for statistical genetic analysis of HLA population data.
Thriskos, P; Zintzaras, E; Germenis, A
2007-03-01
DHLAS (database HLA system) is a user-friendly, web-based information system for the analysis of human leukocyte antigen (HLA) data from population studies. DHLAS has been developed using JAVA and the R system; it runs on a Java Virtual Machine and its user interface is web-based, powered by the servlet engine TOMCAT. It utilizes STRUTS, a Model-View-Controller framework, and uses several GNU packages to perform several of its tasks. The database engine it relies upon for fast access is MySQL, but others can be used as well. The system estimates metrics, performs statistical testing and produces graphs required for HLA population studies: (i) Hardy-Weinberg equilibrium (calculated using both asymptotic and exact tests), (ii) genetic distances (Euclidean or Nei), (iii) phylogenetic trees using the unweighted pair group method with averages and the neighbor-joining method, (iv) linkage disequilibrium (pairwise and overall, including variance estimations), (v) haplotype frequencies (estimated using the expectation-maximization algorithm) and (vi) discriminant analysis. The main merit of DHLAS is the incorporation of a database, thus the data can be stored and manipulated along with integrated genetic data analysis procedures. In addition, it has an open architecture allowing the inclusion of other functions and procedures.
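As a worked example of one DHLAS computation, the following sketch performs the asymptotic (chi-square) Hardy-Weinberg equilibrium test for a single biallelic locus with made-up genotype counts; DHLAS itself is implemented in Java and R and also offers exact tests.

    # Asymptotic Hardy-Weinberg equilibrium test for genotypes AA/Aa/aa.
    from scipy.stats import chi2

    obs = {"AA": 50, "Aa": 30, "aa": 20}
    n = sum(obs.values())
    p = (2 * obs["AA"] + obs["Aa"]) / (2 * n)   # frequency of allele A
    q = 1 - p

    expected = {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}
    stat = sum((obs[g] - expected[g]) ** 2 / expected[g] for g in obs)
    pval = chi2.sf(stat, df=1)  # 3 classes - 1 - 1 estimated allele frequency
    print(f"chi2 = {stat:.2f}, p = {pval:.4f}")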
NASA Astrophysics Data System (ADS)
Fazliev, A.
2009-04-01
The information and knowledge layers of an information-computational system for water spectroscopy are described. Semantic metadata for all the tasks of the domain information model that form the basis of these layers have been studied. The principles of semantic metadata determination and the mechanisms of their use for information systematization in molecular spectroscopy are revealed, and the software developed for working with semantic metadata is described. Formation of the domain model in the framework of the Semantic Web is based on the use of an explicit specification of its conceptualization or, in other words, its ontologies. The formation of a conceptualization for molecular spectroscopy was described in Refs. 1 and 2. In these works two chains of tasks, the direct task chain and the inverse task chain, are selected as a zeroth approximation for the knowledge domain description. The solution schemes of these tasks define the approximation of the data layer for the knowledge domain conceptualization. The properties of spectroscopy task solutions lead to a step-by-step extension of the molecular spectroscopy conceptualization; the information layer of the information system corresponds to this extension. An advantage of a molecular spectroscopy model designed in the form of task chains is that one can explicitly define data and metadata at each step of the solution of these chained tasks. The metadata structure (task solution properties) in the knowledge domain also has the form of a chain, in which the input data and metadata of the previous task become metadata of the following tasks. The term metadata is used here in its narrow sense: metadata are the properties of spectroscopy task solutions. Semantic metadata, represented with the help of OWL [3], are formed automatically and are individuals of classes (the A-box). The union of a T-box and an A-box is an ontology that can be processed with the help of an inference engine. In this work we analyze the formation of individuals of molecular spectroscopy applied ontologies, as well as the software used for their creation by means of the OWL DL language. The results of this work are presented in the form of an information layer and a knowledge layer in the W@DIS information system [4].
1 FORMATION OF INDIVIDUALS OF THE WATER SPECTROSCOPY APPLIED ONTOLOGY
The applied task ontology contains an explicit description of the input and output data of the physical tasks solved in the two chains of molecular spectroscopy tasks. Besides the physical concepts related to spectroscopy task solutions, an information source, which is a key concept of the knowledge domain information model, is also used. Each solution of a knowledge domain task is linked to an information source, which contains a reference to the published task solution, the molecule, and the task solution properties. Each information source allows us to identify a certain knowledge domain task solution contained in the information system. The classes of the water spectroscopy applied ontology are formed on the basis of the molecular spectroscopy concept taxonomy; they are defined by constraints on the properties of the selected conceptualization. Extension of the applied ontology in the W@DIS information system follows two scenarios. Formation of individuals (ontology facts, or axioms) takes place when a task solution is uploaded into the information system. Operations on the ontology that involve the molecular spectroscopy taxonomy and individuals are performed solely by the user; for this purpose the Protege ontology editor is used.
Software was designed and implemented for the formation, processing and visualization of the individuals of knowledge domain tasks. The method of individual formation determines the sequence of steps for generating the individuals of the created ontology. Task solution properties (metadata) have qualitative and quantitative values. Qualitative metadata describe the qualitative side of a task, such as the solution method or other information that can be explicitly specified by the object properties of the OWL DL language. Quantitative metadata describe the quantitative properties of a task solution, such as minimal and maximal data values or other information that can be obtained by programmed algorithmic operations; these metadata correspond to the DatatypeProperty properties of the OWL language. Quantitative metadata can be obtained automatically during data upload into the information system. Since ObjectProperty values are objects, processing of qualitative metadata requires logical constraints. For tasks solved within the W@DIS ICS, qualitative metadata can be formed automatically (for example, in the spectral function calculation task). The methods used to translate qualitative metadata into quantitative ones amount to a roughened representation of knowledge in the knowledge domain. The existence of two ways of obtaining data is a key point in the formation of the applied ontology of molecular spectroscopy tasks: the experimental method (metadata for experimental data contain a description of the equipment, the experiment conditions and so on) at the initial stage and inverse task solutions at the following stages; and the calculation method (metadata for calculated data are closely related to the metadata used to describe the physical and mathematical models of molecular spectroscopy).
2 SOFTWARE FOR ONTOLOGY OPERATION
Data collection in the water spectroscopy information system is organized as a workflow that contains such operations as information source creation, entry of bibliographic data on publications, formation of the uploaded data schema and so on. Metadata are generated in the information source as well. Two methods are used for their formation: automatic metadata generation and manual metadata generation (performed by the user). Software support for the actions related to metadata formation is provided by the META+ module, whose functions can be divided into two groups: those needed by the software developer and those needed by a user of the information system. The META+ functions needed by the developer are: 1. creation of the taxonomy (T-boxes) of the applied ontology classes of knowledge domain tasks; 2. creation of instances of task classes; 3. creation of task data schemes in the form of XML patterns based on XML syntax (an XML pattern is developed for the instance generator and created according to the rules imposed by the software generator implementation); 4. implementation of metadata value calculation algorithms; 5. creation of a request interface and additional knowledge processing functions for the solution of these tasks; 6. unification of the created functions and interfaces into one information system. This sequence is universal for generating the individuals of task classes that form chains. Special interfaces for managing user operations are provided for the software developer in the META+ module.
There are means for updating qualitative metadata values when data are re-uploaded to an information source. The list of functions needed by the end user contains: - visualization and editing of data sets, taking into account their metadata, e.g. display of the unique number of bands in transitions for a certain data source; - export of OWL/RDF models from the information system to the environment in XML syntax; - visualization of instances of the classes of the applied ontology of molecular spectroscopy tasks; - import of OWL/RDF models into the information system and their integration with the domain vocabulary; - formation of additional knowledge of the knowledge domain for the construction of ontological instances of task classes using HTML formats and their processing; - formation of additional knowledge in the knowledge domain for the construction of instances of task classes using software algorithms for data set processing; - implementation of semantic search using an interface that formulates questions in the form of related triplets in order to obtain an adequate answer.
3 STRUCTURE OF THE META+ MODULE
The META+ software module that provides the above functions contains the following components: - a knowledge base that stores the semantic metadata and taxonomies of the information system; - the software libraries POWL and RAP [5], created by third-party developers, which provide access to the ontological storage; - function classes and libraries that form the core of the module and perform the tasks of formation, storage and visualization of class instances; - configuration files and module patterns that allow one to adjust and organize the operation of the different functional blocks. META+ also contains scripts and patterns implemented according to the rules of the W@DIS development environment: - scripts for interaction with the environment by means of the software core of the information system, providing web-oriented interactive communication; - patterns for visualizing the functionality realized by the scripts. The software core of the scientific information-computational system W@DIS is built with the MVC (Model-View-Controller) design pattern, which allows us to separate the logic of the application from its representation. It realizes the interaction of three logical components, providing interactivity with the environment via the Web and performing preprocessing. The functions of the "Controller" component are realized by scripts designed according to the rules imposed by the software core of the information system; each script is a distinct object-oriented class with an obligatory initiation method called "start". The functions that present the results of domain application operation (the "View" component) are sets of HTML patterns that visualize those results with the help of additional constructions processed by the software core of the system. Besides interacting with the software core of the scientific information system, the module also works with the configuration files of the software core and its database. Such an organization provides closer integration with the software core and a deeper, more adequate connection in operating system support.
4 CONCLUSION
In this work the problems of semantic metadata creation in an information system oriented toward information representation in molecular spectroscopy have been discussed. The method of forming semantic metadata and functions, as well as the realization and structure of the META+ module, have been described. The architecture of the META+ module is closely related to the existing software of the "Molecular spectroscopy" scientific information system. The module is realized using modern approaches to the development of web-oriented applications and uses the existing applied interfaces. The developed software allows us to: - perform automatic metadata annotation of calculated task solutions directly in the information system; - perform automatic metadata annotation of task solutions computed outside the information system, forming an instance of the solved task on the basis of the entry data; - use ontological instances of task solutions for the identification of data in the viewing, comparison and search tasks solved by the information system; - export applied task ontologies for operation on them by external means; - solve the task of semantic search according to a pattern and using a question-answer interface.
5 ACKNOWLEDGEMENT
The authors are grateful to RFBR for the financial support of the development of the distributed information system for molecular spectroscopy.
REFERENCES
1. A.D. Bykov, A.Z. Fazliev, N.N. Filippov, A.V. Kozodoev, A.I. Privezentsev, L.N. Sinitsa, M.V. Tonkov and M.Yu. Tretyakov, Distributed information system on atmospheric spectroscopy, Geophysical Research Abstracts, SRef-ID: 1607-7962/gra/EGU2007-A-01906, 2007, v. 9, p. 01906.
2. A.I. Privezentsev, A.Z. Fazliev, Applied task ontology for molecular spectroscopy information resources systematization, Proceedings of the 9th Russian scientific conference "Electronic libraries: advanced methods and technologies, electronic collections" (RCDL'2007), Pereslavl-Zalesskii, 2007, part 1, pp. 201-210.
3. OWL Web Ontology Language Semantics and Abstract Syntax, W3C Recommendation, 10 February 2004, http://www.w3.org/TR/2004/REC-owl-semantics-20040210/
4. W@DIS information system, http://wadis.saga.iao.ru
5. RAP library, http://www4.wiwiss.fu-berlin.de/bizer/rdfapi/
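To make the A-box generation step concrete, here is a minimal rdflib sketch that creates one individual of a task-solution class with one qualitative (ObjectProperty) and one quantitative (DatatypeProperty) metadata value. The namespace, class, and property names are invented placeholders, not W@DIS's actual vocabulary.

    # Creating an applied-ontology individual (A-box) for a task solution.
    from rdflib import Graph, Literal, Namespace, RDF

    SPEC = Namespace("http://example.org/spectroscopy#")
    g = Graph()
    g.bind("spec", SPEC)

    solution = SPEC["solution_001"]
    g.add((solution, RDF.type, SPEC.DirectTaskSolution))
    g.add((solution, SPEC.solutionMethod, SPEC.VariationalCalculation))  # qualitative
    g.add((solution, SPEC.maxWavenumber, Literal(4200.0)))               # quantitative
    g.add((solution, SPEC.molecule, Literal("H2O")))

    print(g.serialize(format="turtle"))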
ERIC Educational Resources Information Center
Ku, David Tawei; Chang, Chia-Chi
2014-01-01
By conducting usability testing on a multilanguage Web site, this study analyzed the cultural differences between Taiwanese and American users in the performance of assigned tasks. To provide feasible insight into cross-cultural Web site design, Microsoft Office Online (MOO) that supports both traditional Chinese and English and contains an almost…
WebChem Viewer: a tool for the easy dissemination of chemical and structural data sets
2014-01-01
Background: Sharing sets of chemical data (e.g., chemical properties, docking scores, etc.) among collaborators with diverse skill sets is a common task in computer-aided drug design and medicinal chemistry. The ability to associate this data with images of the relevant molecular structures greatly facilitates scientific communication. There is a need for a simple, free, open-source program that can automatically export aggregated reports of entire chemical data sets to files viewable on any computer, regardless of the operating system and without requiring the installation of additional software. Results: We here present a program called WebChem Viewer that automatically generates these types of highly portable reports. Furthermore, in designing WebChem Viewer we have also created a useful online web application for remotely generating molecular structures from SMILES strings. We encourage the direct use of this online application as well as its incorporation into other software packages. Conclusions: With these features, WebChem Viewer enables interdisciplinary collaborations that require the sharing and visualization of small molecule structures and associated sets of heterogeneous chemical data. The program is released under the FreeBSD license and can be downloaded from http://nbcr.ucsd.edu/WebChemViewer. The associated web application (called "Smiley2png 1.0") can be accessed through freely available web services provided by the National Biomedical Computation Resource at http://nbcr.ucsd.edu. PMID:24886360
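Smiley2png renders SMILES strings to images server-side; a rough local equivalent can be sketched with RDKit (an illustration, not the service's actual code).

    # Render a SMILES string to a PNG file locally with RDKit.
    from rdkit import Chem
    from rdkit.Chem import Draw

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
    Draw.MolToFile(mol, "aspirin.png", size=(300, 300))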
Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.
2008-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command and control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be integrated such that each element is capable of autonomously tasking the other. Sensor-web data acquisition and dissemination will be accomplished through the use of the Open Geospatial Consortium Sensorweb Enablement protocols. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform. ©2008 IEEE.
Turning a remotely controllable observatory into a fully autonomous system
NASA Astrophysics Data System (ADS)
Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael
2014-08-01
We describe the complex process needed to turn an existing, old, operational observatory - the Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system that observes without an observer. For this purpose, we employed RTS2 [1], an open-source, Linux-based observatory control system, together with other open-source programs and tools (GNU compilers, the Python language for scripting, JQuery UI for the Web user interface). This presentation provides a guide, with time estimates, for newcomers to the field handling such challenging tasks as fully autonomous observatory operations.
Planetary exploration with nanosatellites: a space campus for future technology development
NASA Astrophysics Data System (ADS)
Drossart, P.; Mosser, B.; Segret, B.
2017-09-01
Planetary exploration is on the eve of a revolution through nanosatellites accompanying larger missions or freely cruising the Solar System, providing a man-made cosmic web for in situ or remote sensing exploration. A first step is to build a specific place dedicated to nanosatellite development. The CCERES PSL space campus presents such an environment: nanosatellite testing and integration facilities, a concurrent engineering facility room for project analysis, and a science environment dedicated to this task.
ERIC Educational Resources Information Center
Moffett, David W.; Claxton, Melba S.; Jordan, Skye L.; Mercer, Patricia P.; Reid, Barbara K.
2007-01-01
The case study describes the early stages of building and using a learning management system (LMS) to aid in the productivity of an education faculty unit. Little to no research exists regarding teacher education units using LMSs to create an online web group for work purposes. The literature review preceding the case study illuminated some of the…
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within a bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore a projectile motion model is employed to predict the final destination of the ball.
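The two detection steps named above, background subtraction and colour-based thresholding, can be sketched with OpenCV as follows; the HSV bounds for an orange ball are assumed values that would need calibration against the actual camera and lighting.

    # Ball detection sketch: moving pixels AND ball-coloured pixels.
    import cv2

    cap = cv2.VideoCapture(0)                  # web camera
    bg = cv2.createBackgroundSubtractorMOG2()

    for _ in range(300):                       # process a bounded number of frames
        ok, frame = cap.read()
        if not ok:
            break
        motion = bg.apply(frame)               # background subtraction (BS)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        colour = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))  # threshold (CBTS)
        ball = cv2.bitwise_and(motion, colour)
        m = cv2.moments(ball)
        if m["m00"] > 0:                       # centroid of candidate ball pixels
            print(f"ball at ({m['m10'] / m['m00']:.0f}, {m['m01'] / m['m00']:.0f})")
    cap.release()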
Quality and Business Offer Driven Selection of Web Services for Compositions
NASA Astrophysics Data System (ADS)
D'Mello, Demian Antony; Ananthanarayana, V. S.
Service composition makes use of existing services to produce a new value-added service that executes a complex business process. Service discovery finds suitable services (candidates) for the various tasks of the composition based on functionality. Service selection in composition assigns the best candidate to each task of the pre-structured composition plan based on non-functional properties. In this paper, we propose a broker-based architecture for QoS- and business-offer-aware Web service compositions. The broker architecture facilitates the registration of a new composite service into three different registries. The broker publishes service information into the service registry and QoS into the QoS registry. The business offers of the composite Web service are published into a separate repository called the business offer (BO) registry. The broker employs a mechanism for the optimal assignment of Web services to the individual tasks of the composition. The assignment is based on the composite service provider's (CSP) variety of requirements defined on QoS and business offers. The broker also computes the QoS of the resulting composition and provides useful information for the CSP to publish their business offers.
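The per-task assignment can be illustrated by scoring each candidate service with a weighted sum of normalized QoS attributes plus a business-offer bonus and picking the best; the attributes, weights, and services below are invented for illustration and are not the broker's actual model.

    # Pick the best candidate service for each composition task.
    candidates = {
        "payment": [
            {"name": "PayA", "latency_ms": 120, "availability": 0.999, "offer": 0.10},
            {"name": "PayB", "latency_ms": 80,  "availability": 0.995, "offer": 0.00},
        ],
    }

    def score(svc, w_lat=0.4, w_avail=0.5, w_offer=0.1):
        latency = 1.0 - min(svc["latency_ms"] / 1000.0, 1.0)  # lower is better
        return w_lat * latency + w_avail * svc["availability"] + w_offer * svc["offer"]

    for task, services in candidates.items():
        best = max(services, key=score)
        print(task, "->", best["name"])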
Supporting the Application of Design Patterns in Web-Course Design.
ERIC Educational Resources Information Center
Frizell, Sherri S.; Hubscher, Roland
Many instructors are expected to design and create Web courses. The design of Web courses can be a difficult task for educators who lack experience in interaction and instructional design. Design patterns have emerged as a way to capture design experience and present design solutions to novice designers. Design patterns are a widely accepted…
Teaching E-Commerce Web Page Evaluation and Design: A Pilot Study Using Tourism Destination Sites
ERIC Educational Resources Information Center
Susser, Bernard; Ariga, Taeko
2006-01-01
This study explores a teaching method for improving business students' skills in e-commerce page evaluation and making Web design majors aware of business content issues through cooperative learning. Two groups of female students at a Japanese university studying either tourism or Web page design were assigned tasks that required cooperation to…
Learning to Design WebQuests: An Exploration in Preservice Social Studies Education
ERIC Educational Resources Information Center
Bates, Alisa
2008-01-01
Effective use of technology in social studies methods courses is an under-researched field. This study focused on the development of WebQuests to engage teacher candidates' exploration of the Internet as an authentic medium for inquiry in social studies education. Analysis of appropriateness of tasks in the WebQuests, depth of ideas and audience…
Conducting Web-Based Surveys. ERIC Digest.
ERIC Educational Resources Information Center
Solomon, David J.
Web-based surveying is very attractive for many reasons, including reducing the time and cost of conducting a survey and avoiding the often error prone and tedious task of data entry. At this time, Web-based surveys should still be used with caution. The biggest concern at present is coverage bias or bias resulting from sampled people either not…
ERIC Educational Resources Information Center
Rouet, Jean-Francois; Ros, Christine; Goumi, Antonine; Macedo-Rouet, Monica; Dinet, Jerome
2011-01-01
Two experiments investigated primary and secondary school students' Web menu selection strategies using simulated Web search tasks. It was hypothesized that students' selections of websites depend on their perception and integration of multiple relevance cues. More specifically, students should be able to disentangle superficial cues (e.g.,…
Clinical Predictive Modeling Development and Deployment through FHIR Web Services
Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng
2015-01-01
Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207
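From a client's perspective, the deployment side described above (scoring an individual patient through a web service) might look roughly like this; the endpoint URL and resource shape are placeholders, not the paper's actual interface.

    # Send a FHIR-style Patient resource to a hypothetical scoring endpoint.
    import requests

    patient = {
        "resourceType": "Patient",
        "id": "example-123",
        "gender": "female",
        "birthDate": "1952-04-01",
    }
    resp = requests.post("https://example.org/fhir/$predict",  # placeholder URL
                         json=patient,
                         headers={"Content-Type": "application/fhir+json"})
    print(resp.status_code, resp.json())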
Analyzing Web pages visual scanpaths: between and within tasks variability.
Drusch, Gautier; Bastien, J M Christian
2012-01-01
In this paper, we propose a new method for comparing scanpaths in a bottom-up approach, and a test of the scanpath theory. To do so, we conducted a laboratory experiment in which 113 participants were invited to accomplish a set of tasks on two different websites. For each site, they had to perform two tasks, each of which had to be repeated once. The data were analyzed using a procedure similar to the one used by Duchowski et al. [8]. The first step was to automatically identify, then label, AOIs with the mean-shift clustering procedure [19]. Then, scanpaths were compared two by two with a modified version of the string-edit method, which takes into account the order of AOI visualizations [2]. Our results show that scanpath variability between tasks but within participants seems to be lower than the variability within a task for a given participant. In other words, participants seem to be more coherent when they perform different tasks than when they repeat the same task. In addition, participants viewed more of the same AOIs when they performed a different task on the same Web page than when they repeated the same task. These results are quite different from what the scanpath theory predicts.
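The comparison relies on a string-edit (Levenshtein) distance between sequences of AOI labels; a plain implementation of the unmodified distance is shown below (the study used a modified version that weights the order of AOI visits).

    # Levenshtein distance between two AOI-label sequences.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    print(edit_distance("ABCDE", "ABDCE"))  # two scanpaths differing in one swap -> 2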
Ariza, Ferran; Kalra, Dipak; Potts, Henry Ww
2015-11-20
Clinical information systems in the National Health Service do not need to conform to any explicit usability requirements. Poor usability can increase the mental workload experienced by clinicians and cause fatigue, increase error rates and impact overall patient safety. Mental workload can be used as a measure of usability. Our aims were to assess the subjective cognitive workload experienced by general practitioners (GPs) with their systems, and to raise awareness of the importance of usability in system design among users, designers, developers and policymakers. We used a modified version of the NASA Task Load Index, adapted for the web. We developed a set of common clinical scenarios and computer tasks in an online survey. We emailed the study link to 199 clinical commissioning groups and 1,646 GP practices in England. Sixty-seven responders completed the survey. The respondents had spent an average of 17 years in general practice, had experience of using a mean of 1.5 GP computer systems and had used their current system for a mean time of 6.7 years. The mental workload score did not differ among systems. There were significant differences among the task scores, but these differences were not specific to particular systems. The overall score and task scores were related to the length of experience with the present system. Four tasks imposed a higher mental workload on GPs: 'repeat prescribing', 'find episode', 'drug management' and 'overview records'. Further usability studies on GP systems should focus on these tasks. Users, policymakers, designers and developers should remain aware of the importance of usability in system design.
What does this study add?
• Current GP systems in England do not need to conform to explicit usability requirements. Poor usability can increase the mental workload of clinicians and lead to errors.
• Some clinical computer tasks incur more cognitive workload than others and should be considered carefully during the design of a system.
• GPs did not report very high overall levels of subjective cognitive workload when undertaking common clinical tasks with their systems.
• Further usability studies on GP systems should focus on the tasks incurring higher cognitive workload.
• Users, policymakers, designers and developers should remain aware of the importance of usability in system design.
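For reference, a raw (unweighted) NASA-TLX overall workload score is the mean of the six subscale ratings on a 0-100 scale; the study used a modified, web-adapted version, so the sketch below shows only the generic formula with made-up ratings.

    # Raw NASA-TLX overall score from six subscale ratings (0-100).
    ratings = {
        "mental": 70, "physical": 10, "temporal": 55,
        "performance": 40, "effort": 60, "frustration": 35,
    }
    overall = sum(ratings.values()) / len(ratings)
    print(f"raw TLX = {overall:.1f}")  # -> 45.0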
Starodubtsev, V I; Kuznetsov, S L; Kurakova, N G; Tsvetkova, L A
2012-01-01
The contribution of Russian Academy of Medical Sciences (RAMS) scientific publications to the national publication stream indexed by Web of Science over the past thirty years was estimated. The indicators of publication activity that the institutions of RAMS need to achieve in the short term to conform to the bibliometric targets established by the Presidential Decree of May 7, 2012 (increasing the share of Russian publications in Web of Science to 2.44% by 2015) were calculated. It is shown that the current structure of global science, in which publications in medicine make up approximately one third of scientific publications in the world, sets a particularly difficult task for RAMS scientists: to double the number of publications in Web of Science within three years. The article proposes the priorities and the steps necessary to fulfill this task.
iDEAS: A web-based system for dry eye assessment.
Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G
2016-07-01
Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. The evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test used for dry eye diagnosis. However, it is a time-consuming task with a high degree of intra- as well as inter-observer variability, which makes the use of a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which classifies an image into one interference pattern; and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions to test its performance. Both services have been evaluated in terms of a set of performance metrics using the annotations of different experts. Note that the processing time of both services has also been measured for efficiency purposes. iDEAS is a web-based application which provides a fast, reliable environment for dry eye assessment. The system allows practitioners to share images, clinical information and automatic assessments between remote computers. Additionally, it saves time for experts, diminishes inter-expert variability and can be used in both clinical and research settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Segerståhl, Katarina; Oinas-Kukkonen, Harri
2011-12-01
Various personal monitoring technologies have been introduced for supporting regular physical activity, which is of critical importance in reducing the risks of several chronic diseases. Recent studies suggest that combining multiple modes of delivery, such as text messages and mobile monitoring devices with web applications, holds potential for effectively supporting physical exercise. Of particular interest is how the functionality and content of these systems should be distributed across the different modes for successful outcomes. The aim of this study was to: (a) investigate how users incorporate a system employing two modes of delivery - a wearable heart rate monitor and a web service - into their training and (b) to analyze benefits and limitations in personal exercise monitoring and how they relate to the different modes in use. A qualitative field study employing diaries and semi-structured interviews was carried out with 30 participants who used a heart rate monitoring system comprising a wearable heart rate monitor, Polar FT60 and a web service, Polar Personal Trainer for a period of 21 days. The data were systematically analyzed to identify specific benefits and limitations associated with the system characteristics and modes as perceived by the end-users. The benefits include supporting exploratory learning, controlling target behavior, rectifying behaviors, motivation and logging support. The limitations are associated with information for validating the system, virtual coaching, task-technology fit, data integrity and privacy concerns. Mobile interfaces enable exploratory learning and controlling of target behaviors in situ, while web services can effectively support users' need for cognition within the early stages of adoption and long-term training with intelligent coaching functionality. This study explains several benefits and limitations in personal exercise monitoring. These can be addressed with crossmedial design, i.e., strategic distribution of functionality and content across modes within the system. Our findings suggest that personal exercise monitoring systems may be improved by more systematically combining mobile and web-based functionality. 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gordova, Yulia; Gorbatenko, Valentina; Martynova, Yulia; Shulgina, Tamara
2014-05-01
Making education relevant to workplace tasks is a key problem of higher education, because old-school training programs are not keeping pace with the rapidly changing situation in the professional field of environmental sciences. A joint group of specialists from Tomsk State University and the Siberian Center for Environmental Research and Training/IMCES SB RAS developed several new courses for students of the "Climatology" and "Meteorology" specialties, which combine theoretical knowledge from up-to-date environmental sciences with practical tasks. To organize the educational process we use the open-source course management system Moodle (www.moodle.org), which gives us the opportunity to combine text and multimedia in the theoretical part of the educational courses. The hands-on approach is realized through innovative trainings performed within the information-computational platform "Climate" (http://climate.scert.ru/) using web GIS tools. These trainings contain practical tasks on climate modeling and climate change assessment and analysis, and are performed using the typical tools that scientists use for this kind of research. Thus, students are engaged in the use of modern tools of geophysical data analysis, which invigorates their professional learning. The hands-on approach helps to fill the gap between education and practice because it offers direct experience, increases student involvement, and advances the use of modern information and communication tools. The courses are implemented at Tomsk State University and help form a modern curriculum in the Earth system science area. This work is partially supported by SB RAS project VIII.80.2.1 and RFBR grants 13-05-12034 and 14-05-00502.
Vandervalk, Ben; McCarthy, E Luke; Cruz-Toledo, José; Klein, Artjom; Baker, Christopher J O; Dumontier, Michel; Wilkinson, Mark D
2013-04-05
The Web provides widespread access to vast quantities of health-related information that can improve quality-of-life through better understanding of personal symptoms, medical conditions, and available treatments. Unfortunately, identifying a credible and personally relevant subset of information can be a time-consuming and challenging task for users without a medical background. The objective of the Personal Health Lens system is to aid users when reading health-related webpages by providing warnings about personally relevant drug interactions. More broadly, we wish to present a prototype for a novel, generalizable approach to facilitating interactions between a patient, their practitioner(s), and the Web. We utilized a distributed, Semantic Web-based architecture for recognizing personally dangerous drugs consisting of: (1) a private, local triple store of personal health information, (2) Semantic Web services, following the Semantic Automated Discovery and Integration (SADI) design pattern, for text mining and identifying substance interactions, (3) a bookmarklet to trigger analysis of a webpage and annotate it with personalized warnings, and (4) a semantic query that acts as an abstract template of the analytical workflow to be enacted by the system. A prototype implementation of the system is provided in the form of a Java standalone executable JAR file. The JAR file bundles all components of the system: the personal health database, locally-running versions of the SADI services, and a javascript bookmarklet that triggers analysis of a webpage. In addition, the demonstration includes a hypothetical personal health profile, allowing the system to be used immediately without configuration. Usage instructions are provided. The main strength of the Personal Health Lens system is its ability to organize medical information and to present it to the user in a personalized and contextually relevant manner. While this prototype was limited to a single knowledge domain (drug/drug interactions), the proposed architecture is generalizable, and could act as the foundation for much richer personalized-health-Web clients, while importantly providing a novel and personalizable mechanism for clinical experts to inject their expertise into the browsing experience of their patients in the form of customized semantic queries and ontologies.
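The SADI-based workflow above is described only at the architecture level; as a rough illustration of the "local triple store plus semantic query" idea (our sketch, not the authors' code; the namespace and drug names are hypothetical), the following flags drugs found on a page that interact with a patient's medications, using rdflib:

```python
# Minimal sketch of the Personal Health Lens idea: a local triple store of
# personal medications is queried against drug mentions found on a webpage.
# The ontology IRIs and the upstream text-mining step are placeholders.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/health#")  # hypothetical ontology

g = Graph()
g.add((EX.me, RDF.type, EX.Patient))
g.add((EX.me, EX.takes, EX.warfarin))
g.add((EX.warfarin, EX.interactsWith, EX.aspirin))

def personally_relevant_interactions(drugs_on_page):
    """Return drugs mentioned on the page that interact with the patient's meds."""
    query = """
        SELECT ?drug WHERE {
            ?patient ex:takes ?med .
            ?med ex:interactsWith ?drug .
        }"""
    rows = g.query(query, initNs={"ex": EX})
    interacting = {str(row.drug).rsplit("#", 1)[-1] for row in rows}
    return interacting & set(drugs_on_page)

print(personally_relevant_interactions({"aspirin", "ibuprofen"}))  # {'aspirin'}
```

In the real system this check would run behind the bookmarklet, with the interaction facts supplied by SADI services rather than hard-coded triples.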
Vandervalk, Ben; McCarthy, E Luke; Cruz-Toledo, José; Klein, Artjom; Baker, Christopher J O; Dumontier, Michel
2013-01-01
Background The Web provides widespread access to vast quantities of health-related information that can improve quality-of-life through better understanding of personal symptoms, medical conditions, and available treatments. Unfortunately, identifying a credible and personally relevant subset of information can be a time-consuming and challenging task for users without a medical background. Objective The objective of the Personal Health Lens system is to aid users when reading health-related webpages by providing warnings about personally relevant drug interactions. More broadly, we wish to present a prototype for a novel, generalizable approach to facilitating interactions between a patient, their practitioner(s), and the Web. Methods We utilized a distributed, Semantic Web-based architecture for recognizing personally dangerous drugs consisting of: (1) a private, local triple store of personal health information, (2) Semantic Web services, following the Semantic Automated Discovery and Integration (SADI) design pattern, for text mining and identifying substance interactions, (3) a bookmarklet to trigger analysis of a webpage and annotate it with personalized warnings, and (4) a semantic query that acts as an abstract template of the analytical workflow to be enacted by the system. Results A prototype implementation of the system is provided in the form of a Java standalone executable JAR file. The JAR file bundles all components of the system: the personal health database, locally-running versions of the SADI services, and a javascript bookmarklet that triggers analysis of a webpage. In addition, the demonstration includes a hypothetical personal health profile, allowing the system to be used immediately without configuration. Usage instructions are provided. Conclusions The main strength of the Personal Health Lens system is its ability to organize medical information and to present it to the user in a personalized and contextually relevant manner. While this prototype was limited to a single knowledge domain (drug/drug interactions), the proposed architecture is generalizable, and could act as the foundation for much richer personalized-health-Web clients, while importantly providing a novel and personalizable mechanism for clinical experts to inject their expertise into the browsing experience of their patients in the form of customized semantic queries and ontologies. PMID:23612187
Development of a Web-Based Visualization Platform for Climate Research Using Google Earth
NASA Technical Reports Server (NTRS)
Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue
2011-01-01
Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open-source data-sharing technologies, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using the embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.
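As a hedged illustration of the pattern such a platform builds on (the WMS endpoint and layer name below are invented placeholders, not the paper's actual services), one can fetch a map image over WMS and wrap it in a KML GroundOverlay for display in Google Earth:

```python
# Sketch: retrieve a global field from a WMS server and produce a KML
# GroundOverlay that Google Earth can load. URL and layer are hypothetical.
import requests

WMS_URL = "https://example.org/wms"  # hypothetical endpoint
params = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": "surface_air_temperature", "styles": "",
    "srs": "EPSG:4326", "bbox": "-180,-90,180,90",
    "width": "1024", "height": "512", "format": "image/png",
}
with open("overlay.png", "wb") as f:
    f.write(requests.get(WMS_URL, params=params, timeout=60).content)

kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Surface air temperature</name>
    <Icon><href>overlay.png</href></Icon>
    <LatLonBox><north>90</north><south>-90</south>
               <east>180</east><west>-180</west></LatLonBox>
  </GroundOverlay>
</kml>"""
with open("overlay.kml", "w") as f:
    f.write(kml)
```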
The Sargassum Early Advisory System (SEAS)
NASA Astrophysics Data System (ADS)
Armstrong, D.; Gallegos, S. C.
2016-02-01
The Sargassum Early Advisory System (SEAS) web app was designed to automatically detect Sargassum at sea, forecast movement of the seaweed, and alert users of potential landings. Inspired to help address the economic hardships caused by large landings of Sargassum, the web app automates and enhances the manual tasks conducted by the SEAS group of Texas A&M University at Galveston. The SEAS web app is a modular, mobile-friendly tool that automates the entire workflow from data acquisition to user management. The modules include: 1) an Imagery Retrieval Module to automatically download Landsat-8 Operational Land Imagery (OLI) from the United States Geological Survey (USGS); 2) a Processing Module for automatic detection of Sargassum in the OLI imagery and subsequent mapping of these patches in the HYCOM grid, producing maps that show Sargassum clusters; 3) a Forecasting Engine fed by HYbrid Coordinate Ocean Model (HYCOM) currents and winds from weather buoys; and 4) a mobile-phone-optimized geospatial user interface. The user can view the last known position of Sargassum clusters, along with trajectory and location projections for the next 24, 72 and 168 hours. Users can also subscribe to alerts generated for particular areas. Currently, the SEAS web app produces advisories for Texas beaches. The forecasted Sargassum landing locations are validated by reports from Texas beach managers. However, the SEAS web app was designed to easily expand to other areas, and future plans call for extending it to Mexico and the Caribbean islands. The SEAS web app development is led by NASA, with participation by ASRC Federal/Computer Science Corporation and the Naval Research Laboratory, all at Stennis Space Center, and Texas A&M University at Galveston.
2005-06-01
• need for a user-defined dashboard
• automated monitoring of web data sources
• task-driven data aggregation and display
Working toward automated processing of task, resource, and intelligence updates.
Learning from Student Experiences for Online Assessment Tasks
ERIC Educational Resources Information Center
Qayyum, M. Asim; Smith, David
2015-01-01
Introduction: Use of the Internet for open Web searches is common among university students in academic learning tasks. The tools used by students to find relevant information for online assessment tasks were investigated and their information seeking behaviour was documented to explore the impact on assessment design. Method: A mixed methods…
Web Survey Design in ASP.Net 2.0: A Simple Task with One Line of Code
ERIC Educational Resources Information Center
Liu, Chang
2007-01-01
Over the past few years, more and more companies have been investing in electronic commerce (EC) by designing and implementing Web-based applications. In the world of practice, the importance of using Web technology to reach individual customers has been presented by many researchers. This paper presents an easy way of conducting marketing…
Outdoor Programs On-Line: Creating a Link with Participants, Staff and Community.
ERIC Educational Resources Information Center
Poff, Raymond
As use of the Internet and the World Wide Web increases, patrons expect that organizations will utilize the technology, and outdoor programs can benefit from doing so. Web sites can be thought of as miniature information booths containing information an agency wants to make available to the public. Outdoor programs can use the Web for many tasks,…
ERIC Educational Resources Information Center
Bilal, Dania
2002-01-01
Reports findings of a three-part research project that examined the information seeking behavior and success of 22 seventh-grade science students in using the Web. Discusses problems encountered, including inadequate knowledge of how to use the search engine and poor level of research skills; and considers implications for Web training and system…
Biological data integration: wrapping data and tools.
Lacroix, Zoé
2002-06-01
Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component, based on an intermediate object view mechanism called search views that maps the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, which respectively perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform Object Protocol Model (OPM) interfaces.
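To make the two wrapper tasks concrete, here is a minimal sketch of our own (not the authors' implementation): a retrieval stub standing in for the search-view component, and a small XML engine that builds the virtual structure:

```python
# Sketch of the two wrapper tasks described above: (1) query the source,
# (2) emit the expected XML structure. The "flat file" and record schema
# are illustrative placeholders.
import xml.etree.ElementTree as ET

def retrieve(source, query):
    """Task 1: query the source; here a stub over an in-memory 'flat file'."""
    return [rec for rec in source if query.lower() in rec["name"].lower()]

def to_xml(records):
    """Task 2: build the virtual structure expected by the integrated view."""
    root = ET.Element("sequences")
    for rec in records:
        seq = ET.SubElement(root, "sequence", id=rec["id"])
        ET.SubElement(seq, "name").text = rec["name"]
    return ET.tostring(root, encoding="unicode")

flat_file = [{"id": "P01308", "name": "Insulin"},
             {"id": "P68871", "name": "Hemoglobin subunit beta"}]
print(to_xml(retrieve(flat_file, "insulin")))
```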
Project management web tools at the MICE experiment
NASA Astrophysics Data System (ADS)
Coney, L. R.; Tunnell, C. D.
2012-12-01
Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database subgroups), executive board, and operations. All of these groups use the website to communicate, track effort, develop schedules, and maintain documentation. The issue tracker is a rich tool that is used to identify tasks and monitor progress within groups on timescales ranging from immediate and unexpected problems to milestones that cover the life of the experiment. It allows the prioritization of tasks according to time-sensitivity, while providing a searchable record of work that has been done. This record of work can be used to measure both individual and overall group activity, identify areas lacking sufficient personnel or effort, and as a measure of progress against the schedule. Given that MICE, like many particle physics experiments, is an international community, such a system is required to allow easy communication within a global collaboration. Unlike systems that are purely wiki-based, the structure of a project management tool like Redmine allows information to be maintained in a more structured and logical fashion.
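As an illustration of how such effort-tracking data can be queried programmatically (Redmine exposes a documented REST API; the host, project identifier, and API key below are placeholders, not MICE's actual configuration), a script might list open issues for a group:

```python
# Sketch: pull open issues from a Redmine project via its REST API
# (GET /issues.json). Host, project, and key are hypothetical.
import requests

BASE = "https://redmine.example.org"      # placeholder for e.g. mice.rl.ac.uk
headers = {"X-Redmine-API-Key": "YOUR_API_KEY"}

resp = requests.get(f"{BASE}/issues.json",
                    params={"project_id": "computing-software",
                            "status_id": "open", "limit": 25},
                    headers=headers, timeout=30)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["id"], issue["priority"]["name"], issue["subject"])
```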
Grid Computing Application for Brain Magnetic Resonance Image Processing
NASA Astrophysics Data System (ADS)
Valdivia, F.; Crépeault, B.; Duchesne, S.
2012-02-01
This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
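The pipeline notion of processes with input and output ports can be sketched as follows; this is a schematic of ours with invented process names, not the study's actual modules:

```python
# Sketch of pipeline composition: each process maps input "ports" to output
# "ports", and a pipeline is a sequence whose outputs feed the next inputs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Process:
    name: str
    run: Callable[[dict], dict]   # input ports -> output ports

def extract_attributes(ports):
    return {**ports, "dims": (256, 256, 180)}

def linear_registration(ports):
    return {**ports, "registered": ports["image"] + " -> template space"}

def quality_control(ports):
    return {**ports, "qc_image": ports["registered"] + " (QC snapshot)"}

pipeline = [Process("attrs", extract_attributes),
            Process("register", linear_registration),
            Process("qc", quality_control)]

ports = {"image": "subject_0001_T1.mnc"}   # hypothetical input file
for proc in pipeline:
    ports = proc.run(ports)
    print(f"{proc.name}: ok")
print(ports["qc_image"])
```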
Adding Processing Functionality to the Sensor Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pross, Benjamin; Jirka, Simon; Gräler, Benedikt
2017-04-01
The Sensor Web allows discovering, accessing and tasking different kinds of environmental sensors in the Web, ranging from simple in-situ sensors to remote sensing systems. However, (geo-)processing functionality needs to be applied to integrate data from different sensor sources and to generate higher-level information products. Yet, a common standardized approach for processing sensor data in the Sensor Web is still missing, and the integration differs from application to application. Standardizing not only the provision of sensor data, but also the processing, facilitates sharing and re-use of processing modules, enables reproducibility of processing results, and provides a common way to integrate external scalable processing facilities or legacy software. In this presentation, we provide an overview of ongoing research projects that develop concepts for coupling standardized geoprocessing technologies with Sensor Web technologies. First, different architectures for coupling sensor data services with geoprocessing services are presented. Afterwards, profiles of the OGC Web Processing Service for linear regression and spatio-temporal interpolation are introduced that allow consuming sensor data coming from, and uploading predictions to, Sensor Observation Services. The profiles are implemented in processing services for the hydrological domain. Finally, we illustrate how the R software can be coupled with existing OGC Sensor Web and Geoprocessing Services and present an example of how a Web app can be built that allows exploring the results of environmental models in an interactive way using the R Shiny framework. All of the software presented is available as Open Source Software.
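As a hedged sketch of coupling the two service types (the server URL, process identifier, and input names below are hypothetical stand-ins for the profiles described above), a WPS Execute request in key-value-pair encoding might look like:

```python
# Sketch: invoke a WPS process that consumes Sensor Observation Service
# data, using WPS 1.0.0 key-value-pair encoding. All identifiers and URLs
# are invented placeholders; real profiles define their own inputs.
import requests

WPS_URL = "https://example.org/wps"   # hypothetical geoprocessing service
params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "spatio-temporal-interpolation",
    "datainputs": ("sos-url=https://example.org/sos;"
                   "observed-property=water-level;"
                   "method=kriging"),
}
resp = requests.get(WPS_URL, params=params, timeout=120)
resp.raise_for_status()
print(resp.text[:500])   # ExecuteResponse XML with result or status location
```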
Artificial intelligence in the service of system administrators
NASA Astrophysics Data System (ADS)
Haen, C.; Barra, V.; Bonaccorsi, E.; Neufeld, N.
2012-12-01
The LHCb online system relies on a large and heterogeneous IT infrastructure made from thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. The administration of such a system, and making sure it is working properly, represents a very important workload for the small expert-operator team. Research has been performed to try to automate (some) system administration tasks, starting in 2001 when IBM defined the so-called “self” objectives supposed to lead to “autonomic computing”. In this context, we present a framework that makes use of artificial intelligence and machine learning to monitor and diagnose Linux-based systems and their interaction with software, at a low level and in a non-intrusive way. Moreover, the multi-agent approach we use, coupled with an “object oriented paradigm” architecture, should greatly increase learning speed and highlight relations between problems.
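The abstract does not spell out the algorithms, but a minimal sketch of the learning-based, non-intrusive monitoring idea, using an off-the-shelf anomaly detector on host metrics (the feature names and data are illustrative, not the LHCb framework), could look like:

```python
# Sketch: train an anomaly detector on metrics from a healthy node and
# flag unusual samples. Features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: cpu load, memory use, I/O wait -- sampled on a healthy node
normal = rng.normal(loc=[0.4, 0.5, 0.05], scale=0.05, size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

samples = np.array([[0.42, 0.51, 0.06],    # looks normal
                    [0.95, 0.97, 0.60]])   # saturated node
print(detector.predict(samples))           # 1 = normal, -1 = anomaly
```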
Successfully Preparing Your CMS Web Area for OWC Review
Common issues in draft websites reviewed by EPA's Office of Web Communications that result in multiple rounds of review, such as lack of focus on key audiences' top tasks, failure to use plain and concise language, and unclear titles and headings.
Integrating Information Extraction Agents into a Tourism Recommender System
NASA Astrophysics Data System (ADS)
Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente
Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and obtain information about third-party services that are not offered by service providers inside the system.
A Web-Based Data-Querying Tool Based on Ontology-Driven Methodology and Flowchart-Based Model
Ping, Xiao-Ou; Chung, Yufang; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei
2013-01-01
Background Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. Objective The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. Methods The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. Results In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, “degree of liver damage,” “degree of liver damage when applying a mutually exclusive setting,” and “treatments for liver cancer”) was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. Conclusions The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks. PMID:25600078
WebGIVI: a web-based gene enrichment analysis and visualization tool.
Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J
2017-05-04
A major challenge of high-throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes and informative terms (iTerms) that are obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool (http://raven.anr.udel.edu/webgivi/) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image, which is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI. The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php.
Performance measurement integrated information framework in e-Manufacturing
NASA Astrophysics Data System (ADS)
Teran, Hilaida; Hernandez, Juan Carlos; Vizán, Antonio; Ríos, José
2014-11-01
The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied through decision support in e-Manufacturing environment. Its application improves the necessary interoperability in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and PM-Web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the model.
Semantic integration of information about orthologs and diseases: the OGO system.
Miñarro-Gimenez, Jose Antonio; Egaña Aranguren, Mikel; Martínez Béjar, Rodrigo; Fernández-Breis, Jesualdo Tomás; Madrid, Marisa
2011-12-01
Semantic Web technologies like RDF and OWL are currently applied in the life sciences to improve knowledge management by integrating disparate information. Many of the systems that perform such tasks, however, only offer a SPARQL query interface, which is difficult for life scientists to use. We present the OGO system, which consists of a knowledge base that integrates information about orthologous sequences and genetic diseases, providing an easy-to-use, ontology-constraint-driven query interface. This interface allows users to define SPARQL queries through a graphical process, therefore not requiring SPARQL expertise. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Rodicio, Héctor García
2015-01-01
When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…
ERIC Educational Resources Information Center
DeSchryver, Michael
2012-01-01
This dissertation utilized a multiple case study design to explore how advanced learners synthesize information about ill-structured topics when reading-to-learn and reading-to-do on the Web. Eight graduate students provided data in the form of think-alouds, interviews, screen video, digital trails, and task artifacts. Data analysis was based on…
ERIC Educational Resources Information Center
Hwang, Gwo-Jen; Wu, Po-Han; Chen, Chi-Chang
2012-01-01
In this paper, an online game was developed in the form of a competitive board game for conducting web-based problem-solving activities. The participants of the game determined their move by throwing a dice. Each location of the game board corresponds to a gaming task, which could be a web-based information-searching question or a mini-game; the…
Autonomous Learning through Task-Based Instruction in Fully Online Language Courses
ERIC Educational Resources Information Center
Lee, Lina
2016-01-01
This study investigated the affordances for autonomous learning in a fully online learning environment involving the implementation of task-based instruction in conjunction with Web 2.0 technologies. To that end, four-skill-integrated tasks and digital tools were incorporated into the coursework. Data were collected using midterm reflections,…
Seahawk: moving beyond HTML in Web-based bioinformatics analysis.
Gordon, Paul M K; Sensen, Christoph W
2007-06-18
Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
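As a rough illustration of the XSLT-based import step (our sketch; the target schema below is a simplified stand-in, not the actual MOBY-S format), lifting user data out of an HTML page might look like:

```python
# Sketch: an XSLT style sheet lifts plain user data (an HTML snippet) into
# a structured XML record, mirroring Seahawk's data-import mechanism.
from lxml import etree

xslt = etree.XML(b"""
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <mobyData>
      <sequence><xsl:value-of select="//pre[1]"/></sequence>
    </mobyData>
  </xsl:template>
</xsl:stylesheet>""")

doc = etree.XML(b"<html><body><pre>MAFLKVV...</pre></body></html>")
print(etree.tostring(etree.XSLT(xslt)(doc), pretty_print=True).decode())
```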
Seahawk: moving beyond HTML in Web-based bioinformatics analysis
Gordon, Paul MK; Sensen, Christoph W
2007-01-01
Background Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. Results We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. Conclusion As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer. PMID:17577405
Optimality of the basic colour categories for classification
Griffin, Lewis D
2005-01-01
Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219
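A minimal sketch of the odd-one-out decision using colour alone might look like the following; the "images" are random stand-ins and the bin count is an arbitrary choice of ours, not one of the category systems tested in the study:

```python
# Sketch: bin each image's pixels into colour categories, then pick the
# image whose histogram is farthest from the other two.
import numpy as np

def histogram(pixels, bins=8):
    """Quantize RGB pixels (N x 3, values 0-255) into bins**3 categories."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

def odd_one_out(triple):
    hists = [histogram(img) for img in triple]
    dist = lambda a, b: np.abs(a - b).sum()
    # score each image by its summed distance to the other two
    scores = [sum(dist(hists[i], hists[j]) for j in range(3) if j != i)
              for i in range(3)]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
reddish = lambda: rng.normal([200, 60, 60], 20, (1000, 3)).clip(0, 255)
bluish = rng.normal([60, 60, 200], 20, (1000, 3)).clip(0, 255)
print(odd_one_out([reddish(), reddish(), bluish]))  # -> 2
```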
ERIC Educational Resources Information Center
Kawka, Marta; Larkin, Kevin M.; Danaher, Patrick
2012-01-01
This paper explores the implementation of a Flickr (Web 2.0 photo sharing software) learning task in a first year primary education course. The context for the task was a Multiliteracies course where students designed digital media activities for later use with primary age students. The Flickr task was constructed to determine how a learning…
Using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.
2016-12-01
Cloud based infrastructure may offer several key benefits of scalability, built in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Service, Microsoft Azure, Google Cloud, etc.) and private (Open Stack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using clouds services running on Amazon Web Services.
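One concrete benefit of object storage for array data is ranged reads; as a hedged sketch (the bucket and key names below are hypothetical), fetching only part of an object from S3 looks like:

```python
# Sketch: read just a byte range of a large file from S3 instead of
# downloading the whole object. Bucket/key are invented placeholders.
import boto3

s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="example-earth-science",          # hypothetical bucket
    Key="mod13q1/ndvi_2016_tile_h08v05.h5",  # hypothetical key
    Range="bytes=0-8191",                    # e.g. just the file header
)
header = resp["Body"].read()
print(len(header), "bytes fetched without downloading the full object")
```

API-compatible client libraries like those mentioned above would issue such ranged reads under the hood, so existing HDF5/NetCDF4 code need not change.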
Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A
2001-01-01
In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has been also reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space.
Jones, Josette; Schilling, Katherine; Pesut, Daniel
2011-01-01
The purpose of this study was to answer the following two questions: What are clinical nurses' rationales for their approaches to finding patient educational materials on the web? What are the perceived barriers and benefits associated with the use of web-based information resources for patient education in the context of nursing clinical practice? Over 179 individual data units were analyzed to understand clinical nurses' rationales for their approaches to finding patient educational materials on the web. Rationales were defined as those underlying catalysts or activators leading to an information need. Analyses found that the primary reasons why clinical nurses conducted web-based information searches included direct patient requests (9 requests), colleague requests (6), building patient materials collections (4), patients' family requests (3), routine teaching (1), personal development (1), and staff development (1). From these data, four broad themes emerged: professional, personal, technology, and organizational reasons for selecting information resources. Content analysis identified 306 individual data units representing either 'benefits' (178 units) or 'barriers' (128) to the nurses' use of web resources for on-unit patient care. Inter-rater reliability was assessed and found to be excellent (r = 0.943 to 0.961). The primary themes that emerged as barriers to the use of web-based resources included: 1) time requirements to perform a search, 2) nurses' experience and knowledge of the resources or required technology, 3) specific characteristics of individual electronic information resources, and 4) organizational procedures and policies. Three primary themes representing the benefits of using web-based resources were also identified: 1) past experiences and knowledge of a specific resource or the required technologies, 2) availability and accessibility on the unit, and 3) specific characteristics of individual information tools. In many cases, nurses commented on specific characteristics or features of favorite information resources. Favorite sites included a variety of reputable health care organizations that presented content in text, audio, and/or video. In addition, such sites were described as easy to read and provided patient-focused information or specific content such as toll-free telephone contact numbers. Information searching is the interaction between and among information users and computer-based information systems. Information seeking is becoming an important part of the knowledge work of nurses, and it intersects with the field of human-computer interaction (HCI), which focuses on all aspects of human and computer interactions. Users of an information system are understood as "actors" in situations, with a set of skills and shared practices based on work experiences with others. Designing better tools and developing information searching strategies that support, extend, and transform practices begins by asking: Who are the users? What are the tasks? What is the interplay between the technology and the organization of the task? This study contributes fundamental data and information about the rationales nurses use in information seeking tasks. In addition, it provides empirical evidence regarding barriers and benefits of information seeking in the context of patient education needs in inpatient clinical settings.
Web-based integrated public healthcare information system of Korea: development and performance.
Ryu, Seewon; Park, Minsu; Lee, Jaegook; Kim, Sung-Soo; Han, Bum Soo; Mo, Kyoung Chun; Lee, Hyung Seok
2013-12-01
The Web-based integrated public healthcare information system (PHIS) of Korea was planned and developed from 2005 to 2010, and it is being used in 3,501 regional health organizations. This paper introduces and discusses the development and performance of the system. We reviewed and examined documents about the development process and performance of the newly integrated PHIS. The resources we analyzed included the national plan for public healthcare, the information strategy for PHIS, and usage and performance reports of the system. The integrated PHIS included 19 functional business areas, 47 detailed health programs, and 48 inter-organizational tasks. The new PHIS improved the efficiency and effectiveness of the business process and inter-organizational business, and enhanced user satisfaction. Economic benefits were obtained from five categories: labor, health education and monitoring, clinical information management, administration and civil service, and system maintenance. The system was certified by a patent from the Korean Intellectual Property Office and accredited under ISO 9001. It was also reviewed and received preliminary comments about its originality, advancement, and business applicability from the Patent Cooperation Treaty. It has been found to enhance the quality of policy decision-making about regional healthcare at the self-governing local government level. PHIS, a Web-based integrated system, has contributed to the improvement of regional healthcare services of Korea. However, when it comes to an appropriate evolution, the needs and changing environments of community-level healthcare service and IT infrastructure should be analyzed properly in advance.
Web-Based Integrated Public Healthcare Information System of Korea: Development and Performance
Park, Minsu; Lee, Jaegook; Kim, Sung-Soo; Han, Bum Soo; Mo, Kyoung Chun; Lee, Hyung Seok
2013-01-01
Objectives The Web-based integrated public healthcare information system (PHIS) of Korea was planned and developed from 2005 to 2010, and it is being used in 3,501 regional health organizations. This paper introduces and discusses the development and performance of the system. Methods We reviewed and examined documents about the development process and performance of the newly integrated PHIS. The resources we analyzed included the national plan for public healthcare, the information strategy for PHIS, and usage and performance reports of the system. Results The integrated PHIS included 19 functional business areas, 47 detailed health programs, and 48 inter-organizational tasks. The new PHIS improved the efficiency and effectiveness of the business process and inter-organizational business, and enhanced user satisfaction. Economic benefits were obtained from five categories: labor, health education and monitoring, clinical information management, administration and civil service, and system maintenance. The system was certified by a patent from the Korean Intellectual Property Office and accredited under ISO 9001. It was also reviewed and received preliminary comments about its originality, advancement, and business applicability from the Patent Cooperation Treaty. It has been found to enhance the quality of policy decision-making about regional healthcare at the self-governing local government level. Conclusions PHIS, a Web-based integrated system, has contributed to the improvement of regional healthcare services of Korea. However, when it comes to an appropriate evolution, the needs and changing environments of community-level healthcare service and IT infrastructure should be analyzed properly in advance. PMID:24523997
NASA Astrophysics Data System (ADS)
Esparza, Javier
In many areas of computer science entities can “reproduce”, “replicate”, or “create new instances”. Paramount examples are threads in multithreaded programs, processes in operating systems, and computer viruses, but many others exist: procedure calls create new incarnations of the callees, web crawlers discover new pages to be explored (and so “create” new tasks), divide-and-conquer procedures split a problem into subproblems, and leaves of tree-based data structures become internal nodes with children. For lack of a better name, I use the generic term systems with process creation to refer to all these entities.
Wildeboer, Gina; Kelders, Saskia M; van Gemert-Pijnen, Julia E W C
2016-12-01
Research has shown that web-based interventions concerning mental health can be effective, although there is a broad range in effect sizes. Why some interventions are more effective than others is not clear. Persuasive technology is one of the aspects which has a positive influence on changing attitude and/or behavior, and can contribute to better outcomes. According to the Persuasive Systems Design Model there are various principles that can be deployed. It is unknown whether the number and combinations of principles used in a web-based intervention affect the effectiveness. Another issue in web-based interventions is adherence. Little is known about the relationship of adherence on the effectiveness of web-based interventions. This study examines whether there is a relationship between the number and combinations of persuasive technology principles used in web-based interventions and the effectiveness. Also the influence of adherence on effectiveness of web-based interventions is investigated. This study elaborates on the systematic review by [37] and therefore the articles were derived from that study. Only web-based interventions were included that were intended to be used on more than one occasion and studies were excluded when no information on adherence was provided. 48 interventions targeted at mental health were selected for the current study. A within-group (WG) and between-group (BG) meta-analysis were performed and subsequently subgroup analyses regarding the relationship between the number and combinations of persuasive technology principles and effectiveness. The influence of adherence on the effectiveness was examined through a meta-regression analysis. For the WG meta-analysis 40 treatment groups were included. The BG meta-analysis included 19 studies. The mean pooled effect size in the WG meta-analysis was large and significant (Hedges' g=0.94), while for the BG meta-analysis this was moderate to large and significant (Hedges' g=0.78) in favor of the web-based interventions. With regard to the number of persuasive technology principles, the differences between the effect sizes in the subgroups were significant in the WG subgroup analyses for the total number of principles and for the number of principles in the three categories Primary Task Support, Dialogue Support, and Social Support. In the BG subgroup analyses only the difference in Primary Task Support was significant. An increase in the total number of principles and Dialogue Support principles yielded larger effect sizes in the WG subgroup analysis, indicating that more principles lead to better outcomes. The number of principles in the Primary Task Support (WG and BG) and Social Support (WG) did not show an upward trend but had varying effect sizes. We identified a number of combinations of principles that were more effective, but only in the WG analyses. The association between adherence and effectiveness was not significant. There is a relationship between the number of persuasive technology principles and the effectiveness of web-based interventions concerning mental health, however this does not always mean that implementing more principles leads to better outcomes. Regarding the combinations of principles, specific principles seemed to work well together (e.g. tunneling and tailoring; reminders and similarity; social learning and comparison), but adding another principle can diminish the effectiveness (e.g. tunneling, tailoring and reduction). 
In this study, an increase in adherence was not associated with larger effect sizes. The findings of this study can help developers to decide which persuasive principles to include to make web-based interventions more persuasive. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
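For reference, the between-group effect size reported above can be computed as follows; this is the generic textbook formula for Hedges' g, not the study's analysis code, and the example numbers are invented:

```python
# Sketch: Hedges' g is Cohen's d scaled by a small-sample correction J.
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g for two independent groups (means, SDs, sizes)."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction
    return j * d

# hypothetical intervention vs. control group on some symptom scale
print(round(hedges_g(12.1, 4.0, 40, 15.3, 4.2, 38), 2))
```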
System for selecting relevant information for decision support.
Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana
2013-01-01
We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule which is guaranteed to be suitable also for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional set containing both clinical and gene expression data.
NASA Astrophysics Data System (ADS)
Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.
2009-05-01
The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area-subsetting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit and receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high-availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to repackage area subsets, either as repackaged GRIB2 files or as values selected by ensemble component, (forecast) time, vertical level, global horizontal location, and variable, virtually a six-dimensional analysis service across the Internet.
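As an illustration of the "transmit only what you need" subsetting model (the OPeNDAP URL and variable name below are invented placeholders for a NOMADS GDS endpoint), indexing a remotely served dataset transfers only the requested slab:

```python
# Sketch: open a dataset through OPeNDAP and slice it, so only the
# requested subset crosses the network. URL/variable are hypothetical,
# and netCDF4 must be built with DAP support for remote URLs to work.
from netCDF4 import Dataset

url = "https://example.org/dods/gfs/model_run"   # hypothetical endpoint
ds = Dataset(url)

t2m = ds.variables["tmp2m"]
# one forecast time, a small lat/lon window -- only this slab is transferred
subset = t2m[0, 100:120, 200:240]
print(subset.shape, float(subset.mean()))
ds.close()
```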
Acquiring geographical data with web harvesting
NASA Astrophysics Data System (ADS)
Dramowicz, K.
2016-04-01
Many websites contain very attractive and up-to-date geographical information. This information can be extracted, stored, analyzed and mapped using web harvesting techniques. Web harvesting transforms poorly organized data from websites into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly through the use of search engines. This paper presents how to harvest geographic information from web documents using the free tool Beautiful Soup, one of the most commonly used Python libraries for pulling data from HTML and XML files. Processing one static HTML table is a relatively easy task. The more challenging task is to extract and save information from tables located in multiple, poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first one shows how to extract various types of information about the Good Country Index from multiple web pages, load it into one attribute table and map the results. The second case study shows how script tools and GIS can be used to extract information from one hundred and thirty-six websites about Nova Scotia wines. In a little more than three minutes, a database containing one hundred and six liquor stores selling these wines is created. Then the availability and spatial distribution of various types of wines (by grape type, by winery, and by liquor store) are mapped and analyzed.
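For the simple static-table case mentioned above, a Beautiful Soup harvest can be as short as the following sketch (the URL is a placeholder; the real Good Country Index pages may differ in structure):

```python
# Sketch: pull one HTML table into a list of records with Beautiful Soup.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.org/good-country-index", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

table = soup.find("table")
headers = [th.get_text(strip=True) for th in table.find_all("th")]
rows = []
for tr in table.find_all("tr")[1:]:
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(dict(zip(headers, cells)))
print(rows[:3])
```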
Mobile Cloud Computing with SOAP and REST Web Services
NASA Astrophysics Data System (ADS)
Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid
2018-05-01
Mobile computing in conjunction with mobile web services offers a promising approach to tackling the limitations of mobile devices. Mobile Web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features; given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches to implementing computational offloading as a viable solution for mitigating the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach which does not engage mobile resources for long periods. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST Web service approaches for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful Web service execution is far better than executing the same application with the SOAP Web service approach, in terms of execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP approach. In the case of energy consumption, REST execution is about 250% better than the SOAP approach.
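As a hedged sketch of the REST offloading pattern tested in the study (the endpoint and response shape are invented placeholders, not the authors' prototype), the client side reduces to a single POST:

```python
# Sketch: the device posts the matrices as JSON to a remote service and
# receives the product, instead of multiplying locally.
import requests

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]

resp = requests.post("https://example.org/api/matmul",   # hypothetical
                     json={"a": a, "b": b}, timeout=30)
resp.raise_for_status()
print(resp.json()["product"])   # e.g. [[19, 22], [43, 50]]
```

Compared with a SOAP client, there is no XML envelope to build or parse, which is one plausible source of the execution-time and energy differences the study reports.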
Judging nursing information on the WWW: a theoretical understanding.
Cader, Raffik; Campbell, Steve; Watson, Don
2009-09-01
This paper is a report of a study of the judgement processes nurses use when evaluating World Wide Web information related to nursing practice. The World Wide Web has increased the global accessibility of online health information. However, the variable nature of the quality of World Wide Web information and its perceived level of reliability may lead to misinformation. This makes demands on healthcare professionals, and on nurses in particular, to ensure that health information of reliable quality is selected for use in practice. A grounded theory approach was adopted. Semi-structured interviews and focus groups were used to collect data, between 2004 and 2005, from 20 nurses undertaking a postqualification graduate course at a university and 13 nurses from a local hospital in the United Kingdom. A theoretical framework emerged that gave insight into the judgement process nurses use when evaluating World Wide Web information. Participants broke the judgement process down into specific tasks. In addition, they used tacit, process and propositional knowledge and intuition, quasi-rational cognition and analysis to undertake these tasks. World Wide Web information cues, time available and nurses' critical skills were influencing factors in their judgement process. Addressing the issue of quality and reliability associated with World Wide Web information is a global challenge. This theoretical framework could contribute towards meeting this challenge.
Standards-based sensor interoperability and networking SensorWeb: an overview
NASA Astrophysics Data System (ADS)
Bolling, Sam
2012-06-01
The warfighter lacks a unified Intelligence, Surveillance, and Reconnaissance (ISR) environment to conduct mission planning, command and control (C2), tasking, collection, exploitation, processing, and data discovery of disparate sensor data across the ISR Enterprise. Legacy sensors and applications are not standardized or integrated for assured, universal access. Existing tasking and collection capabilities are not unified across the enterprise, inhibiting robust C2 of ISR, including near-real-time, cross-cueing operations. To address these critical needs, the National Measurement and Signature Intelligence (MASINT) Office (NMO) and partnering Combatant Commands and Intelligence Agencies are developing SensorWeb, an architecture that harmonizes heterogeneous sensor data to a common standard for users to discover, access, observe, subscribe to and task sensors. The SensorWeb initiative's long-term goal is to establish an open, commercial-standards-based, service-oriented framework to facilitate plug-and-play sensors. The current development effort will produce non-proprietary deliverables, intended as a Government Off-the-Shelf (GOTS) solution to address the U.S. and Coalition nations' inability to quickly and reliably detect, identify, map, track, and fully understand security threats and operational activities.
NASA Astrophysics Data System (ADS)
Teng, W.; Chiu, L.; Kempler, S.; Liu, Z.; Nadeau, D.; Rui, H.
2006-12-01
Using NASA satellite remote sensing data from multiple sources for hydrologic applications can be a daunting task and requires a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. In order to facilitate such investigations, the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure or "Giovanni," which supports a family of Web interfaces (instances) that allow users to perform interactive visualization and analysis online without downloading any data. Two such Giovanni instances are particularly relevant to hydrologic applications: the Tropical Rainfall Measuring Mission (TRMM) Online Visualization and Analysis System (TOVAS) and the Agricultural Online Visualization and Analysis System (AOVAS), both highly popular and widely used for a variety of applications, including those related to several NASA Applications of National Priority, such as Agricultural Efficiency, Disaster Management, Ecological Forecasting, Homeland Security, and Public Health. Dynamic, context-sensitive Web services provided by TOVAS and AOVAS enable users to seamlessly access NASA data from within, and deeply integrate the data into, their local client environments. One example is between TOVAS and Florida International University's TerraFly, a Web-enabled system that serves a broad segment of the research and applications community, by facilitating access to various textual, remotely sensed, and vector data. Another example is between AOVAS and the U.S. Department of Agriculture Foreign Agricultural Service (USDA FAS)'s Crop Explorer, the primary decision support tool used by FAS to monitor the production, supply, and demand of agricultural commodities worldwide. AOVAS is also part of GES DISC's Agricultural Information System (AIS), which can operationally provide satellite remote sensing data products (e.g., near-real-time rainfall) and analysis services to agricultural users. AIS enables the remote, interoperable access to distributed data, by using the GrADS Data Server (GDS) and the Open Geospatial Consortium (OGC)-compliant MapServer. The latter allows the access of AIS data from any OGC-compliant client, such as the Earth-Sun System Gateway (ESG) or Google Earth. The Giovanni system is evolving towards a Service-Oriented Architecture and is highly customizable (e.g., adding new products or services), thus availing the hydrologic applications user community of Giovanni's simple-to-use and powerful capabilities to improve decision-making.
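Any OGC-compliant client can pull AIS map data with a standard WMS GetMap request. A minimal sketch, assuming a hypothetical server URL and layer name:

```python
# Minimal sketch of fetching a map image from an OGC-compliant MapServer
# endpoint, as AIS exposes to OGC clients. The server URL and layer name
# are hypothetical placeholders.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "trmm_rainfall",             # hypothetical layer
    "SRS": "EPSG:4326", "BBOX": "-180,-50,180,50",
    "WIDTH": "720", "HEIGHT": "200", "FORMAT": "image/png",
}
resp = requests.get("https://giovanni.example.org/wms", params=params, timeout=60)
with open("rainfall.png", "wb") as f:
    f.write(resp.content)
```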
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks offer modelers new ways to develop model applications, providing a wide range of software components to handle common modeling tasks such as managing space and time, distributing computational tasks in parallel processing environments, performing input/output and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, in the form of a new modeling markup language and a minimalist application programming interface that provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plug-ins, establishes the model I/O and executes the model defined by the modeling XML and the accompanying plug-ins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plug-ins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but also has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about model configurations such as input/output requirements, applicable space and time scales, and underlying modeling processes. The NextFrAMES run-time engine is also designed to tap into web-enabled data services directly, so it can be incorporated into complex workflows to implement end-to-end applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an end-to-end application that delivers highly processed information to end users. The end-to-end application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Observing System of Systems; the State of the Global Water System can therefore be viewed as a test case for the interoperability of the incorporated web service components.
A pilot test of a tailored mobile and web-based diabetes messaging system for adolescents.
Mulvaney, Shelagh A; Anders, Shilo; Smith, Annie K; Pittel, Eric J; Johnson, Kevin B
2012-03-01
We conducted a pilot trial of a new mobile and web-based intervention to improve diabetes adherence. The text messaging system was designed to motivate and remind adolescents about diabetes self-care tasks. Text messages were tailored according to individually-reported barriers to diabetes self-care. A total of 23 adolescents with type 1 diabetes used the system for a period of three months. On average, they received 10 text messages per week (range 8-12). A matched historical control group from the same clinic was used for comparison. After three months, system users rated the content, usability and experiences with the system, which were very favourable. Comparison of the intervention and control groups indicated a significant interaction between group and time. Both groups had similar HbA(1c) levels at baseline. After three months, the mean HbA(1c) level in the intervention group was unchanged (8.8%), but the mean level in the control group was significantly higher (9.9%), P = 0.006. The results demonstrate the feasibility of the messaging system, user acceptance and a promising effect on glycaemic control. Integrating this type of messaging system with online educational programming could prove to be beneficial.
A Web Terminology Server Using UMLS for the Description of Medical Procedures
Burgun, Anita; Denier, Patrick; Bodenreider, Olivier; Botti, Geneviève; Delamarre, Denis; Pouliquen, Bruno; Oberlin, Philippe; Lévéque, Jean M.; Lukacs, Bertrand; Kohler, François; Fieschi, Marius; Le Beux, Pierre
1997-01-01
The Model for Assistance in the Orientation of a User within Coding Systems (MAOUSSC) project has been designed to provide a representation for medical and surgical procedures that allows several applications to be developed from several viewpoints. It is based on a conceptual model, a controlled set of terms, and Web server development. The design includes the UMLS knowledge sources associated with additional knowledge about medico-surgical procedures. The model was implemented using a relational database. The authors developed a complete interface for the Web presentation, with the intermediary layer being written in Perl. The server has been used for the representation of medico-surgical procedures that occur in the discharge summaries of the national survey of hospital activities that is performed by the French Health Statistics Agency in order to produce inpatient profiles. The authors describe the current status of the MAOUSSC server and discuss their interest in using such a server to assist in the coordination of terminology tasks and in the sharing of controlled terminologies. PMID:9292841
SPARQL Assist language-neutral query composer
2012-01-01
Background SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. Results We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. Conclusions To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources. PMID:22373327
SPARQL assist language-neutral query composer.
McCarthy, Luke; Vandervalk, Ben; Wilkinson, Mark
2012-01-25
SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources.
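The kind of query SPARQL Assist helps compose resolves ontology terms through their language-tagged labels rather than their opaque identifiers. A minimal sketch using the SPARQLWrapper library against a placeholder endpoint:

```python
# Minimal sketch: match ontology terms by multi-lingual rdfs:label values
# instead of opaque identifiers. The endpoint URL is a placeholder.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://sparql.example.org/endpoint")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?term ?label WHERE {
        ?term rdfs:label ?label .
        FILTER (lang(?label) = "en" && CONTAINS(LCASE(STR(?label)), "kinase"))
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["term"]["value"], "->", row["label"]["value"])
```

Swapping the language tag (e.g. "en" for "fr") is all that language-neutral lookup requires of the query; type-ahead tooling automates exactly this kind of label-to-identifier resolution.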
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Federated web-accessible clinical data management within an extensible neuroimaging database.
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
2010-12-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site.
Geographic Information Systems and Web Page Development
NASA Technical Reports Server (NTRS)
Reynolds, Justin
2004-01-01
The Facilities Engineering and Architectural Branch (FEAB) is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980, entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is essentially database management software applied to spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. Therefore, I set out to create an Access database that translated onto the web using Internet Explorer as the foundation. After some programming, it was possible to view AutoCAD files and other GIS-related applications in Internet Explorer, while providing the user with a variety of editing commands and setting options. I was also given the task of launching a divisional website using Macromedia Flash and other web-development programs.
On the rapid and efficient divulgation of monitoring results in landslide emergency scenarios
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Allasia, Paolo; Manconi, Andrea; Bertolo, Davide
2014-05-01
In recent decades, the availability of technological systems to monitor the physical parameters that control a landslide's evolution has grown exponentially. In particular, surficial and deep-seated displacements of an unstable area, as well as meteorological or hydrological parameters, can nowadays be acquired with high spatial and temporal resolution. As a consequence, the application of complex monitoring systems produces large amounts of data. While this can be considered important progress in the field of landslide monitoring, the availability of large volumes of high-resolution, multiparametric information implies important challenges. In this context, two main criticalities are: i) the integrated management of datasets produced by different monitoring systems, and ii) the correct divulgation of monitoring results. In this work, we present the results of a real case study relevant to a complex emergency scenario, the Mont de La Saxe landslide, a large rockslide (with an estimated volume of more than 8 million cubic meters) that threatens the La Palud and Entrèves hamlets in the Courmayeur municipality (Aosta Valley, Italy). We developed a web-based system based on the ADVICE algorithm (Allasia et al., 2013) in order to manage several data sources. The system collects, analyzes and publishes the results obtained by the monitoring instrumentation in near real time at each new measurement cycle. Moreover, collecting all the data in a single web-based platform reduces the problems of compatibility amongst different monitoring systems, which usually rely on customized software for data processing, delaying comparative analysis amongst different data sources. This is a crucial task for decision makers, in particular during emergency phases. In addition, the developed web-based platform addresses another important task, often not considered and/or underestimated: the divulgation of landslide monitoring results. Starting from the analysis of different landslide scenarios, we identified and classified the people belonging to emergency management teams into several categories according to their role and their level of knowledge of landslides and/or monitoring systems. Our aim is to define standards for sharing the monitoring results, in order to disseminate information about the recent evolution of the landslide, as well as the level of criticality, among all the people involved (scientists, technicians, civil protection operators, decision makers, politicians, press, population). This task is particularly critical during emergency phases, when a correct understanding of the situation (in particular for the population) is the first step towards successful emergency management. References: Allasia, P.; Manconi, A.; Giordan, D.; Baldo, M.; Lollino, G. ADVICE: A New Approach for Near-Real-Time Monitoring of Surface Displacements in Landslide Hazard Scenarios. Sensors 2013, 13, 8285-8302.
NASA Astrophysics Data System (ADS)
Feeley, J.; Zajic, J.; Metcalf, A.; Baucom, T.
2009-12-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is planning post-launch activities to calibrate the NPP sensors and validate Sensor Data Records (SDRs). The IPO has developed a web-based data collection and visualization tool in order to effectively collect, coordinate, and manage the calibration and validation tasks for the OMPS, ATMS, CrIS, and VIIRS instruments. This tool is accessible to the multi-institutional Cal/Val teams consisting of the Prime Contractor and Government Cal/Val leads along with the NASA NPP Mission team, and is used for mission planning and identification/resolution of conflicts between sensor activities. Visualization techniques aid in displaying task dependencies, including prerequisites and exit criteria, allowing for the identification of a critical path. This presentation will highlight how the information is collected, displayed, and used to coordinate the diverse instrument calibration/validation teams.
Large area sheet task. Advanced dendritic web growth development. [silicon films
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Frantti, E.; Schruben, J.
1981-01-01
The development of a silicon dendritic web growth machine is discussed. Several refinements to the sensing and control equipment for melt replenishment during web growth are described and several areas for cost reduction in the components of the prototype automated web growth furnace are identified. A circuit designed to eliminate the sensitivity of the detector signal to the intensity of the reflected laser beam used to measure melt level is also described. A variable speed motor for the silicon feeder is discussed which allows pellet feeding to be accomplished at a rate programmed to match exactly the silicon removed by web growth.
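The programmed pellet feeding amounts to a simple mass balance: the feed rate must equal the silicon mass carried away by the growing web. A back-of-the-envelope sketch with assumed, purely illustrative web dimensions and growth velocity:

```python
# Back-of-the-envelope sketch of the mass balance behind programmed pellet
# feeding: feed rate must match the silicon removed by web growth.
# Web width, thickness and growth velocity are assumed illustrative values.
RHO_SI = 2.33         # g/cm^3, density of solid silicon
width_cm = 3.5        # assumed web width
thickness_cm = 0.015  # assumed web thickness (150 micrometers)
growth_cm_min = 2.0   # assumed growth velocity

removal_g_min = RHO_SI * width_cm * thickness_cm * growth_cm_min
print(f"required pellet feed rate: {removal_g_min:.3f} g/min")  # ~0.245 g/min
```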
76 FR 66125 - Petition for Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-25
..., as well as testing. BNSF states it created a Web-based software application that it characterizes as.... Specifically, BNSF is proposing to use Web-based software to satisfy the ``hands-on'' portion of training... scenario. The employee must maneuver the avatar in the virtual setting and perform all inspection tasks...
Interactive Information Organization: Techniques and Evaluation
2001-05-01
information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a...likelihood of being relevant to the user's request. The majority of today's Web search engines follow this scenario. The ordering of documents in the
ERIC Educational Resources Information Center
Chizmar, John F.; Williams, David B.
2001-01-01
Uses classroom experience and data from a faculty survey to explore what faculty want from instructional technology. Presents several assertions, such as "faculty want instructional technology driven by pedagogical goals" and "faculty desire Web-based tools designed for a specific pedagogical task as opposed to a Swiss-Army-knife Web tool designed…
Automating Visualization Service Generation with the WATT Compiler
NASA Astrophysics Data System (ADS)
Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.
2007-12-01
As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE users submit Tcl code, retrieve its C++ translation with various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.
ERIC Educational Resources Information Center
Burzynska, Kamila
2012-01-01
The Internet provides a powerful digital learning environment for language acquisition and noticing. Thus implementation of challenging tasks to be solved by exploring the Web may sound appealing. The primary idea of the WebQuest project emphasizes data collection. The idea of the TalenQuest, however, goes beyond this traditional concept so as to…
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
2013-01-01
Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
Data Reduction and Analysis from the SOHO Spacecraft
NASA Technical Reports Server (NTRS)
Ipavich, F. M.
1999-01-01
This paper presents a final report on Data Reduction and Analysis from the SOHO Spacecraft from November 1, 1996-October 31, 1999. The topics include: 1) Instrumentation; 2) Health of Instrument; 3) Solar Wind Web Page; 4) Data Analysis; and 5) Science. This paper also includes appendices describing routine SOHO (Solar and Heliospheric Observatory) tasks, SOHO Science Procedures in the UMTOF (University Mass Determining Time-of-Flight) System, SOHO Programs on UMTOF and a list of publications.
ERIC Educational Resources Information Center
Thompson, Bruce
Web-based statistical instruction, like all statistical instruction, ought to focus on teaching the essence of the research endeavor: the exercise of reflective judgment. Using the framework of the recent report of the American Psychological Association (APA) Task Force on Statistical Inference (Wilkinson and the APA Task Force on Statistical…
The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis
Rampp, Markus; Soddemann, Thomas; Lederer, Hermann
2006-01-01
We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society available at (click ‘Start Toolkit’). PMID:16844980
An overview of the web-based Google Earth coincident imaging tool
Chander, Gyanesh; Kilough, B.; Gowda, S.
2010-01-01
The Committee on Earth Observing Satellites (CEOS) Visualization Environment (COVE) tool is a browser-based application that leverages Google Earth web to display satellite sensor coverage areas. The analysis tool can also be used to identify near simultaneous surface observation locations for two or more satellites. The National Aeronautics and Space Administration (NASA) CEOS System Engineering Office (SEO) worked with the CEOS Working Group on Calibration and Validation (WGCV) to develop the COVE tool. The CEOS member organizations are currently operating and planning hundreds of Earth Observation (EO) satellites. Standard cross-comparison exercises between multiple sensors to compare near-simultaneous surface observations and to identify corresponding image pairs are time-consuming and labor-intensive. COVE is a suite of tools that have been developed to make such tasks easier.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
Narrowing the scope of failure prediction using targeted fault load injection
NASA Astrophysics Data System (ADS)
Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.
2018-05-01
As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
NaviCell Web Service for network-based data visualization.
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei
2015-07-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
2012-01-01
This paper presents the rationale and methods for a randomized controlled evaluation of web-based training in motivational interviewing, goal setting, and behavioral task assignment. Web-based training may be a practical and cost-effective way to address the need for large-scale mental health training in evidence-based practice; however, there is a dearth of well-controlled outcome studies of these approaches. For the current trial, 168 mental health providers treating post-traumatic stress disorder (PTSD) were assigned to web-based training plus supervision, web-based training, or training-as-usual (control). A novel standardized patient (SP) assessment was developed and implemented for objective measurement of changes in clinical skills, while on-line self-report measures were used for assessing changes in knowledge, perceived self-efficacy, and practice related to cognitive behavioral therapy (CBT) techniques. Eligible participants were all actively involved in mental health treatment of veterans with PTSD. Study methodology illustrates ways of developing training content, recruiting participants, and assessing knowledge, perceived self-efficacy, and competency-based outcomes, and demonstrates the feasibility of conducting prospective studies of training efficacy or effectiveness in large healthcare systems. PMID:22583520
NaviCell Web Service for network-based data visualization
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P. A.; Barillot, Emmanuel; Zinovyev, Andrei
2015-01-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of ‘omics’ data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. PMID:25958393
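A hypothetical sketch of what such server-mode automation looks like over plain HTTP: open a session, import a data table, and request map staining. The URL, command names and payload fields here are illustrative inventions, not NaviCell's actual API (the project provides its own Python and R bindings for that).

```python
# Hypothetical sketch of driving a map server in server mode via RESTful
# (standard HTTP) calls. Endpoint, command names and payload fields are
# invented for illustration; use the project's Python binding in practice.
import requests

server = "https://navicell.example.org/api"            # placeholder endpoint
expression = {"TP53": -1.2, "MDM2": 0.8, "CDKN1A": 1.5}  # toy expression values

session = requests.post(f"{server}/session", timeout=30).json()  # open a session
requests.post(
    f"{server}/command",
    json={"session_id": session["id"], "command": "import_datatable",
          "data": expression},
    timeout=30,
)
requests.post(
    f"{server}/command",
    json={"session_id": session["id"], "command": "map_staining",
          "table": "expression"},
    timeout=30,
)
```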
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that daily runs hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which efficiently allows multi-terabyte tasks to run without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Using qualitative studies to improve the usability of an EMR.
Rose, Alan F; Schnipper, Jeffrey L; Park, Elyse R; Poon, Eric G; Li, Qi; Middleton, Blackford
2005-02-01
The adoption of electronic medical records (EMRs) and user satisfaction are closely associated with the system's usability. To improve the usability of a results management module of a widely deployed web-based EMR, we conducted two qualitative studies that included multiple focus group and field study sessions. Qualitative research can help focus attention on user tasks and goals and identify patterns of care that can be visualized through task modeling exercises. Findings from both studies raised issues with the amount and organization of information in the display, interference with workflow patterns of primary care physicians, and the availability of visual cues and feedback. We used the findings of these studies to recommend design changes to the user interface of the results management module.
Ultrasonic flaw detection in a monorail box beam
NASA Astrophysics Data System (ADS)
Zheng, Peng; Greve, David W.; Oppenheim, Irving J.
2009-03-01
A steel box beam in a monorail application is constructed with an epoxy grout wearing surface, precluding visual inspection of its top flange. This paper describes a sequence of experimental research tasks to develop an ultrasonic system to detect flaws (such as fatigue cracks) in that flange, and the results of a field test to demonstrate system performance. The problem is constrained by the fact that the flange is exposed only along its longitudinal edges, and by the fact that permanent installation of transducers at close spacing was deemed to be impractical. The system chosen for development, after experimental comparison of alternate technologies, features angle-beam ultrasonic transducers with fluid coupling to the flange edge; the emitting transducers create transverse waves that travel diagonally across the width of the flange, where an array of receiving transducers detects flaw reflections and flaw shadows. The system rolls along the box beam, surveying (screening) the top flange for the presence of flaws. In a first research task, conducted on a full-size beam specimen, we compared waves generated from different transducer locations, either the flange edge or the web face, and at different frequency ranges. At relatively low frequencies, such as 100 kHz, we observed Lamb wave modes, and at higher frequency, in the MHz range, we observed nearly longitudinal waves with trailing pulses. In all cases we observed little attenuation by the wearing surface and little influence of reflection at the web-flange joints. At the conclusion of this task we made the design decision to use edge-mounted transducers at relatively high frequency, with correspondingly short wavelength, for best scattering from flaws. In a second research task we conducted experiments at 55% scale on a steel plate, with machined flaws of different size, and detected flaws of target size for the intended application. We then compared the performance of bonded transducers, fluid-coupled transducers, and angle-beam (wedge) transducers; from that comparison we made the design decision to use wedges, which beam the wave to increase the scattering from flaws. We also compared the performance of wired transducers using fluid coupling to that of wireless (inductively coupled) transducers mounted permanently. Although the wireless transducers achieved flaw detection, the necessary spacing (determined experimentally) would have required an impractical number of transducers. Therefore, we made the design decision to use wedge transducers with fluid coupling. In a third research task we developed and tested a rolling system with a water channel for acoustic coupling, including a study of its sensitivity to misalignment, and in a fourth task we devised a data display to create a pattern of reflections or shadows that could be easily interpreted as evidence of a flaw. Finally, we conducted a field test on the full-size system in a region containing bolt holes, which act as a physical simulation of a flaw, and show successful detection of reflections and shadows from those holes.
76 FR 45280 - Notice of ACHP Quarterly Business Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-28
... of Chairman's Award III. Chairman's Report IV. ACHP Management Issues A. Credentials Committee Report... America's Great Outdoors D. ACHP Legislative Agenda E. Sustainability Task Force F. Web Site Update and Social Media G. Preservation Action Federal Preservation Task Force Report and Recommendations VI...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
... must include a name and a phone number. Individuals may visit the Task Force Web site at http://dtf... two minutes. Written statements in which the author does not wish to present orally may be submitted...
NASA Technical Reports Server (NTRS)
Spitzer, M. B.
1983-01-01
The objective of this program is the investigation and evaluation of the capabilities of the ion implantation process for the production of photovoltaic cells from a variety of present-day, state-of-the-art, low-cost silicon sheet materials. Task 1 of the program concerns application of ion implantation and furnace annealing to fabrication of cells made from dendritic web silicon. Task 2 comprises the application of ion implantation and pulsed electron beam annealing (PEBA) to cells made from SEMIX, SILSO, heat-exchanger-method (HEM), edge-defined film-fed growth (EFG) and Czochralski (CZ) silicon. The goals of Task 1 comprise an investigation of implantation and anneal processes applied to dendritic web. A further goal is the evaluation of surface passivation and back surface reflector formation. In this way, processes yielding the very highest efficiency can be evaluated. Task 2 seeks to evaluate the use of PEBA for various sheet materials. A comparison of PEBA to thermal annealing will be made for a variety of ion implantation processes.
Use of Web-Based Portfolios as Tools for Reflection in Preservice Teacher Education
ERIC Educational Resources Information Center
Oner, Diler; Adadan, Emine
2011-01-01
This mixed-methods study examined the use of web-based portfolios for developing preservice teachers' reflective skills. Building on the work of previous research, the authors proposed a set of reflection-based tasks to enrich preservice teachers' internship experiences. Their purpose was to identify (a) whether preservice teachers demonstrated…
ERIC Educational Resources Information Center
Golick, Douglas A.; Heng-Moss, Tiffany M.; Steckelberg, Allen L.; Brooks, David. W.; Higley, Leon G.; Fowler, David
2013-01-01
The purpose of the study was to determine whether undergraduate students receiving web-based instruction based on traditional, key character, or classification instruction differed in their performance of insect identification tasks. All groups showed a significant improvement in insect identifications on pre- and post-two-dimensional picture…
The Nature of Discourse as Students Collaborate on a Mathematics WebQuest
ERIC Educational Resources Information Center
Orme, Michelle P.; Monroe, Eula Ewing
2005-01-01
Students were audio taped while working in teams on a WebQuest. Although gender-segregated, each team included both fifth- and sixth-graders. Interactions from two tasks were analyzed according to categories (exploratory, cumulative, disputational, tutorial) defined by the Spoken Language and New Technology (SLANT) project (e.g., Wegerif &…
Creative Commons: A New Tool for Schools
ERIC Educational Resources Information Center
Pitler, Howard
2006-01-01
Technology-savvy instructors often require students to create Web pages or videos, tasks that require finding materials such as images, music, or text on the Web, reusing them, and then republishing them in a technique that author Howard Pitler calls "remixing." However, this requires both the student and the instructor to deal with often thorny…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
..., authorized the National Senior Center under 49 U.S.C. 5314(c). In recognition of the fundamental importance..., Capacity and experience for conducting face-to-face and Web-based training. IV. Proposal Submission... tasks, including capacity and experience for conducting face-to-face and Web-based...
ERIC Educational Resources Information Center
Woodruff, Allison; Rosenholtz, Ruth; Morrison, Julie B.; Faulring, Andrew; Pirolli, Peter
2002-01-01
Discussion of Web search strategies focuses on a comparative study of textual and graphical summarization mechanisms applied to search engine results. Suggests that thumbnail images (graphical summaries) can increase efficiency in processing results, and that enhanced thumbnails (augmented with readable textual elements) had more consistent…
Collier, James H; Lesk, Arthur M; Garcia de la Banda, Maria; Konagurthu, Arun S
2012-07-01
Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80,500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super.
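The quantity Super screens on is the least-squares RMSD of superposition between two equal-length fragments, computable via the Kabsch/SVD construction. A minimal NumPy sketch of that quantity (not the server's bounding filter, which avoids this full computation for most fragments):

```python
# Minimal sketch: least-squares RMSD of superposition between two
# equal-length coordinate sets, via the Kabsch/SVD construction.
import numpy as np

def superposition_rmsd(P, Q):
    """RMSD after optimally rotating/translating P (n x 3) onto Q (n x 3)."""
    P = P - P.mean(axis=0)             # center both fragments on their centroids
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)  # SVD of the 3x3 covariance matrix
    if np.linalg.det(V @ Wt) < 0:      # guard against an improper rotation
        S[-1] = -S[-1]
    # RMSD follows directly from the singular values, without forming
    # the rotation matrix explicitly.
    msd = (np.sum(P**2) + np.sum(Q**2) - 2.0 * S.sum()) / len(P)
    return np.sqrt(max(msd, 0.0))

frag_a = np.random.rand(8, 3) * 10                     # stand-in C-alpha coordinates
frag_b = frag_a + np.random.normal(0, 0.2, frag_a.shape)
print(f"RMSD: {superposition_rmsd(frag_a, frag_b):.3f} angstroms")
```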
Using Psychophysiological Sensors to Assess Mental Workload During Web Browsing.
Jimenez-Molina, Angel; Retamal, Cristian; Lira, Hernan
2018-02-03
Knowledge of the mental workload induced by a Web page is essential for improving users' browsing experience. However, continuously assessing the mental workload during a browsing task is challenging. To address this issue, this paper leverages the correlation between stimuli and physiological responses, which are measured with high-frequency, non-invasive psychophysiological sensors during very short span windows. An experiment was conducted to identify levels of mental workload through the analysis of pupil dilation measured by an eye-tracking sensor. In addition, a method was developed to classify mental workload by appropriately combining different signals (electrodermal activity (EDA), electrocardiogram, photoplethysmography (PPG), electroencephalogram (EEG), temperature and pupil dilation) obtained with non-invasive psychophysiological sensors. The results show that the Web browsing task involves four levels of mental workload. Also, by combining all the sensors, the efficiency of the classification reaches 93.7%.
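The classification step could be reproduced in outline with any off-the-shelf classifier over per-window features; a sketch under assumed inputs (the file names and feature layout are hypothetical, and the abstract does not name the classifier used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one row per short time window, columns concatenating
# features from EDA, ECG, PPG, EEG, temperature and pupil-dilation signals.
X = np.load("window_features.npy")    # shape (n_windows, n_features); assumed file
y = np.load("workload_levels.npy")    # labels 0..3 for the four workload levels

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```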
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
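Out-of-core globe rendering of this kind typically hinges on hierarchical tiling; a sketch of the standard Web-Mercator quadtree indexing, shown as generic web-mapping math rather than OpenWebGlobe 2's actual implementation:

```python
import math

def tile_for(lon, lat, zoom):
    # Standard Web-Mercator quadtree indexing: which tile covers (lon, lat)
    # at the given zoom level. Generic web-mapping math, shown for flavour.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return zoom, x, y

print(tile_for(7.59, 47.56, 12))   # tile indices at zoom 12
```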
Risin, J A
1998-01-01
The purpose of this paper is to facilitate international research of medical resources on the World Wide Web. International research involves overcoming a unique set of obstacles and challenges that do not arise when undertaking research tasks using only U.S.-based information. Utilizing the World Wide Web can help us overcome most of the constraints we would otherwise face when performing research outside of our local geography. Currently, there are a number of Internet Web sites that can assist us in breaking down the barriers to undertaking international research.
Scientific Workflows and the Sensor Web for Virtual Environmental Observatories
NASA Astrophysics Data System (ADS)
Simonis, I.; Vahed, A.
2008-12-01
Virtual observatories have matured beyond their original domain and are becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Further, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory that consists of a number of telescopes, each observing a specific part of the wave spectrum, and a collection of astronomical instruments, the Sensor Web provides a multi-eyes perspective on the current, past, as well as future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters on local, regional, and global scales. The Sensor Web concept has been well established through ongoing global research and deployment of Sensor Web middleware and standards, and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS). The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, through sharing and integration, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing, or storage peculiarities.
The vision is a set of automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks, and assembles all parts into workflows that may satisfy the query.
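As a concrete flavour of the standardized interfaces involved, a sketch of a Sensor Observation Service (SOS) request in Python; the KVP parameter names follow the OGC SOS 2.0 binding, while the endpoint, offering and observed property are invented placeholders:

```python
import requests

# The KVP parameter names follow the OGC SOS 2.0 binding; the endpoint,
# offering and observed property below are invented placeholders.
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "WATER_LEVEL",
    "observedProperty": "http://example.org/properties/waterLevel",
    "temporalFilter": "om:phenomenonTime,2008-12-01T00:00:00Z/2008-12-02T00:00:00Z",
}
resp = requests.get("https://sensors.example.org/sos", params=params, timeout=30)
print(resp.status_code, resp.headers.get("Content-Type"))
```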
Roudsari, AV; Gordon, C; Gray, JA Muir
2001-01-01
Background In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round-the-clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge-based communities, each concerned with specific clinical and other health-related topics. Objectives This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Methods Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Results Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in-one knowledge-hallmarking instrument for the National electronic Library for Health is also proposed, based on the reviewed quality benchmarking systems. Conclusions Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947
Popova, A Yu; Kuzkin, B P; Demina, Yu V; Dubyansky, V M; Kulichenko, A N; Maletskaya, O V; Shayakhmetov, O Kh; Semenko, O V; Nazarenko, Yu V; Agapitov, D S; Mezentsev, V M; Kharchenko, T V; Efremenko, D V; Oroby, V G; Klindukhov, V P; Grechanaya, T V; Nikolaevich, P N; Tesheva, S Ch; Rafeenko, G K
2015-01-01
To improve sanitary and epidemiological surveillance at the Olympic Games, a GIS was developed for monitoring objects and situations in the Sochi region. The system is based on the software package ArcGIS (version 10.2, server), the Apache Web server, and software developed in Java. During execution, the following tasks were solved: stratification of the Olympic region by individual and aggregate epidemiological risk of OCI of various etiologies; ranking of epidemiologically important facilities by sanitary and hygienic conditions; and monitoring of infectious diseases (in real time, according to the preliminary diagnosis). GIS monitoring has shown its effectiveness: information received from various sources was focused on one portal and was available in real time to all the specialists involved in ensuring epidemiological well-being during the Olympic Games in Sochi.
Nelson, Victoria; Nelson, Victoria Ruth; Li, Fiona; Green, Susan; Tamura, Tomoyoshi; Liu, Jun-Min; Class, Margaret
2008-11-06
The Walter Reed National Surgical Quality Improvement Program Data Transfer web module integrates with medical and surgical information systems, and leverages outside standards, such as the National Library of Medicine's RxNorm, to process surgical and risk assessment data. Key components of the project included a needs assessment with nurse reviewers and a data analysis for federated (standards were locally controlled) data sources. The resulting interface streamlines nurse reviewer workflow by integrating related tasks and data.
ANALYTiC: An Active Learning System for Trajectory Classification.
Soares Junior, Amilcar; Renso, Chiara; Matwin, Stan
2017-01-01
The increasing availability and use of positioning devices have resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process.
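Pool-based uncertainty sampling is a representative active-learning strategy of the kind such a tool can guide; a generic sketch (the abstract does not specify ANALYTiC's exact query strategies, and y_pool stands in for the human annotator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, y_pool, n_seed=20, n_queries=50):
    # Generic pool-based margin sampling; assumes the random seed set
    # contains every class at least once.
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), n_seed, replace=False))
    for _ in range(n_queries):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        proba = np.sort(clf.predict_proba(X_pool), axis=1)
        margin = proba[:, -1] - proba[:, -2]    # small margin = ambiguous
        margin[labeled] = np.inf                # never re-query a labeled item
        labeled.append(int(np.argmin(margin)))  # ask about the most ambiguous one
    return clf, labeled
```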
Mizota, Tomoko; Kurashima, Yo; Poudel, Saseem; Watanabe, Yusuke; Shichinohe, Toshiaki; Hirano, Satoshi
2018-07-01
Despite its advantages, few trainees outside of North America have access to simulation training. We hypothesized that a stepwise training method using tele-mentoring system would be an efficient technique for training in basic laparoscopic skills. Residents were randomized into two groups and trained to proficiency in intracorporeal suturing. The stepwise group (SG) practiced the task step-by-step, while the other group practiced comprehensively (CG). Each participant received weekly coaching via two-way web conferencing software. The duration of the coaching sessions and self-practice time were compared between the two groups. Twenty residents from 15 institutions participated, and all achieved proficiency. Coaching sessions using tele-mentoring system were completed without difficulties. The SG required significantly shorter coaching time per session than the CG (p = .002). There was no significant difference in self-practice time. The stepwise training method with the tele-mentoring system appears to make efficient use of surgical trainees' and trainers' time. Copyright © 2017 Elsevier Inc. All rights reserved.
Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction
Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng
2015-01-01
The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs' Synthetic Public Use File dataset of 2 million patients, which achieved over 25-fold speedup compared to sequential execution. PMID:26958172
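The on-demand cluster launch can be illustrated with the AWS SDK; a minimal boto3 sketch where the instance types, counts, bucket and script names are placeholders rather than the study's actual configuration:

```python
import boto3

# Instance types, counts, bucket and script names are placeholders,
# not the configuration used in the study.
emr = boto3.client("emr", region_name="us-east-1")
resp = emr.run_job_flow(
    Name="predictive-modeling-run",
    ReleaseLabel="emr-6.15.0",
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # tear the cluster down afterwards
    },
    Steps=[{
        "Name": "feature-selection-and-classification",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/model_job.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(resp["JobFlowId"])
```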
Evaluation of a Telerehabilitation System for Community-Based Rehabilitation
Schutte, Jamie; Gales, Sara; Filippone, Ashlee; Saptono, Andi; Parmanto, Bambang; McCue, Michael
2012-01-01
The use of web-based portals, while increasing in popularity in the fields of medicine and research, is rarely reported on in community-based rehabilitation programs. A program within the Pennsylvania Office of Vocational Rehabilitation's Hiram G. Andrews Center, the Cognitive Skills Enhancement Program (CSEP), sought to enhance organization of program and participant information and communication between part- and full-time employees, supervisors and consultants. A telerehabilitation system was developed consisting of (1) a web-based portal to support a variety of clinical activities, and (2) the Versatile Integrated System for Telerehabilitation (VISYTER) video-conferencing system to support the collaboration and delivery of rehabilitation services remotely. This descriptive evaluation examines the usability of the telerehabilitation system incorporating both the portal and VISYTER. Telerehabilitation system users include CSEP staff members from three geographical locations and employed by two institutions. The IBM After-Scenario Questionnaire (ASQ) and Post-Study System Usability Questionnaire (PSSUQ), the Telehealth Usability Questionnaire (TUQ), and two demographic surveys were administered to gather both objective and subjective information. Results showed generally high levels of usability. Users commented that the telerehabilitation system improved communication, increased access to information, improved speed of completing tasks, and had an appealing interface. Areas where users would like to see improvements, including ease of accessing/editing documents and searching for information, are discussed. PMID:25945193
Using Geo-Data Corporately on the Response Phase of Emergency Management
NASA Astrophysics Data System (ADS)
Demir Ozbek, E.; Ates, S.; Aydinoglu, A. C.
2015-08-01
The response phase of emergency management is the most complex phase in the entire cycle because it requires cooperation between various actors relating to emergency sectors. A variety of geo-data is needed at the emergency response, such as existing data provided by different institutions and dynamic data collected by different sectors at the time of the disaster. The disaster event is managed according to an elaborately defined activity-actor-task-geodata cycle. In this concept, every activity of emergency response is defined by a Standard Operating Procedure that enables users to understand their tasks and the required data in any activity. In this study, a general conceptual approach for a disaster and emergency management system is developed based on the regulations, to serve applications in the Istanbul Governorship Provincial Disaster and Emergency Directorate. The approach is applied to an industrial facility explosion example. In the preparation phase, optimum ambulance locations are determined according to the general response time of ambulances to all injury cases, in addition to areas that have industrial fire risk. Management of the industrial fire case is organized according to defined actors, activities, and a working cycle that describes the required geo-data. A response scenario was prepared and performed for an industrial facility explosion event to exercise the effective working cycle of actors. This scenario provides for the corporate use of geo-data between different actors, while the data required for each task is defined to manage the industrial facility explosion event. With developing web technologies, this scenario-based approach can be effective for using geo-data corporately on the web.
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, an integration of the Grid and Sensor Web approaches is proposed [1]. The Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. The Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. The Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding the integration of the Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit. The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite, it is possible to automatically task the sensor with a re-imaging capability of less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after a trigger was alerted. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
DMSP SSJ4 Data Restoration, Classification, and On-Line Data Access
NASA Technical Reports Server (NTRS)
Wing, Simon; Bredekamp, Joseph H. (Technical Monitor)
2000-01-01
Compress and clean raw data files for permanent storage: we have identified various error conditions/types and developed algorithms to remove these errors/noises, including the more complicated noise in the newer data sets (status = 100% complete). Internet access to the compacted raw data: it is now possible to access the raw data via our web site, http://www.jhuapl.edu/Aurora/index.html. The software to read and plot the compacted raw data is also available from the same web site. Users can now download the raw data and read, plot, or manipulate the data as they wish on their own computers, and they are able to access the cleaned data sets. Internet access to the color spectrograms: this task has also been completed, and the spectrograms can be accessed from the web site mentioned above. Improve the particle precipitation region classification: the algorithm for this task has been developed and implemented, and as a result the accuracies improved; the web site now routinely distributes the results of applying the new algorithm to the cleaned data set. Mark the classification regions on the spectrograms: the software to mark the classification regions in the spectrograms has been completed and is also available from our web site.
Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values
NASA Astrophysics Data System (ADS)
Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru
Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which, attribute-values are represented by nodes, and the edge that connects two nodes in the graph represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
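The unconstrained core of the graph step can be sketched with NetworkX; note that the paper's tree must additionally contain exactly one value per attribute-type, a constraint a plain maximum spanning tree does not enforce (all relatedness scores below are invented):

```python
import networkx as nx

# Toy relatedness graph over candidate attribute-values (scores invented).
rel = {
    ("Google", "software engineer"): 0.9,
    ("Google", "Mountain View"): 0.8,
    ("software engineer", "Mountain View"): 0.7,
    ("Microsoft", "software engineer"): 0.6,
    ("Google", "Boston"): 0.2,
    ("Microsoft", "surgeon"): 0.1,
}
G = nx.Graph()
for (u, v), w in rel.items():
    G.add_edge(u, v, weight=w)

# Unconstrained core only: the paper's method further restricts the tree to
# one value per attribute-type, which maximum_spanning_tree cannot enforce.
mst = nx.maximum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))
```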
Siden, Rivka; Tamer, Helen R; Skyles, Amy J; Dolan, Christopher S; Propes, Denise J; Redic, Kimberly
2014-11-01
Results of a survey assessing trends and innovations in the use of pharmacy technicians and other nonpharmacist staff in the research pharmacy setting are reported. A Web-based survey was distributed to Internet communities of members of the American Society of Health-System Pharmacists and the University Health-System Consortium involved in investigational drug research and related practice areas. The survey collected data on the characteristics of institutions with pharmacy department staff dedicated to such research activities and the participation of pharmacists, technicians, and other staff in key areas of research pharmacy operations. Survey responses from 51 institutions were included in the data analysis. Overall, the reported distribution of assigned responsibility for most evaluated research pharmacy tasks reflected traditional divisions of pharmacist and technician duties, with technicians performing tasks subject to a pharmacist check or pharmacists completing tasks alone. However, some institutions reported allowing technicians to perform a number of key tasks without direct pharmacist supervision, primarily in the areas of inventory management and sponsor monitoring and auditing; almost half of the surveyed institutions reported technician involvement in teaching activities. In general, the reported use of "tech-check-tech" arrangements in research pharmacies was very limited. Some responding institutions reported the innovative use of nonpharmacist staff (e.g., paid interns, students and residents on rotation). Although the majority of research pharmacy tasks related to direct patient care are performed by or under the direct supervision of pharmacists, a variety of other essential tasks are typically assigned to pharmacy technicians and other nonpharmacist staff. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Andújar-Montoya, María Dolores
2017-01-01
The main causes of building defects are errors in the design and the construction phases. These causes related to construction are mainly due to the general lack of control of construction work and represent approximately 75% of the anomalies. In particular, one of the main causes of such anomalies, which end in building defects, is the lack of control over the physical variables of the work environment during the execution of tasks. Therefore, the high percentage of defects detected in buildings that have the root cause in the construction phase could be avoidable with a more accurate and efficient control of the process. The present work proposes a novel integration model based on information and communications technologies for the automation of both construction work and its management at the execution phase, specifically focused on the flat roof construction process. Roofs represent the second area where more defects are claimed. The proposed model is based on a Web system, supported by a service oriented architecture, for the integral management of tasks through the Last Planner System methodology, but incorporating the management of task restrictions from the physical environment variables by designing specific sensing systems. Likewise, all workers are integrated into the management process by Internet-of-Things solutions that guide them throughout the execution process in a non-intrusive and transparent way. PMID:28737693
Andújar-Montoya, María Dolores; Marcos-Jorquera, Diego; García-Botella, Francisco Manuel; Gilart-Iglesias, Virgilio
2017-07-22
The main causes of building defects are errors in the design and the construction phases. These causes related to construction are mainly due to the general lack of control of construction work and represent approximately 75% of the anomalies. In particular, one of the main causes of such anomalies, which end in building defects, is the lack of control over the physical variables of the work environment during the execution of tasks. Therefore, the high percentage of defects detected in buildings that have the root cause in the construction phase could be avoidable with a more accurate and efficient control of the process. The present work proposes a novel integration model based on information and communications technologies for the automation of both construction work and its management at the execution phase, specifically focused on the flat roof construction process. Roofs represent the second area where more defects are claimed. The proposed model is based on a Web system, supported by a service oriented architecture, for the integral management of tasks through the Last Planner System methodology, but incorporating the management of task restrictions from the physical environment variables by designing specific sensing systems. Likewise, all workers are integrated into the management process by Internet-of-Things solutions that guide them throughout the execution process in a non-intrusive and transparent way.
Efficiently Selecting the Best Web Services
NASA Astrophysics Data System (ADS)
Goncalves, Marlene; Vidal, Maria-Esther; Regalado, Alfredo; Yacoubi Ayadi, Nadia
Emerging technologies and linked data initiatives have motivated the publication of a large number of datasets, and provide the basis for publishing Web services and tools to manage the available data. This wealth of resources opens a world of possibilities to satisfy user requests. However, Web services may offer similar functionality while exhibiting different performance; it is therefore necessary to identify, among the Web services that satisfy a user request, the ones with the best quality. In this paper we propose a hybrid approach that combines reasoning tasks with ranking techniques to select the Web services that best implement a user request. Web service functionalities are described in terms of input and output attributes annotated with existing ontologies, non-functional properties are represented as Quality of Service (QoS) parameters, and user requests correspond to conjunctive queries whose sub-goals impose restrictions on the functionality and quality of the services to be selected. The ontology annotations are used in different reasoning tasks to infer implicit service properties and to augment the size of the service search space. Furthermore, QoS parameters are considered by a ranking metric to classify the services according to how well they meet a user's non-functional condition. We assume that all the QoS parameters of the non-functional condition are equally important, and apply the Top-k Skyline approach to select the k services that best meet this condition. Our proposal relies on a two-fold solution: a deductive-based engine that performs different reasoning tasks to discover the services that satisfy the requested functionality, and an efficient implementation of the Top-k Skyline approach to compute the top-k services that meet the majority of the QoS constraints. Our Top-k Skyline solution exploits the properties of the Skyline Frequency metric and identifies the top-k services by analyzing just a subset of the services that meet the non-functional condition. We report on the effects of the proposed reasoning tasks, the quality of the top-k services selected by the ranking metric, and the performance of the proposed ranking techniques. Our results suggest that the number of candidate services can be augmented by up to two orders of magnitude. In addition, our ranking techniques are able to identify services that have the best values in at least half of the QoS parameters, while performance is improved.
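The Pareto dominance test underlying any Skyline computation is compact; a toy sketch with invented QoS values (this shows only dominance plus a simple tie-break, not the authors' Skyline Frequency metric or their Top-k algorithm):

```python
# Toy QoS vectors (higher is better on every axis); values invented.
services = {
    "svcA": (0.9, 0.7, 0.8),   # e.g. (availability, throughput, reliability)
    "svcB": (0.8, 0.9, 0.6),
    "svcC": (0.7, 0.6, 0.5),   # dominated by svcA, so never in the skyline
    "svcD": (0.9, 0.8, 0.4),
}

def dominates(p, q):
    # Pareto dominance: at least as good everywhere, strictly better somewhere.
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

skyline = [s for s, p in services.items()
           if not any(dominates(q, p) for t, q in services.items() if t != s)]

# Simple tie-break: prefer services leading on the most QoS axes, keep top k.
def axes_led(s):
    return sum(services[s][i] == max(p[i] for p in services.values())
               for i in range(3))

print(sorted(skyline, key=axes_led, reverse=True)[:2])
```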
Algorithms and semantic infrastructure for mutation impact extraction and grounding.
Laurila, Jonas B; Naderi, Nona; Witte, René; Riazanov, Alexandre; Kouznetsov, Alexandre; Baker, Christopher J O
2010-12-02
Mutation impact extraction is a hitherto unaccomplished task in state of the art mutation extraction systems. Protein mutations and their impacts on protein properties are hidden in scientific literature, making them poorly accessible for protein engineers and inaccessible for phenotype-prediction systems that currently depend on manually curated genomic variation databases. We present the first rule-based approach for the extraction of mutation impacts on protein properties, categorizing their directionality as positive, negative or neutral. Furthermore protein and mutation mentions are grounded to their respective UniProtKB IDs and selected protein properties, namely protein functions to concepts found in the Gene Ontology. The extracted entities are populated to an OWL-DL Mutation Impact ontology facilitating complex querying for mutation impacts using SPARQL. We illustrate retrieval of proteins and mutant sequences for a given direction of impact on specific protein properties. Moreover we provide programmatic access to the data through semantic web services using the SADI (Semantic Automated Discovery and Integration) framework. We address the problem of access to legacy mutation data in unstructured form through the creation of novel mutation impact extraction methods which are evaluated on a corpus of full-text articles on haloalkane dehalogenases, tagged by domain experts. Our approaches show state of the art levels of precision and recall for Mutation Grounding and respectable level of precision but lower recall for the task of Mutant-Impact relation extraction. The system is deployed using text mining and semantic web technologies with the goal of publishing to a broad spectrum of consumers.
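Querying the populated ontology might look as follows with rdflib; the prefix, property and class names here are invented stand-ins, not the actual Mutation Impact ontology vocabulary, and the file name is assumed:

```python
from rdflib import Graph

g = Graph().parse("mutation_impacts.rdf", format="xml")  # assumed local export

# Illustrative query only: the prefix, properties and classes below are
# invented stand-ins, not the actual Mutation Impact ontology vocabulary.
q = """
PREFIX mi: <http://example.org/mutation-impact#>
SELECT ?protein ?mutation WHERE {
    ?mutation mi:affectsProtein ?protein ;
              mi:hasImpact      ?impact .
    ?impact   mi:direction      mi:Negative ;
              mi:onProperty     mi:Stability .
}
"""
for protein, mutation in g.query(q):
    print(protein, mutation)
```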
HTML 5 Displays for On-Board Flight Systems
NASA Technical Reports Server (NTRS)
Silva, Chandika
2016-01-01
During my internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project, my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established, I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as hide-column and filter-by-system options. This was for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally, my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJS library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working at JSC for this internship has taught me a lot about coding in JavaScript and HTML 5. I was also introduced to the concept of developing software as a team, and exposed to the different types of programs that are used to simplify team coding, such as GitLab. While at JSC, I took full advantage of and attended the lectures that were held on site. I learned a lot about what it is that NASA does and about the interesting projects that are conducted here. One of the lectures I attended was about the selection process and the criteria used to select future astronauts for flight missions. This truly had an impact on my future plans, as it showed me that this path was a viable option for me. After this internship I plan on completing my undergraduate coursework and moving on to a master's degree. During the time in which I will be completing my master's coursework, I would like to apply for the NASA Pathways graduate program and, if I am accepted, eventually move on to being a full-time civil servant. Working at NASA has not only been enjoyable, but full of information and great experiences that have motivated me to seek full-time employment here in the near future.
Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.
Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain
2016-01-14
Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science means we can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with the search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F(1,19)=37.3, P<.001), with a main effect of task (F(3,57)=6.3, P<.001). Further, participants completed the tasks significantly faster using the Web search interface (F(1,19)=18.0, P<.001). There was also a main effect of task (F(2,38)=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.
Data mining for personal navigation
NASA Astrophysics Data System (ADS)
Hariharan, Gurushyam; Franti, Pasi; Mehta, Sandeep
2002-03-01
Relevance is the key in defining what data is to be extracted from the Internet. Traditionally, relevance has been defined mainly by keywords and user profiles. In this paper we discuss a fairly untouched dimension of relevance: location. Any navigational information sought by a user at large on earth is evidently governed by his location. We believe that task-oriented data mining of the web, amalgamated with location information, is the key to providing relevant information for personal navigation. We explore the existential hurdles and propose novel approaches to tackle them. We also present naive, task-oriented data mining based approaches and their implementations in Java to extract location-based information. Ad-hoc pairing of data with coordinates (x, y) is very rare on the web, but if the same coordinates are converted to a logical address (state/city/street), a wide spectrum of location-based information opens up. Hence, given the coordinates (x, y) on the earth, the scheme points to the logical address of the user. Location-based information can either be picked up from fixed and known service providers (e.g. Yellow Pages) or from any arbitrary website on the Web. Once the web servers providing information relevant to the logical address are located, task-oriented data mining is performed over these sites, keeping in mind what information is interesting to the contemporary user. After all this, a simple data stream is provided to the user, with information scaled to his convenience. The scheme has been implemented for cities in Finland.
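The coordinates-to-logical-address step can be approximated by a nearest-neighbour lookup against a gazetteer; a toy sketch (the city table is illustrative, and a real system would use a proper reverse-geocoding service):

```python
import math

# Tiny gazetteer stand-in for the coordinates-to-logical-address step;
# a real system would query a full reverse-geocoding service instead.
CITIES = {"Helsinki": (60.17, 24.94), "Joensuu": (62.60, 29.76),
          "Tampere": (61.50, 23.77)}

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_city(lat, lon):
    return min(CITIES, key=lambda c: haversine_km(lat, lon, *CITIES[c]))

print(nearest_city(62.24, 25.75))   # closest entry in the toy gazetteer
```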
Versatile clinical information system design for emergency departments.
Amouh, Teh; Gemo, Monica; Macq, Benoît; Vanderdonckt, Jean; El Gariani, Abdul Wahed; Reynaert, Marc S; Stamatakis, Lambert; Thys, Frédéric
2005-06-01
Compared to other hospital units, the emergency department presents some distinguishing characteristics of its own. Emergency health-care delivery is a collaborative process involving the contribution of several individuals who accomplish their tasks while working autonomously under pressure and sometimes with limited resources. Effective computerization of the emergency department information system presents a real challenge due to the complexity of the scenario. Current computerized support suffers from several problems, including inadequate data models, clumsy user interfaces, and poor integration with other clinical information systems. To tackle such complexity, we propose an approach combining three points of view, namely the transactions (in and out of the department), the (mono and multi) user interfaces and data management. Unlike current systems, we pay particular attention to the user-friendliness and versatility of our system. This means that intuitive user interfaces have been conceived and specific software modeling methodologies have been applied to provide our system with the flexibility and adaptability necessary for the individual and group coordinated tasks. Our approach has been implemented by prototyping a web-based, multiplatform, multiuser, and versatile clinical information system built upon multitier software architecture, using the Java programming language.
Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália
2015-02-01
Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although the existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At the core, Marky is a Web application based on the open source CakePHP framework. User interface relies on HTML5 and CSS3 technologies. Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
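Agreement analysis of the kind the tracking system supports is commonly quantified with Cohen's kappa; a sketch over invented token labels (the abstract does not state which agreement measure Marky reports):

```python
from sklearn.metrics import cohen_kappa_score

# Invented token-level labels from two annotators over the same round.
ann1 = ["GENE", "O", "GENE", "DISEASE", "O", "O", "GENE", "O"]
ann2 = ["GENE", "O", "O", "DISEASE", "O", "GENE", "GENE", "O"]

print(f"Cohen's kappa: {cohen_kappa_score(ann1, ann2):.2f}")
```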
Automatic building of a web-like structure based on thermoplastic adhesive.
Leach, Derek; Wang, Liyu; Reusser, Dorothea; Iida, Fumiya
2014-09-01
Animals build structures to extend their control over certain aspects of the environment; e.g., orb-weaver spiders build webs to capture prey. Inspired by this behaviour of animals, we attempt to develop robotics technology that allows a robot to automatically build structures to help it accomplish certain tasks. In this paper we show the automatic building of a web-like structure with a robot arm based on thermoplastic adhesive (TPA) material. The material properties of TPA, such as elasticity, adhesiveness, and low melting temperature, make it possible for a robot to form threads across an open space by an extrusion-drawing process and then combine several of these threads into a web-like structure. The problems addressed here are discovering which parameters determine the thickness of a thread and determining how web-like structures may be used for certain tasks. We first present a model for the extrusion and drawing of TPA threads which also includes the temperature-dependent material properties. The model verification results show that the increasing relative surface area of the TPA thread as it is drawn thinner increases the heat loss of the thread, and that by controlling how quickly the thread is drawn, a range of diameters from 0.2-0.75 mm can be achieved. We then present a method based on a generalized nonlinear finite element truss model. The model was validated and could predict the deformation of various web-like structures when payloads are added. Finally, we demonstrate the automatic building of a web-like structure for payload bearing.
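The volume-conservation core of the extrusion-drawing relationship is simple to state; a sketch that deliberately ignores the temperature-dependent effects the paper's full model includes (all numbers are illustrative):

```python
import math

def drawn_diameter(d_nozzle_mm, v_extrude, v_draw):
    # Volume conservation for an incompressible melt: A_nozzle * v_extrude
    # equals A_thread * v_draw, so diameter scales with sqrt(v_extrude/v_draw).
    # Ignores the temperature-dependent effects the paper's model includes.
    return d_nozzle_mm * math.sqrt(v_extrude / v_draw)

print(drawn_diameter(2.0, 1.0, 16.0))   # 2 mm nozzle, drawn 16x faster -> 0.5 mm
```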
Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen
2017-09-01
The present study examined whether implementing recommendations of Web accessibility guidelines would have different effects on nondisabled users than on users with visual impairments. The predominant approach for making Web sites accessible for users with disabilities is to apply accessibility guidelines. However, it has hardly been examined whether this approach has side effects for nondisabled users. A comparison of the effects on both user groups would contribute to a better understanding of the possible advantages and drawbacks of applying accessibility guidelines. Participants from two matched samples, comprising 55 participants with visual impairments and 55 without impairments, took part in a synchronous remote testing of a Web site. Each participant was randomly assigned to one of three Web sites, which differed in their level of accessibility (very low, low, and high) according to recommendations of the well-established Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Performance (i.e., task completion rate and task completion time) and a range of subjective variables (i.e., perceived usability, positive affect, negative affect, perceived aesthetics, perceived workload, and user experience) were measured. Higher conformance to Web accessibility guidelines resulted in increased performance and more positive user ratings (e.g., perceived usability or aesthetics) for both user groups. There was no interaction between user group and accessibility level. Higher conformance to WCAG 2.0 may result in benefits for nondisabled users and users with visual impairments alike. Practitioners may use the present findings as a basis for deciding whether and how best to implement accessibility.
Web-Based Social Work Courses: Guidelines for Developing and Implementing an Online Environment
ERIC Educational Resources Information Center
Dawson, Beverly Araujo; Fenster, Judy
2015-01-01
Although web-based courses in schools of social work have proliferated over the past decade, the literature contains few guidelines on steps that schools can take to develop such courses. Using Knowles's framework, which delineates tasks and themes involved in implementing e-learning in social work education, this article describes the cultivation…
ERIC Educational Resources Information Center
Traphagan, Tomoko; Traphagan, John; Dickens, Linda Neavel; Resta, Paul
2014-01-01
Motivated by the need to facilitate Net Generation students' information literacy (IL), or more specifically, to promote student understanding of legitimate, effective use of Web-based resources, this exploratory study investigated how analyzing, writing, posting, and monitoring Wikipedia entries might help students develop critical…
ERIC Educational Resources Information Center
Lin, Kuanyuh Tony
2009-01-01
A two-stage mixed methods approach was used to examine how foreign correspondents stationed in the United States use World Wide Web technology to maintain their news perspectives remotely. Despite emerging technology playing an increasingly significant role in the production of international journalism, the subject under investigation has been…
Information Tailoring Enhancements for Large Scale Social Data
2016-03-15
Work performed within this reporting period included the following tasks: implemented temporal analysis algorithms for advanced analytics in Scraawl, including our backend web service design for temporal analysis and a prototype GUI web service for the Scraawl analytics dashboard; upgraded the Scraawl computational framework to increase…
A Systematic Understanding of Successful Web Searches in Information-Based Tasks
ERIC Educational Resources Information Center
Zhou, Mingming
2013-01-01
The purpose of this study is to research how Chinese university students solve information-based problems. With the Search Performance Index as the measure of search success, participants were divided into high, medium and low-performing groups. Based on their web search logs, these three groups were compared along five dimensions of the search…
Learner Self-Regulation and Web 2.0 Tools Management in Personal Learning Environment
ERIC Educational Resources Information Center
Yen, Cherng-Jyh; Tu, Chih-Hsiung; Sujo-Montes, Laura E.; Armfield, Shadow W. J.; Chan, Junn-Yih
2013-01-01
Web 2.0 technology integration requires a higher level of self-regulated learning skills to create a Personal Learning Environment (PLE). This study examined each of the four aspects of learner self-regulation in online learning (i.e., environment structuring, goal setting, time management, & task strategies) as the predictor for level of…
Web 2.0 and Authentic Foreign Language Learning at Higher Education Level
ERIC Educational Resources Information Center
Martins, Maria de Lurdes Correia; Moreira, Gillian; Moreira, António
2012-01-01
Web 2.0 has afforded a number of opportunities for foreign language learning due to its open, participatory and social nature. A crucial aspect is authenticity--both situational and interactional--since students become involved in meaningful tasks, interacting in the target language with an authentic audience. In this paper we will reflect upon…
Automatic energy expenditure measurement for health science.
Catal, Cagatay; Akbulut, Akhan
2018-04-01
It is crucial to predict human energy expenditure accurately in any sports or health science application in order to investigate the impact of the activity. However, measuring real energy expenditure is not a trivial task and involves complex steps. The objective of this work is to improve the performance of existing energy expenditure estimation models by using machine learning algorithms and data from several different sensors, and to provide this estimation service on a cloud-based platform. In this study, we used input data such as breathing rate and heart rate from three sensors. Inputs are received from a web form and sent to the web service, which applies a regression model on the Azure cloud platform. During the experiments, we assessed several machine learning models based on regression methods. Our experimental results showed that our novel model, which applies Boosted Decision Tree Regression in conjunction with a median aggregation technique, provides the best result among the five other regression algorithms. This cloud-based energy expenditure system, which uses a web service, showed that cloud computing technology is a great opportunity for developing estimation systems, and that the new model combining Boosted Decision Tree Regression with median aggregation provides remarkable results. Copyright © 2018 Elsevier B.V. All rights reserved.
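The pipeline this abstract describes, median-aggregating multi-sensor readings and then fitting a boosted decision tree regressor, can be illustrated in a few lines. Below is a minimal sketch, assuming scikit-learn's GradientBoostingRegressor as a stand-in for the Azure Boosted Decision Tree Regression module; the sensor values, window sizes, and target are synthetic.

```python
# Minimal sketch: median-aggregate per-sensor readings over a window, then
# fit a boosted decision tree regressor to predict energy expenditure.
# Assumes scikit-learn; all data below is synthetic toy data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

loc = np.array([70.0, 16.0, 1.2])[:, None]      # toy means: heart rate, breathing rate, accel
scale = np.array([8.0, 2.0, 0.3])[:, None]
raw = rng.normal(loc, scale, size=(500, 3, 5))  # 500 windows x 3 sensors x 5 readings
X = np.median(raw, axis=2)                      # median aggregation per sensor
y = 0.05 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0.0, 0.5, size=500)  # toy kcal/min

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

One plausible reason the median aggregation helped in the paper's experiments is robustness: the median discards transient sensor spikes that would skew a mean-based combination.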
Design of a Web-tool for diagnostic clinical trials handling medical imaging research.
Baltasar Sánchez, Alicia; González-Sistal, Angel
2011-04-01
New clinical studies in medicine are based on patients and controls using different imaging diagnostic modalities. Medical information systems are not designed for clinical trials employing clinical imaging. Although commercial software and communication systems focus on storage of image data, they are not suitable for storage and mining of new types of quantitative data. We sought to design a Web-tool to support diagnostic clinical trials involving different experts and hospitals or research centres. The image analysis of this project is based on skeletal X-ray imaging. It involves a computerised image method using quantitative analysis of regions of interest in healthy bone and skeletal metastases. The Web-based application is implemented with ASP.NET 3.5 and C# technologies. For data storage, we chose MySQL v.5.0, one of the most popular open source databases. User logins were necessary, and access to patient data was logged for auditing. For security, all data transmissions were carried over encrypted connections. This Web-tool is available to users scattered at different locations; it allows an efficient organisation and storage of data (case report forms) and images and allows each user to know precisely what his or her task is. The advantages of our Web-tool are as follows: (1) sustainability is guaranteed; (2) network locations for collection of data are secured; (3) all clinical information is stored together with the original images and the results derived from processed images and statistical analysis, which enables us to perform retrospective studies; (4) changes are easily incorporated because of the modular architecture; and (5) assessment of trial data collected at different sites is centralised to reduce statistical variance.
Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.; Sit, M. A.
2016-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The collection of environmental data has increased demand for applications capable of managing and processing large-scale, high-resolution data sets. Given the amount and resolution of the data sets provided, one of the challenging tasks in organizing and customizing hydrological data sets is delineation of watersheds on demand. Watershed delineation is a process for creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analyzing portions of a study area. Although many GIS tools and software for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques for creating a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL, and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphical Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. The web-based real-time analysis of watershed segmentation can be helpful for decision-makers and interested stakeholders, while eliminating the need to install complex software packages and deal with large-scale data sets. Utilizing client-side hardware resources also reduces the need for servers, owing to the approach's crowdsourced nature. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
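Once a D8 flow-direction grid is available, watershed delineation itself reduces to a graph traversal: start at the outlet and repeatedly collect neighbours whose flow arrows point into the growing basin. The following is a minimal Python sketch of that idea under toy data; the paper's actual implementations (JavaScript/WebGL on the client, Python/C++ on the server) add parallelism and high-resolution terrain handling.

```python
# Minimal sketch of watershed delineation on a D8 flow-direction grid:
# collect every cell whose flow path drains to the outlet cell.
# The grid and outlet below are hypothetical toy data.
import numpy as np
from collections import deque

# D8 codes -> (row, col) offsets a cell's flow follows downstream.
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def delineate(flow_dir, outlet):
    """Return a boolean mask of all cells draining to `outlet`."""
    rows, cols = flow_dir.shape
    basin = np.zeros_like(flow_dir, dtype=bool)
    basin[outlet] = True
    queue = deque([outlet])
    while queue:
        r, c = queue.popleft()
        # A neighbour joins the basin if its D8 arrow points at (r, c).
        for code, (dr, dc) in D8.items():
            nr, nc = r - dr, c - dc          # candidate upstream cell
            if (0 <= nr < rows and 0 <= nc < cols
                    and not basin[nr, nc] and flow_dir[nr, nc] == code):
                basin[nr, nc] = True
                queue.append((nr, nc))
    return basin

flow = np.array([[2, 4, 8], [1, 4, 16], [1, 1, 0]])  # toy D8 grid; 0 = outlet
print(delineate(flow, (2, 2)).astype(int))
```

The breadth-first search touches each cell at most once, which is what makes on-demand, interactive delineation feasible even on large grids.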
NASA Astrophysics Data System (ADS)
Sivolella, A.; Ferreira, F.; Maidantchik, C.; Solans, C.; Solodkov, A.; Burghgrave, B.; Smirnov, Y.
2015-12-01
The ATLAS Tile Calorimeter collaboration assesses the quality of calibration data in order to ensure the detector's proper operation. A number of tasks are performed by executing several tools and accessing web systems, which were developed independently to meet distinct collaboration requirements and are not necessarily connected with each other. Thus, to meet the collaboration's needs, several programs are usually implemented without a global perspective of the detector, each requiring basic software features. In addition, functionalities may overlap in their objectives and frequently replicate resource-retrieval mechanisms. Tile-in-ONE is a platform designed and implemented to assemble the various web systems used by the calorimeter community within a single framework and a standard technology. It provides an infrastructure to support code implementation, avoiding duplication of work while integrating an overall view of the detector status. Database connectors smooth the process of information access, since developers do not need to be aware of where records are placed or how to extract them. Within the environment, a dashboard stands for a particular aspect of Tile operation and brings together plug-ins, i.e. software components that add specific features to an existing application. A server contains the platform core, which represents the basic environment to deal with the configuration, manage user settings and load plug-ins at runtime. A web middleware assists users in developing their own plug-ins, performing tests and integrating them into the platform as a whole. Backends are employed to allow any type of application to be interpreted and displayed in a uniform way. This paper describes the Tile-in-ONE web platform.
Argubi-Wollesen, Andreas; Wollesen, Bettina; Leitner, Martin; Mattes, Klaus
2017-03-01
The purpose of this review is to name and describe the important factors of musculoskeletal strain originating from pushing and pulling tasks, such as cart handling, that are commonly found in industrial contexts. A literature database search was performed using the research platform Web of Science. For a study to be included in this review, differences in measured or calculated strain had to be investigated with regard to: (1) cart weight/load; (2) handle position and design; (3) exerted forces; (4) handling task (push and pull); or (5) task experience. Thirteen studies met the inclusion criteria and proved to be of adequate methodological quality by the standards of the Alberta Heritage Foundation for Medical Research. External load or cart weight proved to be the most influential factor of strain. The ideal handle positions ranged from hip to shoulder height and depended on the strain factor in focus as well as on the handling task. Furthermore, task experience and, subsequently, handling technique were also key to reducing strain. Workplace settings that regularly involve pushing and pulling should be checked for potential improvements with regard to lower weight of the loaded handling device, handle design, and good practice guidelines to further reduce musculoskeletal disease prevalence.
Bioinformatics data distribution and integration via Web Services and XML.
Li, Xiao; Zhang, Yizheng
2003-11-01
It is widely recognized that the exchange, distribution, and integration of biological data are key to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data is not yet solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use in an appropriate solution to the problem of bioinformatics data exchange and integration.
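As a concrete illustration of the XML-mediated exchange the paper advocates, the sketch below serialises a toy gene record to XML for transport and parses it back on the consumer side. The element and attribute names are illustrative, not any published bioinformatics schema.

```python
# Minimal sketch of XML-based data exchange: producer serialises a record,
# consumer parses it back. Names are illustrative, not a real schema.
import xml.etree.ElementTree as ET

def to_xml(record: dict) -> bytes:
    gene = ET.Element("gene", id=record["id"])
    ET.SubElement(gene, "symbol").text = record["symbol"]
    ET.SubElement(gene, "organism").text = record["organism"]
    ET.SubElement(gene, "sequence").text = record["sequence"]
    return ET.tostring(gene, encoding="utf-8")

def from_xml(payload: bytes) -> dict:
    gene = ET.fromstring(payload)
    return {"id": gene.get("id"),
            **{child.tag: child.text for child in gene}}

payload = to_xml({"id": "BRCA1", "symbol": "BRCA1",
                  "organism": "Homo sapiens", "sequence": "ATGGATTTA"})
print(from_xml(payload))
```

In a Web Services setting the same payload would travel inside a SOAP or HTTP message, with the schema agreed between producer and consumer doing the integration work.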
Low cost silicon solar array project large area silicon sheet task: Silicon web process development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.
1977-01-01
Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.
A web-based video annotation system for crowdsourcing surveillance videos
NASA Astrophysics Data System (ADS)
Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.
2014-03-01
Video surveillance systems are of great value for preventing threats and identifying/investigating criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform several tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools such as training modules, roles and labels, and task management. This system can be used in a real-time streaming mode to detect any potential threats or as an investigative tool to analyze past events. Annotators can annotate video contents assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about a newly reported incident. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.
Development of a web service for analysis in a distributed network.
Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila
2014-01-01
We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
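The core idea behind GLORE-style distributed model construction can be sketched compactly: each site computes gradient and Hessian contributions from its own data, and only those aggregates, never patient-level rows, are combined by a coordinator for Newton-Raphson updates. The toy Python sketch below illustrates that principle under synthetic data; it is not the published GLORE protocol or web service.

```python
# Minimal sketch of distributed logistic regression via aggregated
# site-level summaries. No patient-level rows leave a site; only the
# gradient and Hessian contributions are shared. Toy data throughout.
import numpy as np

def site_summaries(X, y, beta):
    """Computed locally at a site; only these aggregates are shared."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                          # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])     # information matrix
    return grad, hess

rng = np.random.default_rng(1)
true_beta = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):                                # three "hospitals"; data never pooled
    X = rng.normal(size=(200, 3))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
    sites.append((X, y))

beta = np.zeros(3)
for _ in range(25):                               # Newton-Raphson on summed aggregates
    grads, hessians = zip(*(site_summaries(X, y, beta) for X, y in sites))
    beta = beta + np.linalg.solve(sum(hessians), sum(grads))
print(beta.round(2))                              # close to true_beta, as if pooled
```

Because the log-likelihood of the pooled data decomposes as a sum over sites, the coordinator's estimate matches what a centralised fit would produce, which is what makes the privacy-preserving design viable.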
The Evolvable Advanced Multi-Mission Operations System (AMMOS): Making Systems Interoperable
NASA Technical Reports Server (NTRS)
Ko, Adans Y.; Maldague, Pierre F.; Bui, Tung; Lam, Doris T.; McKinney, John C.
2010-01-01
The Advanced Multi-Mission Operations System (AMMOS) provides a common Mission Operation System (MOS) infrastructure to NASA deep space missions. The evolution of AMMOS has been driven by two factors: increasingly challenging requirements from space missions, and the emergence of new IT technology. The work described in this paper focuses on three key tasks related to IT technology requirements: first, to eliminate duplicate functionality; second, to promote the use of loosely coupled application programming interfaces, text based file interfaces, web-based frameworks and integrated Graphical User Interfaces (GUI) to connect users, data, and core functionality; and third, to build, develop, and deploy AMMOS services that are reusable, agile, adaptive to project MOS configurations, and responsive to industrially endorsed information technology standards.
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D. L.; Schruben, J.
1982-01-01
Thermal models were developed that accurately predict the thermally generated stresses in the web crystal which, if too high, cause the crystal to degenerate. The application of the modeling results to the design of low-stress experimental growth configurations will allow the growth of wider web crystals at higher growth velocities. A new experimental web growth machine was constructed. This facility includes all the features necessary for carrying out growth experiments under steady thermal conditions. Programmed growth initiation was developed to give reproducible crystal starts. Width control permits the growth of long ribbons at constant width. Melt level is controlled to 0.1 mm or better. Thus, the capability exists to grow long web crystals of constant width and thickness with little operator intervention, and web growth experiments can now be performed with growth variables controlled to a degree not previously possible.
Herrera-Hernandez, Maria C; Lai-Yuen, Susana K; Piegl, Les A; Zhang, Xiao
2016-10-26
This article presents the design of a web-based knowledge management system as a training and research tool for the exploration of key relationships between Western and Traditional Chinese Medicine, in order to facilitate relational medical diagnosis integrating these mainstream healing modalities. The main goal of this system is to facilitate decision-making processes, while developing skills and creating new medical knowledge. Traditional Chinese Medicine can be considered an ancient relational knowledge-based approach, focusing on balancing interrelated human functions to reach a healthy state. Western Medicine focuses on specialties and body systems and has achieved advanced methods to evaluate the impact of a health disorder on the body functions. Identifying key relationships between Traditional Chinese and Western Medicine opens new approaches for health care practices and can increase the understanding of human medical conditions. Our knowledge management system was designed from initial datasets of symptoms, known diagnoses and treatments, collected from both medicines. The datasets were subjected to process-oriented analysis, hierarchical knowledge representation and relational database interconnection. Web technology was implemented to develop a user-friendly interface, for easy navigation, training and research. Our system was prototyped with a case study on chronic prostatitis. This trial presented the system's capability for users to learn the correlation approach, connecting knowledge in Western and Traditional Chinese Medicine by querying the database, mapping validated medical information, accessing complementary information from official sites, and creating new knowledge as part of the learning process. By addressing the challenging tasks of data acquisition and modeling, organization, storage and transfer, the proposed web-based knowledge management system is presented as a tool for users in medical training and research to explore, learn and update relational information for the practice of integrated medical diagnosis. This proposal in education has the potential to enable further creation of medical knowledge from both Traditional Chinese and Western Medicine for improved care provision. The presented system positively improves information visualization, the learning process and knowledge sharing, for training and development of new skills for diagnosis and treatment, and a better understanding of medical diseases. © IMechE 2016.
The Role of Tasks and Epistemological Beliefs in Online Peer Questioning
ERIC Educational Resources Information Center
Cho, Young Hoan; Lee, Jaejin; Jonassen, David H.
2011-01-01
The current study examines the assertion that students are motivated and learn more by carrying out tasks consistent with their epistemological beliefs in web-based learning environments. In the study, 120 undergraduate students in an educational technology course participated as part of their coursework. Using a wiki, triads reciprocally asked…
Valuing the Implementation of Financial Literacy Education
ERIC Educational Resources Information Center
Davis, Kimberlee; Durband, Dorothy Bagwell
2008-01-01
Placing a monetary value on education is a complex task. A more difficult task is to determine at what monetary level individuals will support educational improvements. The contingent valuation method was used to estimate the value of the implementation of financial literacy education in Texas public schools. A Web-based survey was administered to…
Investigating Student Choices in Performing Higher-Level Comprehension Tasks Using TED
ERIC Educational Resources Information Center
Bianchi, Francesca; Marenzi, Ivana
2016-01-01
The current paper describes a first experiment in the use of TED talks and open tagging exercises to train higher-level comprehension skills, and of automatic logging of the student's actions to investigate the student choices while performing analytical tasks. The experiment took advantage of an interactive learning platform--LearnWeb--that…
Leuteritz, Jan-Paul; Navarro, José; Berger, Rita
2017-01-01
The purpose of this paper is to clarify how leadership is able to improve team effectiveness, by means of its influence on group processes (i.e., increasing group development) and on the group task (i.e., decreasing task uncertainty). Four hundred and eight members of 107 teams in a German research and development (R&D) organization completed a web-based survey; they provided measures of transformational leadership, group development, 2 aspects of task uncertainty, task interdependence, and team effectiveness. In 54 of these teams, the leaders answered a web-based survey on team effectiveness. We tested the model with the data from team members, using structural equation modeling. Group development and a task uncertainty measurement that refers to unstable demands from outside the team partially mediate the effect of transformational leadership on team effectiveness in R&D organizations (p < 0.05). Although transformational leaders reduce unclarity of goals (p < 0.05), this seems not to contribute to team effectiveness. The data provided by the leaders were used to assess common source bias, which did not affect the interpretability of the results. Limitations include cross-sectional data and a lower than expected variance of task uncertainty across different job types. This paper contributes to understanding how knowledge worker teams deal effectively with task uncertainty and confirms the importance of group development in this context. This is the first study to examine the effects of transformational leadership and team processes on team effectiveness considering the task characteristics uncertainty and interdependence.
A web-based decision support tool for prognosis simulation in multiple sclerosis.
Veloso, Mário
2014-09-01
A multiplicity of natural history studies of multiple sclerosis provides valuable knowledge of the disease progression, but individualized prognosis remains elusive. A few decision support tools that assist the clinician in such a task have emerged but have not received proper attention from clinicians and patients. The objective of the current work is to implement a web-based tool, conveying decision-relevant prognostic scientific evidence, which will help clinicians discuss prognosis with individual patients. Data were extracted from a set of reference studies, especially those dealing with the natural history of multiple sclerosis. The web-based decision support tool for individualized prognosis simulation was implemented with NetLogo, a programming environment suited to the development of complex adaptive systems. Its prototype has been launched online; it enables clinicians to predict both the likelihood of CIS to CDMS conversion and the long-term prognosis of disability level and SPMS conversion, as well as assess and monitor the effects of treatment. More robust decision support tools, which convey scientific evidence and satisfy the needs of clinical practice by helping clinicians discuss prognosis expectations with individual patients, are required. The web-based simulation model introduced herein proposes to be a step forward toward this purpose. Copyright © 2014 Elsevier B.V. All rights reserved.
Embedded System Implementation on FPGA System With μCLinux OS
NASA Astrophysics Data System (ADS)
Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna
2011-02-01
Embedded systems are taking on more complicated tasks as the processors involved become more powerful. Embedded systems are widely used in many areas, such as industry, automotive applications, medical imaging, communications, speech recognition and computer vision. The complexity of today's hardware and software requirements calls for a flexible system that allows further enhancement of any design without adding new hardware; otherwise, any change in the system design may require the processor itself to be changed. To overcome this problem, a System On Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A softcore processor, the NIOS II 32-bit RISC microprocessor core, was utilized in the FPGA system together with the embedded operating system (OS) μClinux. In this paper, an example of a web server is explained and demonstrated.
ERIC Educational Resources Information Center
Hemard, Dominique
2006-01-01
If web-based technology is increasingly becoming the central plank of contemporary teaching and learning processes, there is still too little evidence to suggest that it is delivering purposeful learning activities beyond its widely perceived potential as a learning resource providing content and learning objects. This is due in part to the…
ERIC Educational Resources Information Center
Jager, Sake; Meima, Estelle; Oggel, Gerdientje
2013-01-01
This article reports our findings on using WebCEF as a CEFR familiarization and self-assessment tool for oral proficiency. Furthermore, we outline how we have implemented Skype as a tool for telecollaboration in our language programmes. The primary purpose of our study was to explore how students and teachers would perceive the potential benefits…
Choosing Web 2.0 Tools for Instruction: An Extension of Task-Technology Fit
ERIC Educational Resources Information Center
Gupta, Saurabh
2014-01-01
The growth of technology and the inclusion of "digital natives" as students in the education world have created a demand pull for the use of Web 2.0 technologies in education. Dominant among these tools have been wikis, blogs and discussion boards. Distance education experts view the use of these tools as differentiators when compared to…
ERIC Educational Resources Information Center
Lee, John K.; Calandra, Brendan
2004-01-01
Two versions of a Web site on the United States Constitution were used by students in separate high school history classes to solve problems that emerged from four constitutional scenarios. One site contained embedded conceptual scaffolding devices in the form of textual annotations; the other did not. The results of our study demonstrated the…
ERIC Educational Resources Information Center
Swan, Gerry
2009-01-01
While blogs, wikis and many other Web 2.0 applications can be employed in learning settings, instruction is not the primary purpose for these tools. The educational field must actively participate in the definition and development of what repurposed or new Web 2.0 applications means in educational settings. One way of viewing this needed…
ERIC Educational Resources Information Center
Klemm, E. Barbara; Iding, Marie K.; Crosby, Martha E.
This study addresses the need to develop research-based criteria for science teacher educators to use in preparing teachers to critically evaluate and select web-based resources for their students' use. The study focuses on the cognitive load imposed on the learner for tasks required in using text, illustrations, and other features of multi-…
ERIC Educational Resources Information Center
Huang, Chung-Kai; Lin, Chun-Yu; Chiang, Yueh-Hui
2010-01-01
This study aims to create a blended learning environment, based on the concept of competency-based training, in a Chinese as a Foreign Language (CFL) classroom at an American university. Drupal platform and web 2.0 tools were used as supplements to traditional face-to-face classroom instruction. Students completed various selective tasks and…
Teaching Web Application Development: A Case Study in a Computer Science Course
ERIC Educational Resources Information Center
Del Fabro, Marcos Didonet; de Alimeda, Eduardo Cunha; Sluzarski, Fabiano
2012-01-01
Teaching web development in Computer Science undergraduate courses is a difficult task. Often, there is a gap between the students' experiences and the reality in the industry. As a consequence, the students are not always well-prepared once they get the degree. This gap is due to several reasons, such as the complexity of the assignments, the…
Effects of Web-Based Collaborative Writing on Individual L2 Writing Development
ERIC Educational Resources Information Center
Bikowski, Dawn; Vithanage, Ramyadarshanie
2016-01-01
This study investigated the effect of repeated in-class web-based collaborative writing tasks on second language writers' (L2) individual writing scores. A pre-test post-test research model was used in addition to participant surveys, class observations, and teacher interviews. Participants included 59 L2 writers in a writing class at a large U.S.…
Design on the MUVE: Synergizing Online Design Education with Multi-User Virtual Environments (MUVE)
ERIC Educational Resources Information Center
Sakalli, Isinsu; Chung, WonJoon
2015-01-01
The world is becoming increasingly virtual. Since the invention of the World Wide Web, information and human interaction has been transferring to the web at a rapid rate. Education is one of the many institutions that is taking advantage of accessing large numbers of people globally through computers. While this can be a simpler task for…
ERIC Educational Resources Information Center
White, Kelsey D.; Heidrich, Emily
2013-01-01
Most educators are aware that some students utilize web-based machine translators for foreign language assignments, however, little research has been done to determine how and why students utilize these programs, or what the implications are for language learning and teaching. In this mixed-methods study we utilized surveys, a translation task,…
Dorval, A D; Christini, D J; White, J A
2001-10-01
We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
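A single iteration of the dynamic-clamp loop amounts to reading the membrane voltage, advancing a gating variable, and injecting the current I = g_max * m^p * (V - E_rev). The Python sketch below shows the arithmetic of one deterministic update; the gating kinetics and parameters are toy values, and the real system performs this in hard real time at clock rates above 10 kHz.

```python
# Minimal sketch of one deterministic dynamic-clamp update: advance a
# first-order gating variable and compute the conductance current to
# inject. Kinetics and constants are illustrative toy values.
import numpy as np

def dynamic_clamp_step(V, m, dt, g_max=10e-9, E_rev=-80e-3, p=4):
    # Voltage-dependent steady state and fixed time constant (toy kinetics).
    m_inf = 1.0 / (1.0 + np.exp(-(V + 40e-3) / 10e-3))
    tau = 5e-3
    m += dt * (m_inf - m) / tau           # first-order gating update
    I = g_max * m**p * (V - E_rev)        # current injected into the cell
    return I, m

dt, m, V = 1e-4, 0.0, -65e-3              # 10 kHz loop, resting potential
for _ in range(5):
    I, m = dynamic_clamp_step(V, m, dt)
    print(f"inject {I * 1e12:.2f} pA, m = {m:.3f}")
```

The stochastic variant the paper describes replaces the deterministic gating update with explicit random transitions of a finite channel population, which is more accurate but costlier per time step.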
Super: a web server to rapidly screen superposable oligopeptide fragments from the protein data bank
Collier, James H.; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.
2012-01-01
Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80 500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super. PMID:22638586
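The superposition RMSD that Super bounds and screens against is itself computed with the standard Kabsch procedure: centre both fragments, find the optimal rotation by SVD, and measure the residual deviation. A minimal sketch with toy coordinates follows; Super's contribution is the fast mathematical bound that lets it skip this computation for the vast majority of candidate fragments.

```python
# Minimal sketch of superposition RMSD via the Kabsch algorithm:
# optimally rotate fragment B onto fragment A and report the residual.
# Coordinates below are toy C-alpha positions.
import numpy as np

def superposed_rmsd(A, B):
    A = A - A.mean(axis=0)                # centre both fragments
    B = B - B.mean(axis=0)
    U, S, Vt = np.linalg.svd(B.T @ A)     # optimal rotation via SVD
    d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = A - B @ R
    return np.sqrt((diff ** 2).sum() / len(A))

A = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0.2, 0], [4.4, 0.9, 0.3]])
B = A @ np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1.0]]) + 2.0  # rotated + shifted copy
print(superposed_rmsd(A, B))              # ~0: the fragments superpose exactly
```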
Mazur, Lukasz M; Mosaly, Prithima R; Moore, Carlton; Comitz, Elizabeth; Yu, Fei; Falchook, Aaron D; Eblan, Michael J; Hoyle, Lesley M; Tracton, Gregg; Chera, Bhishamjit S; Marks, Lawrence B
2016-11-01
To assess the relationship between (1) task demands and workload, (2) task demands and performance, and (3) workload and performance, all during physician-computer interactions in a simulated environment. Two experiments were performed in 2 different electronic medical record (EMR) environments: WebCIS (n = 12) and Epic (n = 17). Each participant was instructed to complete a set of prespecified tasks on 3 routine clinical EMR-based scenarios: urinary tract infection (UTI), pneumonia (PN), and heart failure (HF). Task demands were quantified using behavioral responses (click and time analysis). At the end of each scenario, subjective workload was measured using the NASA-Task-Load Index (NASA-TLX). Physiological workload was measured using pupillary dilation and electroencephalography (EEG) data collected throughout the scenarios. Performance was quantified based on the maximum severity of omission errors. Data analysis indicated that the PN and HF scenarios were significantly more demanding than the UTI scenario for participants using WebCIS (P < .01), and that the PN scenario was significantly more demanding than the UTI and HF scenarios for participants using Epic (P < .01). In both experiments, the regression analysis indicated a significant relationship only between task demands and performance (P < .01). Results suggest that task demands as experienced by participants are related to participants' performance. Future work may support the notion that task demands could be used as a quality metric that is likely representative of performance, and perhaps patient outcomes. The present study is a reasonable next step in a systematic assessment of how task demands and workload are related to performance in EMR-evolving environments. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Development and tuning of an original search engine for patent libraries in medicinal chemistry.
Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick
2014-01-01
The large increase in the size of patent collections has led to the need of efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which covers the putatively most frequent search behaviours of intellectual property officers in medical chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries was improving retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental for search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluate. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state of the art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks clearly increases the effectiveness of the system. We conclude that different search tasks demand different information retrieval engines' settings in order to yield optimal end-user retrieval.
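Although the engine's exact scoring is internal to the project, the tunings reported (co-citation boosting for prior art search, IPC classification for technical surveys) can be pictured as a weighted re-ranking function over candidate patents. The sketch below is purely illustrative; the weights, cap, and candidate data are hypothetical and do not reflect the actual system.

```python
# Illustrative sketch of re-ranking with a co-citation boost and an IPC
# class-overlap term on top of a base text-retrieval score.
# All weights and candidate data are hypothetical.
def combined_score(base_score, cocitations, ipc_overlap,
                   cocite_weight=0.3, ipc_weight=0.5):
    """Re-ranking score for one candidate patent."""
    score = base_score
    score += cocite_weight * min(cocitations, 10) / 10.0  # capped citation boost
    score += ipc_weight * ipc_overlap                      # 0..1 class overlap
    return score

# Hypothetical candidates: (id, text score, co-citations, IPC overlap).
candidates = [("EP-A", 1.20, 7, 0.0), ("US-B", 1.05, 2, 1.0), ("WO-C", 0.90, 9, 0.5)]
ranked = sorted(candidates, key=lambda c: combined_score(*c[1:]), reverse=True)
print([c[0] for c in ranked])
```

The paper's central finding, that different search tasks need different settings, would correspond here to choosing different weights (or dropping a term entirely) per task.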
Development and tuning of an original search engine for patent libraries in medicinal chemistry
2014-01-01
Background The large increase in the size of patent collections has led to the need of efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. Methods We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which covers the putatively most frequent search behaviours of intellectual property officers in medical chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. Results The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries was improving retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental for search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluate. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. Conclusions We have presented the development of a search engine dedicated to patent search, based on state of the art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks clearly increases the effectiveness of the system. We conclude that different search tasks demand different information retrieval engines' settings in order to yield optimal end-user retrieval. PMID:24564220
TurboTech Technical Evaluation Automated System
NASA Technical Reports Server (NTRS)
Tiffany, Dorothy J.
2009-01-01
TurboTech software is a Web-based process that simplifies and semiautomates technical evaluation of NASA proposals for Contracting Officer's Technical Representatives (COTRs). At the time of this reporting, there have been no set standards or systems for training new COTRs in technical evaluations. This new process provides boilerplate text in response to interview-style questions. This text is collected into a Microsoft Word document that can then be further edited to conform to specific cases. By providing technical language and a structured format, TurboTech allows COTRs to concentrate more on the actual evaluation and less on deciding what language would be most appropriate. Since the actual word choice is one of the more time-consuming parts of a COTR's job, this process should allow for an increase in the quantity of proposals evaluated. TurboTech is applicable to composing technical evaluations of contractor proposals, task and delivery orders, change order modifications, requests for proposals, new work modifications, task assignments, as well as any changes to existing contracts.
Stager, Ron; Chambers, Douglas; Wiatzka, Gerd; Dupre, Monica; Callough, Micah; Benson, John; Santiago, Erwin; van Veen, Walter
2017-04-01
The Port Hope Area Initiative is a project mandated and funded by the Government of Canada to remediate properties with legacy low-level radioactive waste contamination in the Town of Port Hope, Ontario. The management and use of large amounts of data from surveys of some 4800 properties is a significant task critical to the success of the project. A large amount of information is generated through the surveys, including scheduling of individual field visits to the properties, capture of field data, laboratory sample tracking, QA/QC, property report generation and project management reporting. Web-mapping tools were used to track and display temporal progress of various tasks and facilitated consideration of spatial associations of contamination levels. The IM system facilitated the management and integrity of the large amounts of information collected, evaluation of spatial associations, automated report reproduction, and consistent application and traceable execution for this project. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.
2017-12-01
Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy, security mechanisms and reduced total cost of ownership compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
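The compatibility pattern described, client libraries that mirror the h5py and netCDF4 APIs, means existing analysis code can switch to the object-store-backed service by changing an import. Below is a minimal sketch assuming the h5pyd client package; the endpoint URL, domain path, and dataset name are placeholders, not real resources.

```python
# Minimal sketch of the drop-in client pattern: code written against the
# h5py API reads from the web service instead. Assumes the h5pyd package;
# endpoint, domain, and dataset names below are placeholders.
import h5pyd as h5py  # same API surface as `import h5py`

f = h5py.File("/shared/nasa/sample.h5", "r",        # hypothetical domain path
              endpoint="http://hsds.example.org")   # hypothetical service endpoint
dset = f["/temperature"]                            # hypothetical dataset
print(dset.shape, dset.dtype)
print(dset[0:10, 0:10])   # the service reads only the chunks this slice needs
f.close()
```

Because slicing is resolved server-side against object-store chunks, clients never download whole files, which is where the scale-out advantage for analytics comes from.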
Business logic for geoprocessing of distributed geodata
NASA Astrophysics Data System (ADS)
Kiehle, Christian
2006-12-01
This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).
In-camera video-stream processing for bandwidth reduction in web inspection
NASA Astrophysics Data System (ADS)
Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.
1996-02-01
Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data-stream bandwidth-reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
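The bandwidth-reduction principle is that the camera emits only pixels deviating from the expected web surface, rather than the full video stream. The Python sketch below mimics that behaviour for one scan line; the reference level, tolerance, and injected defect are toy values, whereas the real system implements such logic in FPGA hardware with a FIFO for defect bursts.

```python
# Minimal sketch of in-camera bandwidth reduction: threshold a scan line
# against a reference level and emit only the defect spans (position plus
# pixel run), as an FPGA FIFO might. Toy values throughout.
import numpy as np

def defect_spans(line, reference=128, tol=20):
    """Return (start, values) runs where the line deviates from the web."""
    mask = np.abs(line.astype(int) - reference) > tol
    spans, start = [], None
    for i, flagged in enumerate(mask):
        if flagged and start is None:
            start = i                       # defect run begins
        elif not flagged and start is not None:
            spans.append((start, line[start:i].tolist()))
            start = None                    # defect run ends
    if start is not None:
        spans.append((start, line[start:].tolist()))
    return spans

scan = np.full(2048, 128, dtype=np.uint8)
scan[500:504] = 30                          # a dark streak on the web
print(defect_spans(scan))                   # only ~4 pixels leave the "camera"
```

For a mostly defect-free web this reduces the output from thousands of pixels per line to a handful, which is precisely why the low-bandwidth output bus suffices.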
Kim, Sun; Kim, Won; Wei, Chih-Hsuan; Lu, Zhiyong; Wilbur, W John
2012-01-01
The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical-gene interactions, chemical-disease relationships and gene-disease relationships. Finding articles containing this information is the first, and an important, step in assisting manual curation efficiency. However, the complex nature of named entities and their relationships makes it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein-protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team.
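Mean average precision, the metric by which the system topped the official runs, averages per-query average precision over all queries: precision is sampled at each rank where a relevant article appears. A minimal sketch with toy relevance rankings follows; here average precision is normalised by the number of relevant items found in the list, a common simplification.

```python
# Minimal sketch of mean average precision (MAP) over ranked result lists.
# Each list holds 0/1 relevance flags in ranked order; toy data only.
def average_precision(ranked_relevance):
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank          # precision at each relevant rank
    return total / max(hits, 1)

queries = [[1, 0, 1, 0], [0, 1, 1, 1]]    # toy relevance-ranked results
print(sum(average_precision(q) for q in queries) / len(queries))
```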
ISS Operations Cost Reductions Through Automation of Real-Time Planning Tasks
NASA Technical Reports Server (NTRS)
Hall, Timothy A.; Clancey, William J.; McDonald, Aaron; Toschlog, Jason; Tucker, Tyson; Khan, Ahmed; Madrid, Steven (Eric)
2011-01-01
In 2007 the Johnson Space Center's Mission Operations Directorate (MOD) management team challenged their organizations to find ways to reduce the cost of operations for supporting the International Space Station (ISS) in the Mission Control Center (MCC). Each MOD organization was asked to define and execute projects that would help them attain cost reductions by 2012. The MOD Operations Division Flight Planning Branch responded to this challenge by launching several software automation projects that would allow them to greatly improve console operations, reduce ISS console staffing and, in turn, reduce operating costs. These tasks ranged from improving the management and integration of mission plan changes, to automating the uploading and downloading of information to and from the ISS and the associated ground complex tasks that required multiple decision points. The software solutions leveraged several different technologies, including customized web applications and implementation of an industry-standard web services architecture, as well as engaging a TRL 4-5 technology previously developed by Ames Research Center (ARC) that utilized an intelligent agent-based system to manage and automate file traffic flow, archive data, and generate console logs. These projects to date have allowed the MOD Operations organization to remove one full-time (7 x 24 x 365) ISS console position in 2010, with the goal of eliminating a second full-time ISS console support position by 2012. The team will also reduce one long-range planning console position by 2014. When complete, these Flight Planning Branch projects will account for the elimination of 3 console positions and a reduction in staffing of 11 engineering personnel (EP) for ISS.
Designing an information search interface for younger and older adults.
Pak, Richard; Price, Margaux M
2008-08-01
The present study examined Web-based information retrieval as a function of age for two information organization schemes: hierarchical organization and one organized around tags or keywords. Older adults' performance in information retrieval tasks has traditionally been lower compared with younger adults'. The current study examined the degree to which information organization moderated age-related performance differences on an information retrieval task. The theory of fluid and crystallized intelligence may provide insight into different kinds of information architectures that may reduce age-related differences in computer-based information retrieval performance. Fifty younger (18-23 years of age) and 50 older (55-76 years of age) participants browsed a Web site for answers to specific questions. Half of the participants browsed the hierarchically organized system (taxonomy), which maintained a one-to-one relationship between menu link and page, whereas the other half browsed the tag-based interface, with a many-to-one relationship between menu and page. This difference was expected to interact with age-related differences in fluid and crystallized intelligence. Age-related differences in information retrieval performance persisted; however, a tag-based retrieval interface reduced age-related differences, as compared with a taxonomical interface. Cognitive aging theory can lead to interface interventions that reduce age-related differences in performance with technology. In an information retrieval paradigm, older adults may be able to leverage their increased crystallized intelligence to offset fluid intelligence declines in a computer-based information search task. More research is necessary, but the results suggest that information retrieval interfaces organized around keywords may reduce age-related differences in performance.
Enabling task-based information prioritization via semantic web encodings
NASA Astrophysics Data System (ADS)
Michaelis, James R.
2016-05-01
Modern Soldiers rely upon accurate and actionable information technology to achieve mission objectives. While increasingly rich sensor networks for Areas of Operation (AO) can offer many directions for aiding Soldiers, limitations are imposed by current tactical edge systems on the rate that content can be transmitted. Furthermore, mission tasks will often require very specific sets of information which may easily be drowned out by other content sources. Prior research on Quality and Value of Information (QoI/VoI) has aimed to define ways to prioritize information objects based on their intrinsic attributes (QoI) and perceived value to a consumer (VoI). As part of this effort, established ranking approaches for obtaining Subject Matter Expert (SME) recommendations, such as the Analytic Hierarchy Process (AHP) have been considered. However, limited work has been done to tie Soldier context - such as descriptions of their mission and tasks - back to intrinsic attributes of information objects. As a first step toward addressing the above challenges, this work introduces an ontology-backed approach - rooted in Semantic Web publication practices - for expressing both AHP decision hierarchies and corresponding SME feedback. Following a short discussion on related QoI/VoI research, an ontology-based data structure is introduced for supporting evaluation of Information Objects, using AHP rankings designed to facilitate information object prioritization. Consistent with alternate AHP approaches, prioritization in this approach is based on pairwise comparisons between Information Objects with respect to established criteria, as well as on pairwise comparison of the criteria to assess their relative importance. The paper concludes with a discussion of both ongoing and future work.
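The core AHP computation that the proposed ontology encodes can be stated compactly: given a pairwise comparison matrix over criteria, the principal eigenvector yields the priority weights, and a consistency ratio checks the coherence of the SME judgments. The following minimal Python sketch shows this standard calculation; the criteria and judgment values are illustrative, not drawn from the paper.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical criteria
# (e.g., timeliness, provenance, resolution); values are illustrative.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
])

# The principal eigenvector gives the AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency ratio (CR) checks that SME judgments are coherent;
# RI = 0.58 is the standard random index for a 3x3 matrix.
lambda_max = eigvals.real.max()
ci = (lambda_max - 3) / (3 - 1)
cr = ci / 0.58
print(weights, cr)  # CR below ~0.1 is conventionally acceptable
```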
DecoFungi: a web application for automatic characterisation of dye decolorisation in fungal strains.
Domínguez, César; Heras, Jónathan; Mata, Eloy; Pascual, Vico
2018-02-27
Fungi have diverse biotechnological applications in, among others, agriculture, bioenergy generation, and remediation of polluted soil and water. In this context, culture media based on color change in response to degradation of dyes are particularly relevant; but measuring dye decolorisation of fungal strains mainly relies on a visual and semiquantitative classification of color intensity changes. Such a classification is a subjective, time-consuming and difficult-to-reproduce process. DecoFungi is, to the best of our knowledge, the first application to automatically characterise the dye decolorisation level of fungal strains from images of inoculated plates. To deal with this task, DecoFungi employs a deep-learning model, accessible through a user-friendly web interface, with an accuracy of 96.5%. DecoFungi is an easy-to-use system for characterising the dye decolorisation level of fungal strains from images of inoculated plates.
Executing Medical Guidelines on the Web: Towards Next Generation Healthcare
NASA Astrophysics Data System (ADS)
Argüello, M.; Des, J.; Fernandez-Prieto, M. J.; Perez, R.; Paniagua, H.
There is still a lack of full integration between current Electronic Health Records (EHRs) and medical guidelines that encapsulate evidence-based medicine. Thus, general practitioners (GPs) and specialised physicians still have to read document-based medical guidelines and decide among various options for managing common non-life-threatening conditions, where selecting the most appropriate therapeutic option for each individual patient can be a difficult task. This paper presents a simulation framework and computational test-bed, called the V.A.F. Framework, for supporting simulations of clinical situations. The framework boosts the integration between Health Level Seven (HL7) and Semantic Web technologies (OWL, SWRL, and OWL-S) to achieve content-layer interoperability between online clinical cases and medical guidelines. It thereby shows that closer integration between EHRs and evidence-based medicine can be accomplished, which could lead to a next generation of healthcare systems that provide more support to physicians and increase patients' safety.
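The abstract does not spell out how content-layer interoperability is realized. As a rough illustration of the general idea, the sketch below parses a pipe-delimited HL7 v2-style diagnosis segment and maps its code onto a hypothetical guideline resource; the segment, codes, and mapping table are invented for illustration and are not the framework's actual data.

```python
# Minimal sketch of content-layer interoperability: parse an HL7 v2-style
# segment and bridge its coded diagnosis to a (hypothetical) guideline URI.
HL7_DX = "DG1|1||H10.9^Conjunctivitis, unspecified^ICD10"

GUIDELINE_INDEX = {  # illustrative mapping, not from the paper
    "H10.9": "http://example.org/guidelines#conjunctivitis-management",
}

fields = HL7_DX.split("|")            # HL7 v2 uses pipe-delimited fields
code = fields[3].split("^")[0]        # coded element: code^text^system
print(code, "->", GUIDELINE_INDEX.get(code, "no guideline matched"))
```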
Named Entity Recognition in a Hungarian NL Based QA System
NASA Astrophysics Data System (ADS)
Tikkl, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor
In the WoW project our purpose is to create a complex search interface with the following features: search in the deep web content of contracted partners' databases, processing of Hungarian natural language (NL) questions and their transformation to SQL queries for database access, and image search supported by a visual thesaurus that describes in structural form the visual content of images (also in Hungarian). This paper primarily focuses on a particular problem of the question processing task: entity recognition. Before going into details we give a short overview of the project's aims.
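The paper's Hungarian entity recognizer is not described in this abstract; the toy sketch below shows only the general shape of gazetteer-based entity tagging as a preprocessing step before NL-to-SQL translation. The lexicon entries and tags are illustrative.

```python
import re

# Toy gazetteer-based entity recognizer: a minimal sketch of the kind of
# preprocessing a question needs before it can be translated to SQL.
GAZETTEER = {"Budapest": "CITY", "Danube": "RIVER"}  # illustrative entries

def tag_entities(question: str):
    """Tokenize and tag each token with an entity class or 'O' (other)."""
    tokens = re.findall(r"\w+|\S", question)
    return [(t, GAZETTEER.get(t, "O")) for t in tokens]

print(tag_entities("Which hotels in Budapest overlook the Danube ?"))
```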
NASA Technical Reports Server (NTRS)
Hughitt, Brian; Generazio, Edward (Principal Investigator); Nichols, Charles; Myers, Mika (Principal Investigator); Spencer, Floyd (Principal Investigator); Waller, Jess (Principal Investigator); Wladyka, Jordan (Principal Investigator); Aldrin, John; Burke, Eric; Cerecerez, Laura;
2016-01-01
NASA-STD-5009 requires that successful flaw detection by NDE methods be statistically qualified for use on fracture critical metallic components, but does not standardize practices. This task works towards standardizing calculations and record retention with a web-based tool, the NNWG POD Standards Library or NPSL. Test methods will also be standardized with an appropriately flexible appendix to -5009 identifying best practices. Additionally, this appendix will describe how specimens used to qualify NDE systems will be cataloged, stored and protected from corrosion, damage, or loss.
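A standard way to statistically qualify flaw detection is a hit/miss probability-of-detection (POD) curve fitted by logistic regression, from which characteristic flaw sizes such as a90 are read off. The sketch below shows this generic calculation on invented data; it is not the NPSL's actual implementation.

```python
import numpy as np

# Hit/miss demonstration data: flaw sizes (mm) and outcomes (1 = detected).
# Values are illustrative, not from any NASA dataset.
size = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5])
hit  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])

# Fit POD(a) = 1 / (1 + exp(-(b0 + b1*a))) by simple gradient ascent
# on the log-likelihood of the hit/miss observations.
b0, b1 = 0.0, 0.0
for _ in range(20000):
    p = 1 / (1 + np.exp(-(b0 + b1 * size)))
    b0 += 0.01 * np.sum(hit - p)
    b1 += 0.01 * np.sum((hit - p) * size)

# a90: the flaw size detected with 90% probability (logit(0.9) = ln 9).
a90 = (np.log(9) - b0) / b1
print(f"a90 ~ {a90:.2f} mm")
```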
Gruber, Andreas R; Bernhart, Stephan H; Lorenz, Ronny
2015-01-01
The ViennaRNA package is a widely used collection of programs for thermodynamic RNA secondary structure prediction. Over the years, many additional tools have been developed building on the core programs of the package to also address issues related to noncoding RNA detection, RNA folding kinetics, or efficient sequence design considering RNA-RNA hybridizations. The ViennaRNA web services provide easy and user-friendly web access to these tools. This chapter describes how to use this online platform to perform tasks such as prediction of minimum free energy structures, prediction of RNA-RNA hybrids, or noncoding RNA detection. The ViennaRNA web services can be used free of charge and can be accessed via http://rna.tbi.univie.ac.at.
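Besides the web services, the ViennaRNA package also ships Python bindings, so the same core task, minimum free energy structure prediction, can be scripted locally. A minimal example, assuming the bindings are installed and using an invented sequence:

```python
# The abstract describes the web interface; the equivalent local call uses
# the ViennaRNA package's Python binding.
import RNA

seq = "GGGAAAUCCCGAAAGGGAUUUCCC"   # illustrative RNA sequence
structure, mfe = RNA.fold(seq)    # minimum free energy secondary structure
print(structure, f"{mfe:.2f} kcal/mol")
```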
Human exposure assessment resources on the World Wide Web.
Schwela, Dieter; Hakkinen, Pertti J
2004-05-20
Human exposure assessment is frequently noted as a weak link and bottleneck in the risk assessment process. Fortunately, the World Wide Web and Internet are providing access to numerous valuable sources of human exposure assessment-related information, along with opportunities for information exchange. Internet mailing lists are available as potential online help for exposure assessment questions, e.g. RISKANAL has several hundred members from numerous countries. Various Web sites provide opportunities for training, e.g. Web sites offering general human exposure assessment training include two from the US Environmental Protection Agency (EPA) and four from the US National Library of Medicine. Numerous other Web sites offer access to a wide range of exposure assessment information. For example, the (US) Alliance for Chemical Awareness Web site addresses direct and indirect human exposures, occupational exposures and ecological exposure assessments. The US EPA's Exposure Factors Program Web site provides a focal point for current information and data on exposure factors relevant to the United States. In addition, the International Society of Exposure Analysis Web site provides information about how this society seeks to foster and advance the science of exposure analysis. A major opportunity exists for risk assessors and others to broaden the level of exposure assessment information available via Web sites. Broadening the Web's exposure information could include human exposure factors-related information about country- or region-specific ranges in body weights, drinking water consumption, etc. along with residential factors-related information on air changeovers per hour in various types of residences. Further, country- or region-specific ranges on how various tasks are performed by various types of consumers could be collected and provided. Noteworthy are that efforts are underway in Europe to develop a multi-country collection of exposure factors and the European Commission is in the early stages of planning and developing a Web-accessible information system (EIS-ChemRisks) to serve as a single gateway to all major European initiatives on human exposure to chemicals contained and released from cleaning products, textiles, toys, etc.
Web-based learning resources - new opportunities for competency development.
Moen, Anne; Nygård, Kathrine A; Gauperaa, Torunn
2009-01-01
Creating web-based learning environments holds great promise for on the job training and competence development in nursing. The web-based learning environment was designed and customized by four professional development nurses. We interviewed five RNs that pilot tested the web-based resource. Our findings give some insight into how the web-based design tool are perceived and utilized, and how content is represented in the learning environment. From a competency development perspective, practicing authentic tasks in a web-based learning environment can be useful to train skills and keep up important routines. The approach found in this study also needs careful consideration. Emphasizing routines and skills can be important to reduce variation and ensure more streamlined practice from an institution-wide quality improvement efforts. How the emphasis on routines and skills plays out towards the individual's overall professional development needs further careful studies.
Semantic Web-based digital, field and virtual geological
NASA Astrophysics Data System (ADS)
Babaie, H. A.
2012-12-01
Digital, field and virtual Semantic Web-based education (SWBE) of geological mapping requires the construction of a set of searchable, reusable, and interoperable digital learning objects (LO) for learners, teachers, and authors. These self-contained units of learning may be text, image, or audio, describing, for example, how to calculate the true dip of a layer from two structural contours or find the apparent dip along a line of section. A collection of multi-media LOs can be integrated, through domain and task ontologies, with mapping-related learning activities and Web services, for example, to search for the description of lithostratigraphic units in an area, or plotting orientation data on stereonet. Domain ontologies (e.g., GeologicStructure, Lithostratigraphy, Rock) represent knowledge in formal languages (RDF, OWL) by explicitly specifying concepts, relations, and theories involved in geological mapping. These ontologies are used by task ontologies that formalize the semantics of computational tasks (e.g., measuring the true thickness of a formation) and activities (e.g., construction of cross section) for all actors to solve specific problems (making map, instruction, learning support, authoring). A SWBE system for geological mapping should also involve ontologies to formalize teaching strategy (pedagogical styles), learner model (e.g., for student performance, personalization of learning), interface (entry points for activities of all actors), communication (exchange of messages among different components and actors), and educational Web services (for interoperability). In this ontology-based environment, actors interact with the LOs through educational servers, that manage (reuse, edit, delete, store) ontologies, and through tools which communicate with Web services to collect resources and links to other tools. Digital geological mapping involves a location-based, spatial organization of geological elements in a set of GIS thematic layers. Each layer in the stack assembles a set of polygonal (e.g., formation, member, intrusion), linear (e.g., fault, contact), and/or point (e.g., sample or measurement site) geological elements. These feature classes, represented in domain ontologies by classes, have their own sets of property (attribute, association relation) and topological (e.g., overlap, adjacency, containment), and network (cross-cuttings; connectivity) relationships. Since geological mapping involves describing and depicting different aspects of each feature class (e.g., contact, formation, structure), the same geographic region may be investigated by different communities, for example, for its stratigraphy, rock type, structure, soil type, and isotopic and paleontological age, using sets of ontologies. These data can become interconnected applying the Semantic Web technologies, on the Linked Open Data Cloud, based on their underlying common geographic coordinates. Sets of geological data published on the Cloud will include multiple RDF links to Cloud's geospatial nodes such as GeoNames and Linked GeoData. During mapping, a device such as smartphone, laptop, or iPad, with GPS and GIS capability and a DBpedia Mobile client, can use the current position to discover and query all the geological linked data, and add new data to the thematic layers and publish them to the Cloud.
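As a small illustration of publishing a mapped geological element as linked data, the sketch below describes a contact with WGS84 coordinates and links it to a GeoNames node using rdflib. The mapping ontology namespace, property names, and the GeoNames identifier are hypothetical, chosen only to show the pattern.

```python
from rdflib import Graph, Namespace, Literal, RDF, URIRef

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
EX = Namespace("http://example.org/geomap#")  # hypothetical mapping ontology

g = Graph()
g.bind("geo", GEO)
g.add((EX.contact17, RDF.type, EX.Contact))       # a linear map element
g.add((EX.contact17, EX.separates, EX.formationA))
g.add((EX.contact17, GEO.lat, Literal(33.7490)))  # shared coordinates are
g.add((EX.contact17, GEO.long, Literal(-84.3880)))  # the interlinking key
# Link into the Linked Open Data Cloud via a GeoNames node (illustrative).
g.add((EX.contact17, EX.nearFeature, URIRef("http://sws.geonames.org/4180439/")))

print(g.serialize(format="turtle"))
```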
The aware toolbox for the detection of law infringements on web pages
NASA Astrophysics Data System (ADS)
Shahab, Asif; Kieninger, Thomas; Dengel, Andreas
2010-01-01
In the project Aware we aim to develop an automatic assistant for the detection of law infringements on web pages. The motivation for this project is that many authors of web pages are at some point infringing copyright or other laws, mostly without being aware of that fact, and are more and more often confronted with costly legal warnings. As the legal environment is constantly changing, an important requirement of Aware is that the domain knowledge can be maintained (and initially defined) by numerous legal experts working remotely without further assistance from the computer scientists. Consequently, the software platform was chosen to be a web-based generic toolbox that can be configured to suit individual analysis experts, definitions of analysis flow, information gathering and report generation. The report generated by the system summarizes all critical elements of a given web page and provides case-specific hints to the page author, and thus forms a new type of service. Regarding the analysis subsystems, Aware mainly builds on existing state-of-the-art technologies. Their usability has been evaluated for each intended task. In order to control the heterogeneous analysis components and to gather the information, a lightweight scripting shell has been developed. This paper describes the analysis technologies, ranging from text-based information extraction, over optical character recognition and phonetic fuzzy string matching, to a set of image analysis and retrieval tools, as well as the scripting language to define the analysis flow.
ERIC Educational Resources Information Center
Puerta Melguizo, Mari Carmen; Vidya, Uti; van Oostendorp, Herre
2012-01-01
We studied the effects of menu type, navigation path complexity and spatial ability on information retrieval performance and web disorientation or lostness. Two innovative aspects were included: (a) navigation path relevance and (b) information gathering tasks. As expected we found that, when measuring aspects directly related to navigation…
Using Virtual Reality for Task-Based Exercises in Teaching Non-Traditional Students of German
ERIC Educational Resources Information Center
Libbon, Stephanie
2004-01-01
Using task-based exercises that required web searches and online activities, this course introduced non-traditional students to the sights and sounds of the German culture and language and simultaneously to computer technology. Through partner work that required negotiation of the net as well as of the language, these adult beginning German…
Support of US CLIVAR Project Office 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cummings, Donna
Director of JOSS: supervised the U.S. CLIVAR Project Office Director and helped direct the office to enhance the goals and objectives of the U.S. CLIVAR Project and budget. Financial Manager of JOSS: worked to complete proposals, monitor compliance with award requirements and funding limitations, and ensure the U.S. CLIVAR Project Office complied with UCAR policies and procedures. Project Coordinator: administered the funding for the U.S. CLIVAR Project Office and was responsible for coordinating special projects that required additional support from JOSS technical staff; these projects included activities such as website updates, technology upgrades, production of printed reports, and development of graphic elements like logos. Web Developer: worked on both web development and graphics. The web development work consisted of maintaining the site, including installing updates to the Drupal CMS (Content Management System); creating new templates for webpages and styling them with CSS and JavaScript/jQuery code; fixing the styling on webpages that the content contributor/manager (Jenn Mays) created and had trouble with; creating new web forms for abstract uploading, subscriptions, and meeting registrations; creating 4 webpages for the "ASP: Key Uncertainties in the Global Carbon-Cycle" meeting; and developing a document review form, instruction webpages, a login redirect, and a dynamic table of form submissions for the US CLIVAR SSC Science Plan Document Review. The review was open to the public from June 12, 2013 until July 10, 2013; during this time, user accounts created by the public had to be checked daily to delete any spam accounts. The graphics work included preparing images for general use on webpages, webpage banners, and meeting name badges; creating a US CLIVAR letterhead; and redesigning the US AMOC logo. System Administrator: worked on the migration of the US CLIVAR site from the USGCRP office to UCAR in Boulder. This was done to increase the general speed of the site and to allow the web developer to work in it more efficiently. The main tasks were to archive the old site, create a new development site for the web developer, and move the web address to the new website when development was finished. There are no patents or equipment related to this proposal.
ERIC Educational Resources Information Center
Paladino, Emily B.; Klentzin, Jacqueline C.; Mills, Chloe P.
2017-01-01
Based on in-person, task-based usability testing and interviews, the authors' library Web site was recently overhauled in order to improve user experience. This led to the authors' interest in additional usability testing methods and test environments that would most closely fit their library's goals and situation. The appeal of card sorting…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lurie, Gordon
2007-01-02
The cell phone software allows any Java enabled cell phone to view sensor and meteorological data via an internet connection using a secure connection to the CB-EMIS Web Service. Users with appropriate privileges can monitor the state of the sensors and perform simple maintenance tasks remotely. All sensitive data is downloaded from the web service, thus protecting sensitive data in the event a cell phone is lost.
ERIC Educational Resources Information Center
Yang, Chien-Hui; Tzuo, Pei Wen; Komara, Cecile
2011-01-01
Developed by Dodge (1995), WebQuest is an inquiry-based teaching tool, in which students of all ages and levels participate in an authentic task that use pre-designed, pre-defined internet resources, though other print resources can also be used. Learners will put the focus on gathering, summarizing, synthesizing, and evaluating the information…
ERIC Educational Resources Information Center
Rappolt-Schlichtmann, Gabrielle; Daley, Samantha G.; Lim, Seoin; Lapinski, Scott; Robinson, Kristin H.; Johnson, Mindy
2013-01-01
Science notebooks can play a critical role in activity-based science learning, but the tasks of recording, organizing, analyzing, and interpreting data create barriers that impede science learning for many students. This study (a) assessed in a randomized controlled trial the potential for a web-based science notebook designed using the Universal…
Ontology-Based Administration of Web Directories
NASA Astrophysics Data System (ADS)
Horvat, Marko; Gledec, Gordan; Bogunović, Nikola
Administration of a Web directory and maintenance of its content and the associated structure is a delicate and labor-intensive task performed exclusively by human domain experts. Consequently, there is an imminent risk of a directory's structure becoming unbalanced, uneven and difficult to use for all except a few users proficient with the particular Web directory and its domain. These problems emphasize the need to establish two important things: i) generic and objective measures of Web directory structure quality, and ii) a mechanism for fully automated development of a Web directory's structure. In this paper we demonstrate how to formally and fully integrate Web directories with the Semantic Web vision. We propose a set of criteria for evaluating a Web directory's structure quality. Some criterion functions are based on heuristics while others require the application of ontologies. We also suggest an ontology-based algorithm for the construction of Web directories. By using ontologies to describe the semantics of Web resources and Web directories' categories it is possible to define algorithms that can build or rearrange the structure of a Web directory. Assessment procedures can provide feedback and help steer the ontology-based construction process. The issues raised in the article can be equally applied to new and existing Web directories.
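The abstract does not give the criterion functions themselves; the sketch below shows one plausible heuristic of the kind proposed, scoring the evenness of a directory's branching so that an unbalanced structure stands out. The tree and the metric are illustrative, not the authors' exact criteria.

```python
import statistics

# Toy Web directory: category -> subcategories. Names are illustrative.
directory = {
    "Science": ["Physics", "Chemistry", "Biology"],
    "Physics": ["Optics", "Mechanics"],
    "Chemistry": [],
    "Biology": ["Ecology", "Genetics", "Zoology", "Botany"],
    "Optics": [], "Mechanics": [], "Ecology": [],
    "Genetics": [], "Zoology": [], "Botany": [],
}

def imbalance(tree):
    """Population stdev of branching factors: lower = more even structure."""
    branching = [len(kids) for kids in tree.values() if kids]
    return statistics.pstdev(branching)

print(f"imbalance: {imbalance(directory):.2f}")
```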
Problems and challenges in patient information retrieval: a descriptive study.
Kogan, S.; Zeng, Q.; Ash, N.; Greenes, R. A.
2001-01-01
Many patients now turn to the Web for health care information. However, a lack of domain knowledge and unfamiliarity with medical vocabulary and concepts restrict their ability to successfully obtain information they seek. The purpose of this descriptive study was to identify and classify the problems a patient encounters while performing information retrieval tasks on the Web, and the challenges it poses to informatics research. In this study, we observed patients performing various retrieval tasks, and measured the effectiveness of, satisfaction with, and usefulness of the results. Our study showed that patient information retrieval often failed to produce successful results due to a variety of problems. We propose a classification of patient IR problems based on our observations. PMID:11825205
An ontological knowledge framework for adaptive medical workflow.
Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir
2008-10-01
As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained, and as a result, they make machine intelligence possible. Healthcare systems can adopt these technologies to make them ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology thus makes our vision of personalized healthcare possible by capturing all necessary knowledge for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology facilitates a workflow management system that allows users, from physicians to administrative assistants, to manage and even create context-aware new medical workflows and execute them on-the-fly.
Ergatis: a web interface and scalable software system for bioinformatics workflows
Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.
2010-01-01
Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
Astronomical Instrumentation System Markup Language
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse M.
2016-05-01
The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
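Since AISML is XML-based, any standard XML tooling can consume it. The sketch below parses a small, invented AISML-like fragment with Python's ElementTree; the element names are assumptions, as the abstract does not define the schema.

```python
import xml.etree.ElementTree as ET

# Invented AISML-like fragment; the real schema is defined by the paper's
# sample files, not reproduced here.
aisml = """
<ais name="CampusObservatory">
  <instrument type="telescope">
    <aperture unit="m">0.4</aperture>
    <mount>equatorial</mount>
  </instrument>
  <instrument type="camera">
    <pixels>4096x4096</pixels>
  </instrument>
</ais>
"""

root = ET.fromstring(aisml)
for inst in root.findall("instrument"):
    # Collect each instrument's child elements into a simple dict.
    print(inst.get("type"), {c.tag: c.text for c in inst})
```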
OntoPop: An Ontology Population System for the Semantic Web
NASA Astrophysics Data System (ADS)
Thongkrau, Theerayut; Lalitrojwong, Pattarachai
The development of an ontology at the instance level requires extracting the terms defining the instances from various data sources. These instances are then linked to the concepts of the ontology, and relationships are created between these instances in the next step. However, before establishing links among data, ontology engineers must classify terms or instances from a web document into an ontology concept. The tool that helps ontology engineers with this task is called an ontology population system. Existing approaches are not well suited to ontology development applications because of long processing times and difficulty analyzing large or noisy data sets. The OntoPop system introduces a methodology to solve these problems, which comprises two parts. First, we select meaningful features from syntactic relations, which can produce more significant features than any other method. Second, we differentiate feature meanings and reduce noise based on latent semantic analysis. Experimental evaluation demonstrates that OntoPop works well, achieving an accuracy of 49.64%, a learning accuracy of 76.93%, and an execution time of 5.46 seconds per instance.
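The second stage, differentiating feature meanings with latent semantic analysis, can be illustrated with a generic LSA pipeline: build a term-context matrix and reduce it with truncated SVD so that instances appearing in similar contexts land near each other. The toy contexts below are invented; this is not OntoPop's code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy contexts for candidate instances; LSA compresses the noisy feature
# space, in the spirit of OntoPop's noise-reduction stage.
contexts = [
    "guitarist played a concert with the band",
    "band released a new album and toured",
    "river flows through the valley to the sea",
    "water level of the river rose after rain",
]

X = CountVectorizer().fit_transform(contexts)            # term-context matrix
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(Z)  # contexts with related meanings get similar low-dim coordinates
```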
NASA Scientific Data Purchase Project: From Collection to User
NASA Technical Reports Server (NTRS)
Nicholson, Lamar; Policelli, Fritz; Fletcher, Rose
2002-01-01
NASA's Scientific Data Purchase (SDP) project is currently a $70 million operation managed by the Earth Science Applications Directorate at Stennis Space Center. The SDP project was developed in 1997 to purchase scientific data from commercial sources for distribution to NASA Earth science researchers. Our current data holdings include 8TB of remote sensing imagery consisting of 18 products from 4 companies. Our anticipated data volume is 60 TB by 2004, and we will be receiving new data products from several additional companies. Our current system capacity is 24 TB, expandable to 89 TB. Operations include tasking of new data collections, archive ordering, shipment verification, data validation, distribution, metrics, finances, customer feedback, and technical support. The program has been included in the Stennis Space Center Commercial Remote Sensing ISO 9001 registration since its inception. Our operational system includes automatic quality control checks on data received (with MatLab analysis); internally developed, custom Web-based interfaces that tie into commercial-off-the-shelf software; and an integrated relational database that links and tracks all data through operations. We've distributed nearly 1500 datasets, and almost 18,000 data files have been downloaded from our public web site; on a 10-point scale, our customer satisfaction index is 8.32 at a 23% response level. More information about the SDP is available on our Web site.
2009-06-01
search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction
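Ranking pages by references from other pages regardless of domain is the classic link-analysis setting; a generic PageRank-style power iteration, sketched below on an invented four-page graph, captures the idea (the thesis's actual Nutch-based algorithm is not given in this excerpt).

```python
import numpy as np

# Invented link graph: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85  # d is the usual damping factor

# Column-stochastic transition matrix: M[dst, src] = 1 / outdegree(src).
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1 / len(outs)

r = np.full(n, 1 / n)
for _ in range(100):                 # power iteration to convergence
    r = (1 - d) / n + d * M @ r
print(r)  # page 2, referenced from every other page, ranks highest
```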
Web servicing the biological office.
Szugat, Martin; Güttler, Daniel; Fundel, Katrin; Sohler, Florian; Zimmer, Ralf
2005-09-01
Biologists routinely use Microsoft Office applications for standard analysis tasks. Despite ubiquitous internet resources, information needed for everyday work is often not directly and seamlessly available. Here we describe a very simple and easily extendable mechanism using Web Services to enrich standard MS Office applications with internet resources. We demonstrate its capabilities by providing a Web-based thesaurus for biological objects, which maps names to database identifiers and vice versa via an appropriate synonym list. The client application ProTag makes these features available in MS Office applications using Smart Tags and Add-Ins. http://services.bio.ifi.lmu.de/prothesaurus/
75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
...] Electronic Filing System--Web (EFS-Web) Contingency Option. AGENCY: United States Patent and Trademark Office... availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web), by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS...
Enhancing UCSF Chimera through web services
Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.
2014-01-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
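The paper provides its own example Python program; as a library-agnostic stand-in, the sketch below simply queries the public Opal dashboard URL listed above for the advertised service list. The response format handling is an assumption, so the payload is just printed for inspection.

```python
import requests

# Query the Opal 2 dashboard cited in the abstract for its service list.
# This only uses the URL and query parameter shown in the text; how the
# response is structured is not specified there, so we print it raw.
url = "http://webservices.rbvi.ucsf.edu/opal2/dashboard"
resp = requests.get(url, params={"command": "serviceList"}, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # inspect the advertised services
```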
Xu, Yan; Wang, Yining; Sun, Jian-Tao; Zhang, Jianwen; Tsujii, Junichi; Chang, Eric
2013-01-01
To build large collections of medical terms from semi-structured information sources (e.g. tables, lists, etc.) and encyclopedia sites on the web. The terms are classified into the three semantic categories, Medical Problems, Medications, and Medical Tests, which were used in i2b2 challenge tasks. We developed two systems, one for Chinese and another for English terms. The two systems share the same methodology and use the same software with minimum language dependent parts. We produced large collections of terms by exploiting billions of semi-structured information sources and encyclopedia sites on the Web. The standard performance metric of recall (R) is extended to three different types of Recall to take the surface variability of terms into consideration. They are Surface Recall (R(S)), Object Recall (R(O)), and Surface Head recall (R(H)). We use two test sets for Chinese. For English, we use a collection of terms in the 2010 i2b2 text. Two collections of terms, one for English and the other for Chinese, have been created. The terms in these collections are classified as either of Medical Problems, Medications, or Medical Tests in the i2b2 challenge tasks. The English collection contains 49,249 (Problems), 89,591 (Medications) and 25,107 (Tests) terms, while the Chinese one contains 66,780 (Problems), 101,025 (Medications), and 15,032 (Tests) terms. The proposed method of constructing a large collection of medical terms is both efficient and effective, and, most of all, independent of language. The collections will be made publicly available. PMID:23874426
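Under our reading of the three recall variants, they differ only in how a gold term and an extracted term are matched: by exact surface string, by canonical object after synonym mapping, or by head word. A toy sketch with an invented synonym table:

```python
# Sketch of the three recall variants under our reading of the definitions:
# surface = exact string match; object = match after mapping surface
# variants to a canonical object; head = match on the head word only.
gold = {"myocardial infarction", "heart attack", "aspirin"}
found = {"heart attack", "aspirin"}

canonical = {"myocardial infarction": "MI", "heart attack": "MI",
             "aspirin": "ASA"}  # illustrative synonym table

r_surface = len(gold & found) / len(gold)
gold_obj = {canonical[t] for t in gold}
found_obj = {canonical[t] for t in found}
r_object = len(gold_obj & found_obj) / len(gold_obj)
gold_head = {t.split()[-1] for t in gold}
found_head = {t.split()[-1] for t in found}
r_head = len(gold_head & found_head) / len(gold_head)

print(r_surface, r_object, r_head)  # 0.67, 1.0, 0.67
```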
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
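A minimal CondorPy sketch of defining and submitting a compute job, as we understand the library's Job abstraction from its documentation; the script name, arguments, and input directory are hypothetical, and attribute names may differ across versions.

```python
# Sketch of queuing a modeling run through HTCondor via CondorPy.
from condorpy import Job, Templates

job = Job("hydro_run", Templates.vanilla_transfer_files)
job.executable = "run_model.py"        # hypothetical modeling script
job.arguments = "--basin test_basin"   # hypothetical arguments
job.transfer_input_files = "inputs/"   # stage model inputs to the node

job.submit()   # dispatch to the HTCondor-managed resources
job.wait()     # block until the run completes
```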
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, Amber; Harsch, Tim; Pitt, Julie
2007-08-31
The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's Genbank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, and various tools available on the project web-site (image.llnl.gov) that provide a search interface to the database.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
...-2506-01] RIN 0648-XC276. Science Advisory Board Satellite Task Force; Availability of Draft Report and... This notice is published on behalf of the NOAA Science Advisory Board (SAB) to announce the availability of the draft..., 2012. ADDRESSES: The Draft Report of the SATTF will be available on the NOAA Science Advisory Board Web...
Graphite Girls in a Gigabyte World: Managing the World Wide Web in 700 Square Feet
ERIC Educational Resources Information Center
Ogletree, Tamra; Saurino, Penelope; Johnson, Christie
2009-01-01
Our action research project examined the on-task and off-task behaviors of university-level students' use of wireless laptops in face-to-face classes in order to establish rules of wireless laptop etiquette in classroom settings. Participants in the case study of three university classrooms included undergraduate, graduate, and doctoral students.…
Web application for detailed real-time database transaction monitoring for CMS condition data
NASA Astrophysics Data System (ADS)
de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2012-12-01
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amounts of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases, allocated on several servers both inside and outside the CERN network. In this scenario, monitoring the different databases is a crucial database administration issue, since different information may be required depending on users' tasks such as data transfer, inspection, planning and security. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI we record traces of user interactions that are used to build use-case models. In addition, the application detects errors in database transactions (for example, identifying a mistake made by a user, an application failure, an unexpected network shutdown or a Structured Query Language (SQL) statement error) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep up with new developments in Web client tools, our application was further developed and new features were deployed.
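The error-detection idea, wrap each transaction, classify the failure, and surface a user-facing warning, can be sketched generically as below. The CMS application monitors Oracle; sqlite3 is used here only so the sketch stays self-contained, and the classification is illustrative.

```python
import sqlite3
import logging

logging.basicConfig(level=logging.INFO)

def monitored_execute(conn, sql, params=()):
    """Run one transaction and log classified failures instead of crashing."""
    try:
        with conn:  # commits on success, rolls back on error
            return conn.execute(sql, params).fetchall()
    except sqlite3.OperationalError as e:
        logging.warning("SQL statement error: %s", e)   # e.g. bad SQL
    except sqlite3.DatabaseError as e:
        logging.error("transaction failed: %s", e)      # e.g. connection loss

conn = sqlite3.connect(":memory:")
monitored_execute(conn, "CREATE TABLE conditions (tag TEXT, payload BLOB)")
monitored_execute(conn, "SELECT * FROM nonexistent")  # triggers a warning
```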
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
Towards systems neuroscience of ADHD: A meta-analysis of 55 fMRI studies
Cortese, Samuele; Kelly, Clare; Chabernaud, Camille; Proal, Erika; Di Martino, Adriana; Milham, Michael P.; Castellanos, F. Xavier
2013-01-01
Objective To perform a comprehensive meta-analysis of task-based functional MRI studies of Attention-Deficit/Hyperactivity Disorder (ADHD). Method PubMed, Ovid, EMBASE, Web of Science, ERIC, CINHAL, and NeuroSynth were searched for studies published through 06/30/2011. Significant differences in activation of brain regions between individuals with ADHD and comparisons were detected using activation likelihood estimation meta-analysis (p<0.05, corrected). Dysfunctional regions in ADHD were related to seven reference neuronal systems. We performed a set of meta-analyses focused on age groups (children; adults), clinical characteristics (history of stimulant treatment; presence of psychiatric comorbidities), and specific neuropsychological tasks (inhibition; working memory; vigilance/attention). Results Fifty-five studies were included (39 in children, 16 in adults). In children, hypoactivation in ADHD vs. comparisons was found mostly in systems involved in executive functions (frontoparietal network) and attention (ventral attentional network). Significant hyperactivation in ADHD vs. comparisons was observed predominantly within the default, ventral attention, and somatomotor networks. In adults, ADHD-related hypoactivation was predominant in the frontoparietal system, while ADHD-related hyperactivation was present in the visual, dorsal attention, and default networks. Significant ADHD-related dysfunction largely reflected task features and was detected even in the absence of comorbid mental disorders or history of stimulant treatment. Conclusions A growing literature provides evidence of ADHD-related dysfunction within multiple neuronal systems involved in higher-level cognitive functions but also in sensorimotor processes, including the visual system, and in the default network. This meta-analytic evidence extends early models of ADHD pathophysiology focused on prefrontal-striatal circuits. PMID:22983386
Kamatuka, Kenta; Hattori, Masahiro; Sugiyama, Tomoyasu
2016-12-01
RNA interference (RNAi) screening is extensively used in the field of reverse genetics. RNAi libraries constructed using random oligonucleotides have made this technology affordable. However, the new methodology requires exploration of the RNAi target gene information after screening because the RNAi library includes non-natural sequences that are not found in genes. Here, we developed a web-based tool to support RNAi screening. The system performs short hairpin RNA (shRNA) target prediction that is informed by comprehensive enquiry (SPICE). SPICE automates several tasks that are laborious but indispensable to evaluate the shRNAs obtained by RNAi screening. SPICE has four main functions: (i) sequence identification of shRNA in the input sequence (the sequence might be obtained by sequencing clones in the RNAi library), (ii) searching the target genes in the database, (iii) demonstrating biological information obtained from the database, and (iv) preparation of search result files that can be utilized in a local personal computer (PC). Using this system, we demonstrated that genes targeted by random oligonucleotide-derived shRNAs were not different from those targeted by organism-specific shRNA. The system facilitates RNAi screening, which requires sequence analysis after screening. The SPICE web application is available at http://www.spice.sugysun.org/.
SHARIT, JOSEPH; HERNÁNDEZ, MARIO A.; CZAJA, SARA J.; PIROLLI, PETER
2009-01-01
This study investigated the influences of knowledge, particularly Internet, Web browser, and search engine knowledge, as well as cognitive abilities on older adult information seeking on the Internet. The emphasis on aspects of cognition was informed by a modeling framework of search engine information-seeking behavior. Participants from two older age groups were recruited: twenty people in a younger-old group (ages 60–70) and twenty people in an older-old group (ages 71–85). Ten younger adults (ages 18–39) served as a comparison group. All participants had at least some Internet search experience. The experimental task consisted of six realistic search problems, all involving information related to health and well-being and which varied in degree of complexity. The results indicated that though necessary, Internet-related knowledge was not sufficient in explaining information-seeking performance, and suggested that a combination of both knowledge and key cognitive abilities is important for successful information seeking. In addition, the cognitive abilities that were found to be critical for task performance depended on the search problem’s complexity. Also, significant differences in task performance between the younger and the two older age groups were found on complex, but not on simple problems. Overall, the results from this study have implications for instructing older adults on Internet information seeking and for the design of Web sites. PMID:20011130
Empirical analysis of web-based user-object bipartite networks
NASA Astrophysics Data System (ADS)
Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao
2010-05-01
Understanding the structure and evolution of web-based user-object networks is a significant task since they play a crucial role in e-commerce nowadays. This letter reports the empirical analysis on two large-scale web sites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on the collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena well characterizing the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.
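The collaborative similarity index is not formally defined in this abstract; under one natural reading, it is the mean pairwise similarity of the objects a user has selected, with object similarity computed from co-selection patterns. A toy sketch on an invented bipartite adjacency matrix:

```python
import numpy as np

# Invented users x objects bipartite adjacency matrix.
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def object_sim(i, j):
    """Cosine similarity between object columns (co-selection patterns)."""
    u, v = A[:, i], A[:, j]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def collaborative_similarity(user):
    """Mean pairwise similarity of the user's selected objects."""
    objs = np.flatnonzero(A[user])
    pairs = [(i, j) for k, i in enumerate(objs) for j in objs[k + 1:]]
    return np.mean([object_sim(i, j) for i, j in pairs])

# High value = the user picks similar objects (narrow taste);
# low value = diverse taste.
print(collaborative_similarity(0))
```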
Large-area sheet task advanced dendritic web growth development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.
1984-01-01
The thermal models used for analyzing dendritic web growth and calculating the thermal stress were reexamined to establish the validity limits imposed by the assumptions of the models. Also, the effects of thermal conduction through the gas phase were evaluated and found to be small. New growth designs, both static and dynamic, were generated using the modeling results. Residual stress effects in dendritic web were examined. In the laboratory, new techniques for the control of temperature distributions in three dimensions were developed. A new maximum undeformed web width of 5.8 cm was achieved. A 58% increase in growth velocity at 150 micrometers thickness was achieved with dynamic hardware. The area throughput goals for transient growth of 30 and 35 sq cm/min were exceeded.
Briache, Abdelaali; Marrakchi, Kamar; Kerzazi, Amine; Navas-Delgado, Ismael; Rossi Hassani, Badr D; Lairini, Khalid; Aldana-Montes, José F
2012-01-25
Saccharomyces cerevisiae is recognized as a model system representing a simple eukaryote whose genome can be easily manipulated. Information solicited by scientists on its biological entities (proteins, genes, RNAs...) is scattered across several data sources such as SGD, Yeastract, CYGD-MIPS, BioGrid, and PhosphoGrid. Because of the heterogeneity of these sources, querying them separately and then manually combining the returned results is a complex and time-consuming task for biologists, most of whom are not bioinformatics experts. It also reduces and limits the use that can be made of the available data. To provide transparent and simultaneous access to yeast sources, we have developed YeastMed: an XML and mediator-based system. In this paper, we present our approach in developing this system, which takes advantage of SB-KOM to perform the query transformation needed and a set of Data Services to reach the integrated data sources. The system is composed of a set of modules that depend heavily on XML and Semantic Web technologies. User queries are expressed in terms of a domain ontology through a simple form-based web interface. YeastMed is the first mediation-based system specifically for integrating yeast data sources. It was conceived mainly to help biologists find relevant data from multiple data sources simultaneously. It has a biologist-friendly interface that is easy to use. The system is available at http://www.khaos.uma.es/yeastmed/.
NASA Astrophysics Data System (ADS)
Shulgina, T. M.; Gordova, Y. E.; Martynova, Y. V.
2014-12-01
Making education relevant to workplace tasks is a key problem of higher education in the environmental sciences. To answer this challenge, several new courses for students of the "Climatology" and "Meteorology" specialties were developed and implemented at Tomsk State University, combining theoretical knowledge from up-to-date environmental sciences with computational tasks. To organize the educational process we use the open-source course management system Moodle (www.moodle.org), which allows us to combine text and multimedia in the theoretical part of the courses. The hands-on approach is realized through innovative trainings performed within the information-computational web-GIS platform "Climate" (http://climate.scert.ru/). The platform has a set of tools and databases allowing a researcher to perform climate change analysis for a selected territory. The tools are also used for student trainings, which contain practical tasks on climate modeling and climate change assessment and analysis. Laboratory exercises cover three topics: "Analysis of regional climate changes"; "Analysis of climate extreme indices on the regional scale"; and "Analysis of future climate". They are designed to consolidate students' knowledge of the discipline, to instill the skills to work independently with large amounts of geophysical data using the modern processing and analysis tools of the web-GIS platform "Climate", and to train them to present the results of laboratory work as reports with a statement of the problem, the results of calculations, and logically justified conclusions. Thus, students are engaged in the use of modern tools of geophysical data analysis, which strengthens their professional learning. The approach helps fill the gap between education and workplace practice because it offers experience, increases student involvement, and advances the use of modern information and communication tools. Financial support for this research from the RFBR (13-05-12034, 14-05-00502), SB RAS project VIII.80.2.1 and grant of the President of RF (№ 181) is acknowledged.
Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi
2013-04-10
Sequencing of microbial genomes is important because microbes carry antibiotic and pathogenetic activities. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenetic or antibiotic genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for ongoing sequencing genomes. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembling tool, a functional annotation pipeline, and a high-performance GI-predicting module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which provides functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information including possible GIs, coding/non-coding sequences and functional analyses from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.
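GI-GPS itself is not specified in this abstract beyond being an SVM scan of genomic profiles; as a generic stand-in, the sketch below trains an SVM on two invented compositional features (say, GC content and a k-mer bias score) to separate host-like from island-like sequence windows.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic genomic-profile features per window: [GC content, k-mer bias].
# Distributions and values are illustrative, not the published model.
rng = np.random.default_rng(0)
host = rng.normal([0.50, 0.10], 0.02, size=(50, 2))    # host-like windows
island = rng.normal([0.42, 0.25], 0.02, size=(50, 2))  # GI-like windows

X = np.vstack([host, island])
y = np.array([0] * 50 + [1] * 50)   # 1 = genomic island

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.43, 0.24], [0.49, 0.11]]))  # expected: [1, 0]
```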
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practices. The advance of experiment methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to the heterogeneous problems in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and cases studies are presented to illustrate the promising capabilities of the proposed approach.
GoWeb: a semantic search engine for the life science web.
Dietze, Heiko; Schroeder, Michael
2009-10-01
Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines that will be able to answer questions. Current approaches either apply natural language processing to unstructured text or assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtIvE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving on an existing approach by nearly 20%. The third benchmark is based on 28 questions from the TREC Genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at http://www.gopubmed.org/goweb.
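The core mechanism, attaching ontology terms to keyword hits so results can be grouped and filtered, can be sketched in a few lines. This is a toy illustration, not GoWeb's implementation; the mini ontology and result format are assumptions:

```python
# Toy sketch of ontology-based faceting of search results: annotate
# result snippets by dictionary matching against ontology labels, then
# group result URLs by ontology term so users can filter large sets.
from collections import defaultdict

ONTOLOGY = {  # hypothetical mini GeneOntology fragment
    "apoptosis": "GO:0006915",
    "cell cycle": "GO:0007049",
}

def annotate(snippet: str) -> list[str]:
    return [term for term in ONTOLOGY if term in snippet.lower()]

def facet(results: list[dict]) -> dict:
    facets = defaultdict(list)
    for r in results:
        for term in annotate(r["snippet"]):
            facets[ONTOLOGY[term]].append(r["url"])
    return facets

hits = [{"url": "http://example.org/1",
         "snippet": "TP53 triggers apoptosis in response to damage"}]
print(facet(hits))  # {'GO:0006915': ['http://example.org/1']}
```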
Computer and internet use by ophthalmologists and trainees in an academic centre.
Somal, Kirandeep; Lam, Wai-Ching; Tam, Eric
2009-06-01
The purpose of this study was to determine computer, internet, and department web site use by members of the Department of Ophthalmology and Vision Sciences at the University of Toronto in Toronto, Ont. Cross-sectional analysis. Eighty-eight members of the Department of Ophthalmology and Vision Sciences responded to the survey. One hundred forty-eight department members (93 staff, 24 residents, and 31 fellows) were invited via e-mail to complete an online survey on computer and internet use. Participation was voluntary. Individuals who did not fill in an online response were sent a paper copy of the survey. No identifying fields were used in the data analysis. A response rate of 59% (88/148) was obtained. Fifty-nine percent of respondents described their computer skill as "good" or better; 86.4% used a computer in their clinical practice. Computer-related tasks included accessing e-mail (98.9%), accessing medical literature (87.5%), conducting personal affairs (83%), and accessing conference/round schedules (65.9%). The survey indicated that 89.1% of respondents accessed peer-reviewed material online, including eMedicine (60.2%) and UpToDate articles (48.9%). Thirty-three percent of department members reported never having visited the department web site. Impediments to web site use included information not being up to date (27.3%), information not of interest (22.1%), and difficulty locating the web site (20.8%). A weak linear correlation was found between younger respondent age and higher self-evaluated computer experience (r = -0.43). The majority of ophthalmologists and trainees in this academic centre use computer and internet resources for various tasks. Although use of the current department web site was low, respondents were interested in seeing improvements to the web site to increase its utility.
Beach, Scott R; Schulz, Richard; Matthews, Judith T; Courtney, Karen; Dabbs, Annette DeVito
2014-11-01
Quality of Life Technology (QoLT) treats humans and technology as mutually dependent and aware, working together to improve task performance and quality of life. This study examines preferences for technology versus human assistance and control in the context of QoLT. Data are from a nationally representative, cross-sectional web-based sample of 416 US baby boomers (aged 45-64) and 114 older adults (65+) on preferences for technology versus human assistance and control in the performance of kitchen and personal care tasks. Multinomial logistic regression and ordinary least squares regression were used to determine predictors of these preferences. Respondents were generally accepting of technology assistance but wanted to maintain control over its operation. Baby boomers were more likely than older adults to prefer technology, and those with fewer QoLT privacy concerns, and who thought they were more likely to need future help, were more likely to prefer technology over human assistance and more willing to relinquish control to technology. The results suggest the need for person- and context-aware QoLT systems that are responsive to user desires for control over operation of the technology. The predictors of these preferences suggest potentially receptive markets for the targeting of QoLT systems.
Design and implementation of space physics multi-model application integration based on web
NASA Astrophysics Data System (ADS)
Jiang, Wenping; Zou, Ziming
With the development of research on the space environment and space science, providing a networked online computing environment for space weather, space environment and space physics models for the Chinese scientific community has become more and more important in recent years. There are two software modes for a space physics multi-model application integrated system (SPMAIS): client/server (C/S) and browser/server (B/S). The traditional, stand-alone C/S mode demands that a team or workshop drawn from many disciplines and specialties build its own multi-model application integrated system, and requires the client to be deployed in each physical region from which users visit the integrated system. This requirement brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes accessing the data inconvenient. Therefore, it is necessary to create a shared network resource access environment that helps users reach the computing resources of space physics models quickly through a terminal, for conducting space science research and forecasting the space environment. The SPMAIS is developed in B/S mode on top of high-performance, first-principles computational models of the space environment, and uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS contains dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model and other models developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets that provide input data for high-speed online model computation. In this paper, the service-oriented architecture (SOA) concept, which divides a system into independent modules according to different business needs, is applied to solve the problem of physical independence among multiple models. The classic MVC (Model View Controller) software design pattern is used to build the architecture of the system, and JSP + servlet + JavaBean technology is used to integrate the web application programs of the space physics models. This solves the problem of multiple users requesting the same model computing job and effectively balances computing tasks across servers. In addition, we also completed the following tasks: establishing a standard graphical user interface based on Java Applet application programs; designing the interface between model computation and the visualization of model results; realizing three-dimensional network visualization without plug-ins; using Java3D technology to achieve three-dimensional network scene interaction; and improving the ability to interact with web pages and dynamic execution capabilities, including rendering of three-dimensional graphics and control of fonts and colors. Through the design and implementation of the Web-based SPMAIS, we provide an online computing and application runtime environment for space physics multi-models. Practical application shows that researchers benefit from our system in space physics research and engineering applications.
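Two of the server-side concerns mentioned above, de-duplicating identical model-computation requests and balancing jobs across compute servers, can be sketched independently of the JSP/servlet stack. The Python below is an illustrative sketch under assumed names (the dispatch function is a stub), not the paper's implementation:

```python
# Sketch of (1) sharing one result among identical model-computation
# requests from multiple users, and (2) least-loaded server selection.
import hashlib
import json
from concurrent.futures import Future

results: dict[str, Future] = {}          # job key -> shared Future
server_load = {"node-a": 0, "node-b": 0}

def job_key(model: str, params: dict) -> str:
    """Identical (model, params) pairs hash to the same key."""
    blob = json.dumps([model, params], sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()

def dispatch(node: str, model: str, params: dict, fut: Future) -> None:
    """Stub: a real system would submit the job to `node` and set the
    result asynchronously when the model run finishes."""
    fut.set_result(f"{model} computed on {node}")

def submit(model: str, params: dict) -> Future:
    key = job_key(model, params)
    if key in results:                   # identical request already running
        return results[key]
    node = min(server_load, key=server_load.get)   # least-loaded server
    server_load[node] += 1
    fut: Future = Future()
    results[key] = fut
    dispatch(node, model, params, fut)
    return fut

print(submit("T96", {"kp": 3}).result())
```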
A WorkFlow Engine Oriented Modeling System for Hydrologic Sciences
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2009-12-01
In recent years the use of workflow engines for carrying out modeling and data analysis tasks has gained increased attention in the science and engineering communities. Tasks like processing raw data coming from sensors and passing these raw data streams to filters for QA/QC procedures may require multiple, complicated steps that need to be repeated over and over again. A workflow sequence that carries out a number of steps of varying complexity is an ideal approach to such tasks because the sequence can be stored, called up and repeated again and again. This has several advantages: for one, it ensures repeatability of processing steps and with that provenance, an issue that is increasingly important in the science and engineering communities. It also permits handing off lengthy, time-consuming and error-prone tasks to a chain of processing actions that are carried out automatically, reducing the chance for error on the one hand and freeing up time for other tasks on the other. This paper presents the development of a workflow-engine-embedded modeling system that allows users to build up working sequences for carrying out numerical modeling tasks in the hydrologic sciences. Trident, which facilitates creating, running and sharing scientific data analysis workflows, is the central working engine of the modeling system. Current functionality of the modeling system covers digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis, stored as sequences or modules respectively. The sequences can be invoked to carry out their preset tasks in order, for example triangulating a watershed from a raw DEM, while the modules, each encapsulating a certain function, can be selected and connected through a GUI workboard to form new sequences. The modeling system is demonstrated by setting up a new sequence for simulating rainfall-runoff processes, which embeds the Penn State Integrated Hydrologic Model (PIHM) module for hydrologic simulation as a kernel, a DEM-processing sub-sequence that prepares geospatial data for PIHM, a data retrieval module that accesses time series data from online data repositories via web services or from a local database, and a post-processing data management module that stores, visualizes and analyzes model outputs.
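The sequence/module distinction can be made concrete with a small sketch. The following Python is illustrative only; the module bodies are stubs, not Trident or PIHM code. A sequence is an ordered list of modules that each transform a shared context, so a stored sequence can be replayed verbatim, which is what gives the approach its provenance guarantee:

```python
# Minimal workflow-sequence sketch: modules are callables over a shared
# context dict; a sequence runs them in order and can be re-run verbatim.
from typing import Callable

Module = Callable[[dict], dict]

def run_sequence(modules: list[Module], context: dict) -> dict:
    for step in modules:
        context = step(context)        # each step reads/extends the context
    return context

def process_dem(ctx: dict) -> dict:   # stub for DEM triangulation
    ctx["mesh"] = f"mesh from {ctx['dem']}"
    return ctx

def retrieve_forcing(ctx: dict) -> dict:  # stub for online data retrieval
    ctx["rainfall"] = [0.0, 2.1, 5.4]
    return ctx

def run_model(ctx: dict) -> dict:     # stub for the hydrologic model kernel
    ctx["runoff"] = [0.1 * r for r in ctx["rainfall"]]
    return ctx

result = run_sequence([process_dem, retrieve_forcing, run_model],
                      {"dem": "basin.tif"})
print(result["runoff"])
```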
Shahar, Yuval; Young, Ohad; Shalom, Erez; Mayaffit, Alon; Moskovitch, Robert; Hessing, Alon; Galperin, Maya
2004-01-01
We propose to present a poster (and potentially also a demonstration of the implemented system) summarizing the current state of our work on a hybrid, multiple-format representation of clinical guidelines that facilitates the conversion of guidelines from free text to a formal representation. We describe a distributed Web-based architecture (DeGeL) and a set of tools using the hybrid representation. The tools enable tasks such as guideline specification, semantic markup, search, retrieval, visualization, eligibility determination, runtime application and retrospective quality assessment. The representation includes four parallel formats: free text (one or more original sources); semi-structured text (labeled with the target guideline ontology's semantic labels); semi-formal text (which includes some control specification); and a formal, machine-executable representation. The specification, indexing, search, retrieval, and browsing tools are essentially independent of the ontology chosen for guideline representation, but editing the semi-formal and formal formats requires ontology-specific tools, which we have developed in the case of the Asbru guideline-specification language. The four formats support increasingly sophisticated computational tasks. The hybrid guidelines are stored in a Web-based library, and all tools, such as those for runtime guideline application or retrospective quality assessment, are designed to operate on all representations. We demonstrate the hybrid framework with examples from the semantic markup and search tools.
Resource-Bounded Information Acquisition and Learning
2012-05-01
candidate features arrive one at a time, and the learner's task is to select a 'best so far' set of features from streaming features. Krause et al. … on Artificial Intelligence. [31] Gatterbauer, Wolfgang. Estimating required recall for successful knowledge acquisition from the web. In Proceedings of the 15th International Conference on World Wide Web (WWW '06), New York, NY, USA, 2006, ACM, pp. 969-970. [32] Gatterbauer, Wolfgang. Rules of thumb …
ERIC Educational Resources Information Center
McCarthy-Tucker, Sherri
A study analyzed the relative effectiveness of three teaching strategies for enhancing vocabulary and reading comprehension. Sixty-eight students in three fourth-grade classrooms in a suburban southwestern public school were presented with a vocabulary lesson on weather from the reading text according to one of the following strategies: (1) basal…
Interactive Vulnerability Analysis Enhancement Results
2012-12-01
… from Java EE web-based applications to other, non-web-based Java programs. Technology developed in this effort should be generally applicable to other … Generating a rule is a 2-click process that requires no input from the user. • Task 3: Added support for non-Java EE applications. Aspect's … investigated a variety of Java-based technologies and how IAST can support them. We were successful in adding support for Scala, a popular new language, and …
The SAMCO Web-platform for resilience assessment in mountainous valleys impacted by landslide risks.
NASA Astrophysics Data System (ADS)
Grandjean, Gilles; Thomas, Loic; Bernardie, Severine
2016-04-01
The ANR-SAMCO project aims to develop a proactive resilience framework enhancing the overall resilience of societies to the impacts of mountain risks. The project elaborates methodological tools to characterize and measure ecosystem and societal resilience from an operative perspective on three representative mountain case studies. To achieve this objective, the methodology is split into several steps: (1) the definition of the potential impacts of global environmental changes (climate system, ecosystem, e.g. land use, socio-economic system) on landslide hazards; (2) the analysis of these consequences in terms of vulnerability (e.g. changes in the location and characteristics of the impacted areas and the level of their perturbation); and (3) the implementation of a methodology for quantitatively investigating and mapping indicators of mountain slope vulnerability exposed to several hazard types, and the development of a GIS-based demonstration platform available on the web. The strength and originality of the SAMCO project lie in the combination of different techniques, methodologies and models (multi-hazard assessment, risk evolution in time, vulnerability functional analysis, and governance strategies) implemented in a user-oriented web platform, currently under development. We present the first results of this development task: the architecture and functions of the web tools, and the case-study database showing the multi-hazard maps and the stakes at risk. Risk assessment over several areas of interest in Alpine and Pyrenean valleys is still in progress, but the first analyses are presented for current and future periods, for which climate change and land-use (economic, geographical and social) scenarios are taken into account. This tool, dedicated to stakeholders, should ultimately be used to evaluate the resilience of mountainous regions, since multiple scenarios can be tested and compared.
Human dynamics revealed through Web analytics
NASA Astrophysics Data System (ADS)
Gonçalves, Bruno; Ramasco, José J.
2008-08-01
The increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University’s Web site. Emory is a medium-sized university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we find that linear preferential linking, priority-based queuing, and the decay of interest for the contents of the pages are the essential ingredients to understand the way users navigate the Web.
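One of the ingredients named here, priority-based queuing, follows the spirit of Barabási's task-queue model of human dynamics, in which heavy-tailed waiting times emerge from always serving the highest-priority item. A minimal simulation sketch follows; the parameters are illustrative, not fitted to the Emory data:

```python
# Illustrative priority-queue simulation (not the paper's analysis):
# a fixed-length task list is processed by usually executing the
# highest-priority task; waiting times become heavy-tailed.
import random
from collections import Counter

def simulate(steps: int = 100_000, list_len: int = 2, p: float = 0.9):
    tasks = [(random.random(), 0) for _ in range(list_len)]  # (priority, arrival)
    waits = Counter()
    for t in range(steps):
        # with probability p execute the highest-priority task,
        # otherwise a random one
        if random.random() < p:
            i = max(range(list_len), key=lambda k: tasks[k][0])
        else:
            i = random.randrange(list_len)
        waits[t - tasks[i][1]] += 1      # waiting time of the executed task
        tasks[i] = (random.random(), t)  # replace it with a new task
    return waits

w = simulate()
print(sorted(w.items())[:5])  # short waits dominate; the tail is heavy
```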
jsPsych: a JavaScript library for creating behavioral experiments in a Web browser.
de Leeuw, Joshua R
2015-03-01
Online experiments are growing in popularity, and the increasing sophistication of Web technology has made it possible to run complex behavioral experiments online using only a Web browser. Unlike with offline laboratory experiments, however, few tools exist to aid in the development of browser-based experiments. This makes the process of creating an experiment slow and challenging, particularly for researchers who lack a Web development background. This article introduces jsPsych, a JavaScript library for the development of Web-based experiments. jsPsych formalizes a way of describing experiments that is much simpler than writing the entire experiment from scratch. jsPsych then executes these descriptions automatically, handling the flow from one task to another. The jsPsych library is open-source and designed to be expanded by the research community. The project is available online at www.jspsych.org.
ERIC Educational Resources Information Center
Chen, Julian ChengChiang; Brown, Kimberly Lynn
2012-01-01
The majority of writing tasks assigned to second language (L2) learners tend to target an abstract audience and the writing generated is not meant for real or meaningful purposes. The emergence of Web 2.0 concepts has created a potential educational environment where students have access to a widely distributed, authentic audience with a simple…
A User-centered Model for Web Site Design
Kinzie, Mable B.; Cohn, Wendy F.; Julian, Marti F.; Knaus, William A.
2002-01-01
As the Internet continues to grow as a delivery medium for health information, the design of effective Web sites becomes increasingly important. In this paper, the authors provide an overview of one effective model for Web site design, a user-centered process that includes techniques for needs assessment, goal/task analysis, user interface design, and rapid prototyping. They detail how this approach was employed to design a family health history Web site, Health Heritage.
Developing web-based data analysis tools for precision farming using R and Shiny
NASA Astrophysics Data System (ADS)
Jahanshiri, Ebrahim; Mohd Shariff, Abdul Rashid
2014-06-01
Technologies that are set to increase the productivity of agricultural practices require more and more data. At the same time, farming data is becoming increasingly cheap to collect and maintain. The bulk of the data collected by sensors and from samples needs to be analysed in an efficient and transparent manner. Web technologies have long been used to develop applications that assist farmers and managers. Until recently, however, analysing data in an online environment has not been an easy task, especially in the eyes of data analysts. This barrier is now overcome by the availability of new application programming interfaces that can provide real-time web-based data analysis. In this paper, the development of a prototype web-based application for data analysis using new facilities of the R statistical package and its web development framework, Shiny, is explored. The pros and cons of this type of data analysis environment for precision farming are enumerated and future directions in web application development for agricultural data are discussed.
NASA Astrophysics Data System (ADS)
Valentine, Andrew; Belski, Iouri; Hamilton, Margaret
2017-11-01
Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed about their study habits and reported they use electronic-based materials more than paper-based materials while studying, suggesting students may engage with web-based tools. Students then generated solutions to a problem task using either a paper-based template or an equivalent web interface. Students who used the web-based approach performed as well as students who used the paper-based approach, suggesting the technique can be successfully adopted and taught online. Web-based tools may therefore be adopted as supplementary material in a range of engineering courses as a way to increase students' options for enhancing problem-solving skills.
Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data
NASA Astrophysics Data System (ADS)
Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.
2013-12-01
Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event and requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/International Federation of Digital Seismograph Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data grid, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request and provide instantly responsive browsing, sorting, and filtering of event data and metadata in the web browser, without further reliance on the data service. The server side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
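The pattern described, queuing long-running dataset-assembly jobs with Celery instead of blocking web requests, looks roughly like the sketch below. The task and helper names (assemble_dataset, fetch_channel, bundle) are invented for illustration; this is not Wilber 3 source code:

```python
# Hedged sketch of a Celery-queued dataset-assembly job.
from celery import Celery

app = Celery("wilber_sketch", broker="redis://localhost:6379/0")

def fetch_channel(event_id: str, channel: str) -> str:
    """Stub: a real implementation would call an FDSN dataselect service."""
    return f"/tmp/{event_id}.{channel}.mseed"

def bundle(paths: list[str]) -> str:
    """Stub: archive the fetched files and return the download path."""
    return "/tmp/bundle.tar"

@app.task(bind=True, max_retries=3)
def assemble_dataset(self, event_id: str, channels: list[str]) -> str:
    """Fetch time-series data for each requested channel, then bundle."""
    try:
        paths = [fetch_channel(event_id, ch) for ch in channels]
        return bundle(paths)
    except IOError as exc:
        # transient failures are retried instead of failing the request
        raise self.retry(exc=exc, countdown=30)

# A web view would enqueue and return immediately:
# assemble_dataset.delay("event-2013-001", ["IU.ANMO.00.BHZ"])
```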
Bringing Control System User Interfaces to the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xihui; Kasemir, Kay
With the evolution of web-based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control-system independent, potentially supporting any type of control system.
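A WebSocket-based data protocol of this kind is straightforward to consume from any client. The sketch below is illustrative only; the URL, process-variable name and JSON message format are assumptions, not the actual WebPDA protocol specification:

```python
# Illustrative WebSocket subscriber for control-system data: subscribe
# to a process variable and print value updates pushed by the server.
import asyncio
import json

import websockets  # pip install websockets

async def monitor(url: str, pv: str) -> None:
    async with websockets.connect(url) as ws:
        # hypothetical subscription message; real protocols define their own
        await ws.send(json.dumps({"type": "subscribe", "pvs": [pv]}))
        async for message in ws:
            update = json.loads(message)
            print(update.get("pv"), update.get("value"))

# asyncio.run(monitor("ws://localhost:8080/pda", "sim://sine"))
```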
GEM System: automatic prototyping of cell-wide metabolic pathway models from genomes.
Arakawa, Kazuharu; Yamada, Yohei; Shinoda, Kosaku; Nakayama, Yoichi; Tomita, Masaru
2006-03-23
Successful realization of a "systems biology" approach to analyzing cells is a grand challenge for our understanding of life. However, current modeling approaches to cell simulation are labor-intensive, manual affairs, and therefore constitute a major bottleneck in the evolution of computational cell biology. We developed the Genome-based Modeling (GEM) System for the purpose of automatically prototyping simulation models of cell-wide metabolic pathways from genome sequences and other public biological information. Models generated by the GEM System include an entire Escherichia coli metabolism model comprising 968 reactions among 1195 metabolites, achieving 100% coverage when compared with the KEGG database, 92.38% with the EcoCyc database, and 95.06% with the iJR904 genome-scale model. The GEM System prototypes qualitative models to reduce the labor-intensive tasks required for systems biology research. Models of over 90 bacterial genomes are available at our web site.
Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd
2014-01-01
Healthcare information is distributed across multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language (OWL). The proposed service makes use of an algorithm that allows the transformation of several data models from different domains, deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
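One commonly used transformation rule, mapping tables to classes and foreign keys to object properties, can be sketched with rdflib. This is a minimal illustration of the general RDB-to-OWL idea, not the paper's algorithm, and the schema names are invented:

```python
# Minimal RDB-to-OWL sketch: each table becomes an owl:Class and each
# foreign key becomes an owl:ObjectProperty linking the two classes.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/hc#")

def table_to_class(g: Graph, table: str) -> None:
    g.add((EX[table], RDF.type, OWL.Class))

def fk_to_property(g: Graph, src: str, column: str, dst: str) -> None:
    prop = EX[f"{src}_{column}"]
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, EX[src]))
    g.add((prop, RDFS.range, EX[dst]))

g = Graph()
table_to_class(g, "Patient")
table_to_class(g, "Encounter")
fk_to_property(g, "Encounter", "patient_id", "Patient")
print(g.serialize(format="turtle"))
```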
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Integrated system for remotely monitoring critical physiological parameters
NASA Astrophysics Data System (ADS)
Alexakis, S.; Karalis, S.; Asvestas, P.
2015-09-01
Monitoring several human parameters (temperature, heart rate, blood pressure, etc.) is an essential task in health care, in hospitals as well as in home care. This paper presents the design and implementation of an integrated, embedded system that includes a nine-lead, two-channel electrocardiograph, a digital thermometer for measuring body temperature, and a power supply. The system provides networking capabilities (wired or wireless) and is accessible by means of a web interface that allows the user to select the leads, as well as to review the values of heart rate (beats per minute) and body temperature. Furthermore, there is the option of saving all the data to a Micro SD memory card or to a Google Spreadsheet. The necessary analog circuits for signal conditioning (amplification and filtering) were manufactured on printed circuit boards (PCBs). The system was built around the Arduino Yun, a platform that contains a microcontroller and a microprocessor running a special Linux distribution. The Arduino Yun also provides the necessary network connectivity by means of its integrated Wi-Fi and Ethernet interfaces. The web interface was developed using HTML pages with JavaScript support. The system was tested on simulated as well as real data, providing satisfactory accuracy for the measurement of heart rate (±3 bpm error) and temperature (±0.3°C error).
Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems
ERIC Educational Resources Information Center
Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul
2009-01-01
Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project
NASA Astrophysics Data System (ADS)
Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.
2011-12-01
The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with developing an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At the Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry, the LMMP Portal, by employing a number of open standards and technologies. The Portal exposes a set of services that allow users to search, visualize, subset, and download lunar data managed by the system. Users also have access to a set of tools that visualize, analyze and annotate the data. The infrastructure and Portal are based on a web service oriented architecture. We designed the system to support solar system bodies in general, including asteroids, the Earth and planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large-screen multi-touch applications with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services make use of open standards including Representational State Transfer (REST) and the Open Geospatial Consortium (OGC) Web Map Service (WMS), Web Coverage Service (WCS) and Web Feature Service (WFS). The data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT), an open-source data catalog, archive, file management and data grid framework; OpenSSO, an open-source access management and federation platform; Solr, an open-source enterprise search platform; Redmine, an open-source project collaboration and management framework; GDAL, an open-source geospatial data abstraction library; and others. The data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom-written applications or off-the-shelf applications such as Google Earth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the portal, such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet via mobile devices such as Apple's iPad.
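Because the services follow the OGC standards, any WMS client can request map imagery with the standard GetMap parameters. The sketch below uses the real WMS 1.3.0 parameter set, but the endpoint URL and layer name are placeholders, not the actual LMMP service:

```python
# Hedged sketch: fetch a map image from an OGC WMS endpoint.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "luna:lro_wac_mosaic",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "-45,-90,45,90",           # lat/lon axis order in WMS 1.3.0
    "WIDTH": "1024", "HEIGHT": "512",
    "FORMAT": "image/png",
}
resp = requests.get("https://wms.example.org/wms", params=params, timeout=30)
resp.raise_for_status()
with open("moon.png", "wb") as f:
    f.write(resp.content)
```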
An experimental test of a fundamental food web motif.
Rip, Jason M K; McCann, Kevin S; Lynn, Denis H; Fawcett, Sonia
2010-06-07
Large-scale changes to the world's ecosystems are resulting in the deterioration of biostructure, the complex web of species interactions that makes up ecological communities. A difficult yet crucial task is to identify food web structures, or food web motifs, that are the building blocks of this baroque network of interactions. Once identified, these food web motifs can be examined through experiments and theory to provide mechanistic explanations for how structure governs ecosystem stability. Here, we synthesize recent ecological research to show that generalist consumers coupling resources with different interaction strengths constitute one such motif. This motif, remarkably, occurs across an enormous range of spatial scales, and so acts to distribute coupled weak and strong interactions throughout food webs. We then perform an experiment that illustrates the importance of this motif to ecological stability. We find that weak interactions coupled to strong interactions by generalist consumers dampen strong interaction strengths and increase community stability. This study takes a critical step by isolating a common food web motif and, through clear experimental manipulation, identifying the fundamental stabilizing consequences of this structure for ecological communities.
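The stabilizing mechanism can also be explored numerically. The sketch below is a generic consumer-two-resource model with saturating (type II) functional responses, in the spirit of weak-interaction theory; the parameter values are arbitrary illustrations, not estimates from this experiment. Comparing the coefficient of variation (CV) of consumer biomass for a strong-strong versus a strong-weak pairing gives one simple temporal-variability proxy for stability:

```python
# Exploratory sketch (not the paper's microcosm experiment): a generalist
# consumer C coupling two resource pathways with attack rates a1, a2.
import numpy as np
from scipy.integrate import solve_ivp

r, K, e, h, m = 1.0, 1.0, 0.6, 0.8, 0.3  # growth, capacity, efficiency,
                                         # handling time, mortality

def model(t, x, a1, a2):
    R1, R2, C = x
    denom = 1 + h * (a1 * R1 + a2 * R2)   # shared type II saturation
    f1, f2 = a1 * R1 / denom, a2 * R2 / denom
    return [r * R1 * (1 - R1 / K) - f1 * C,
            r * R2 * (1 - R2 / K) - f2 * C,
            e * (f1 + f2) * C - m * C]

def consumer_cv(a1, a2):
    sol = solve_ivp(model, (0, 2000), [0.5, 0.5, 0.3], args=(a1, a2),
                    t_eval=np.linspace(1000, 2000, 5000))  # drop transient
    c = sol.y[2]
    return c.std() / c.mean()

print("strong-strong CV:", consumer_cv(2.5, 2.5))
print("strong-weak  CV:", consumer_cv(2.5, 0.5))
```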