Northeast Artificial Intelligence Consortium (NAIC) Review of Technical Tasks. Volume 2, Part 2.
1987-07-01
Northeast Artificial Intelligence Consortium (NAIC), Review of Technical Tasks. Northeast Artificial Intelligence Consortium, Syracuse, NY. Personal authors (see reverse) include J.F. Allen and P.B. Berra. Abstract: The Northeast Artificial Intelligence Consortium …
1989-10-01
RADC-TR-89-259, Vol XI (of twelve). Interim Report, October 1989. Northeast Artificial Intelligence Consortium Annual Report; includes Table of Contents and Executive Summary. Performing organization: Northeast Artificial Intelligence Consortium (NAIC). Monitoring organization: Rome Air Development Center.
Autonomous Mission Operations for Sensor Webs
NASA Astrophysics Data System (ADS)
Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.
2008-12-01
We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents is intended to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. Ontological conceptualizations of the agents are needed to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests.
The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.
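As a rough illustration of the SensorML/SWE-compatible structures mentioned above (this is not SWAMO code; the URIs and field names here are invented), an agent might package a power-subsystem reading as an O&M-style observation record:

```python
import json
from datetime import datetime, timezone

def make_observation(procedure_uri, observed_property, value, unit):
    """Package a sensor reading as an O&M-style observation dict.

    The structure loosely mirrors OGC Observations & Measurements:
    a procedure (the sensor), an observed property, a result, and
    a sampling time. Field names are illustrative, not normative.
    """
    return {
        "procedure": procedure_uri,
        "observedProperty": observed_property,
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "result": {"value": value, "uom": unit},
    }

obs = make_observation(
    "urn:example:midstar1:eps",           # hypothetical sensor URI
    "http://example.org/def/busVoltage",  # hypothetical property URI
    28.4, "V",
)
print(json.dumps(obs, indent=2))
```

A record in this shape can be serialized to JSON or mapped to SWE XML encodings by whatever transport the agents share.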
Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9
1989-10-01
Supports the Northeast Artificial Intelligence Consortium (NAIC). Volume 9, Parallel Vision; report submitted by Christopher M. Brown and Randal C. Nelson. Northeast Artificial Intelligence Consortium Annual Report - 1988, Parallel Vision, Syracuse University, Christopher M. Brown and Randal C. Nelson. Technical Director, Directorate of Intelligence & Reconnaissance. For the Commander: Igor G. Plonisch, Directorate of Plans & Programs.
1989-10-01
References cited include: Encontro Portugues de Inteligencia Artificial (EPIA), Oporto, Portugal, September 1985; [15] N. J. Nilsson, Principles of Artificial Intelligence, Tioga. AD-A218 154. RADC-TR-89-259, Vol II (of twelve). Interim Report, October 1989. Northeast Artificial Intelligence Consortium Annual Report. Performing organization: Northeast Artificial Intelligence Consortium (NAIC). Monitoring organization: Rome Air Development Center.
New Generation Sensor Web Enablement
Bröring, Arne; Echterhoff, Johannes; Jirka, Simon; Simonis, Ingo; Everding, Thomas; Stasch, Christoph; Liang, Steve; Lemmens, Rob
2011-01-01
Many sensor networks have been deployed to monitor Earth's environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web represents such an infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. PMID:22163760
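To make the standardized service interfaces concrete, the sketch below builds a KVP (key-value pair) GetObservation request for an OGC Sensor Observation Service 2.0 endpoint, one of the SWE building blocks. The endpoint URL and identifiers are placeholders, not a real service:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def sos_get_observation_url(endpoint, offering, observed_property):
    """Build a KVP GetObservation request for an OGC SOS 2.0 service.

    Parameter names follow the SOS 2.0 KVP binding; the endpoint and
    identifiers passed in are placeholders for illustration.
    """
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url(
    "http://example.org/sos",
    "urn:example:offering:airTemperature",
    "http://example.org/def/airTemperature",
)
print(url)
```

Because the binding is standardized, any SWE-aware client can issue the same request shape against any conformant service.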
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI), utilizing geomatics for sustainable societies. There has been a need for an automated, integrated system that categorizes events and issues information that reaches users directly. At present, no web-enabled information system exists that can disseminate messages after evaluating events in real time. This research formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geospatial data on a popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web dynamically and in real time for the smart applications that smarter cities demand. Hence, SENSDI augments existing smart cities platforms with sensor web and spatial information by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on GeoNode, QGIS and Java, which bind most of the functionalities of the Internet, the sensor web and the Internet of Things, superseding the Internet of Sensors as well. In a nutshell, the project delivers a generalized, real-time, accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
Cyber Intelligence Research Consortium (Poster)
2014-10-24
October 2014. Report type: N/A. Title: Cyber Intelligence Research Consortium (Poster). Poster topics include: communicating with nontechnical audiences; environmental context, which provides scope for the analytical effort and highlights the importance of technical and nontechnical context; reporting and feedback; macroanalysis; microanalysis; and data gathering. Steering committee: guides Consortium activities and plans for the future.
Report of the 4th Workshop for Technology Transfer for Intelligent Compaction Consortium.
DOT National Transportation Integrated Search
2016-03-01
On October 27-28, 2015, the Kentucky Transportation Cabinet (KYTC) hosted the 4th workshop for the Technology Transfer for Intelligent Compaction Consortium (TTICC), a Transportation Pooled Fund (TPF-5(233)) initiative designed to identify, s...
1988-01-01
RADC-TR-88-11, Vol IV (of eight). Interim Technical Report, June 1988. Northeast Artificial Intelligence Consortium Annual Report 1986. Performing organization: Northeast Artificial Intelligence Consortium (NAIC). Monitoring organization: Rome Air Development Center.
1990-12-01
Reference cited: knowledge and meta-reasoning, in Proceedings of EPIA-85 (Encontro Portugues de Inteligencia Artificial), pages 138-154, Oporto, Portugal, 1985. Performing organization: Northeast Artificial Intelligence Consortium. Abstract: The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and …
The Web's Unelected Government.
ERIC Educational Resources Information Center
Garfinkel, Simson L.
1998-01-01
The World Wide Web Consortium--an organization based at the Massachusetts Institute of Technology (MIT) that has 275 corporate members and holds closed meetings--is the closest thing the Web has to a central authority; however, almost nobody outside the telecommunications industry understands what the consortium is. Analyzes the role this body may…
Overview of the World Wide Web Consortium (W3C) (SIGs IA, USE).
ERIC Educational Resources Information Center
Daly, Janet
2000-01-01
Provides an overview of a planned session to describe the work of the World Wide Web Consortium, including technical specifications for HTML (Hypertext Markup Language), XML (Extensible Markup Language), CSS (Cascading Style Sheets), and over 20 other Web standards that address graphics, multimedia, privacy, metadata, and other technologies. (LRW)
Measuring emotional intelligence of medical school applicants.
Carrothers, R M; Gregory, S W; Gallagher, T J
2000-05-01
To discuss the development, pilot testing, and analysis of a 34-item semantic differential instrument for measuring medical school applicants' emotional intelligence (the EI instrument). The authors analyzed data from the 1997 admission interviews of 147 applicants to a six-year BS/MD program that is composed of three consortium universities. They compared the applicants' scores on traditional admission criteria (e.g., GPA and traditional interview assessments) with their scores on the EI instrument (which comprised five dimensions of emotional intelligence), breaking the data out by consortium university (each of which has its own educational ethos) and gender. They assessed the EI instrument's reliability and validity for assessing noncognitive personal and interpersonal qualities of medical school applicants. The five dimensions of emotional intelligence (maturity, compassion, morality, sociability, and calm disposition) indicated fair to excellent internal consistency: reliability coefficients were .66 to .95. Emotional intelligence as measured by the instrument was related to both being female and matriculating at the consortium university that has an educational ethos that values the social sciences and humanities. Based on this pilot study, the 34-item EI instrument demonstrates the ability to measure attributes that indicate desirable personal and interpersonal skills in medical school applicants.
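The abstract reports internal-consistency reliabilities of .66 to .95 but does not name the statistic; Cronbach's alpha is the usual internal-consistency coefficient for multi-item scales. A minimal sketch with invented toy scores (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of per-respondent item-score lists.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    where k is the number of items.
    """
    k = len(item_scores[0])
    items = list(zip(*item_scores))            # one tuple per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 5 respondents x 4 Likert-type items (invented)
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(scores), 3))
```

Higher alpha means the items covary strongly, i.e., they appear to measure one underlying dimension, as each EI subscale is intended to.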
Web Intelligence and Artificial Intelligence in Education
ERIC Educational Resources Information Center
Devedzic, Vladan
2004-01-01
This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…
1989-03-01
References cited include: Williams, B.C., Qualitative Analysis of MOS Circuits, Artificial Intelligence 24, 1984; Wilson, K., From Association to Structure, Amsterdam: North-Holland, 1978. AD-A208 378. RADC-TR-88-324, Vol II (of nine), Part B. Interim Report, March 1989. Northeast Artificial Intelligence Consortium Annual Report 1987. Performing organization: Northeast Artificial Intelligence Consortium.
1989-10-01
Topics include areas of expertise in artificial intelligence: distributed AI, planning, robotics, and computer vision. … by mechanical partners or advisors that customize the system's response to the idiosyncrasies of the student. This paper describes the initial …
NASA Astrophysics Data System (ADS)
Buck, Justin; Leadbetter, Adam
2015-04-01
New users for the growing volume of ocean data for purposes such as 'big data' data products and operational data assimilation/ingestion require data to be readily ingestible. This can be achieved via the application of World Wide Web Consortium (W3C) Linked Data and Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards to data management. As part of several Horizon 2020 European projects (SenseOCEAN, ODIP, AtlantOS) the British Oceanographic Data Centre (BODC) are working on combining existing data centre architecture and SWE software such as Sensor Observation Services with a Linked Data front end. The standards to enable data delivery are proven and well documented [1,2]. There are practical difficulties when SWE standards are applied to real time data because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. A pragmatic approach is proposed where sensor metadata and data output in OGC standards are implemented "shore-side", with sensors and instruments transmitting unique resolvable web linkages to persistent OGC SensorML records published at the BODC. References: 1. World Wide Web Consortium. (2013). Linked Data. Available: http://www.w3.org/standards/semanticweb/data. Last accessed 8th October 2014. 2. Open Geospatial Consortium. (2014). Sensor Web Enablement (SWE). Available: http://www.opengeospatial.org/ogc/markets-technologies/swe. Last accessed 8th October 2014.
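The "shore-side" approach above, in which instruments transmit compact resolvable links that dereference to persistent SensorML records, can be pictured with a small JSON-LD sketch. Every URI below is an invented placeholder, not a BODC identifier:

```python
import json

# Hypothetical persistent identifier for a sensor whose SensorML
# record is published at a data centre; domain and path are invented.
SENSOR_ID = "http://example.org/id/sensor/ctd-0042"

# A minimal JSON-LD description linking the sensor to its metadata
# record and an observed-property concept (vocabulary URIs invented).
sensor_doc = {
    "@context": {
        "describedBy": "http://www.iana.org/assignments/relation/describedby",
        "observes": "http://example.org/def/observes",
    },
    "@id": SENSOR_ID,
    "describedBy": SENSOR_ID + "/sensorml",
    "observes": [
        "http://vocab.example.org/def/seaTemperature",
    ],
}
print(json.dumps(sensor_doc, indent=2))
```

The instrument only ever transmits the short `@id` link; a client resolves it to retrieve the full metadata, which keeps real-time transmission costs low.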
1990-12-01
… data rate to the electronics would be much lower on the average and the data much "richer" in information. Intelligent use of … To avoid a system bottleneck, a high data rate should be provided by I/O systems. 2. Machines with intelligent storage management specially designed for logic … management information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics and propagation, and electronic reliability/maintainability and compatibility.
Business intelligence and capacity planning: web-based solutions.
James, Roger
2010-07-01
Income (activity) and expenditure (costs) form the basis of a modern hospital's 'business intelligence'. However, clinical engagement in business intelligence is patchy. This article describes the principles of business intelligence and outlines some recent developments using web-based applications.
Intelligent web agents for a 3D virtual community
NASA Astrophysics Data System (ADS)
Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar
2003-08-01
In this paper, we propose an Avatar-based intelligent agent technique for 3D Web based Virtual Communities based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World) Avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed Avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community is implemented successfully.
Web-Based Intelligent E-Learning Systems: Technologies and Applications
ERIC Educational Resources Information Center
Ma, Zongmin
2006-01-01
Collecting and presenting the latest research and development results from the leading researchers in the field of e-learning systems, Web-Based Intelligent E-Learning Systems: Technologies and Applications provides a single record of current research and practical applications in Web-based intelligent e-learning systems. This book includes major…
Provenance-Based Approaches to Semantic Web Service Discovery and Usage
ERIC Educational Resources Information Center
Narock, Thomas William
2012-01-01
The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…
ERIC Educational Resources Information Center
Institute for Educational Leadership, Washington, DC.
This report presents the proceedings of a consortium at which leading developmental neuroscientists from across the United States and Canada met at Johns Hopkins University to explore the relationship between children's health and learning and to propose policy changes. Early brain development and its relationship to intelligence, learning, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, S.C.; Woolf, B.
The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and the Office of Scientific Research. Its purpose is to conduct pertinent research in artificial intelligence and to perform activities ancillary to this research. This report describes progress that has been made in the fourth year of the existence of the NAIC on the technical research tasks undertaken at the member universities. The topics covered in general are: versatile expert system for equipment maintenance, distributed AI for communications system control, automatic photointerpretation, time-oriented problem solving, speech understanding systems, knowledge base maintenance, hardware architectures for very large systems, knowledge-based reasoning and planning, and a knowledge acquisition, assistance, and explanation system. The specific topic for this volume is the recognition of plans expressed in natural language, followed by their discussion and use.
Improving Web Accessibility in a University Setting
ERIC Educational Resources Information Center
Olive, Geoffrey C.
2010-01-01
Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…
Electronic Ramp to Success: Designing Campus Web Pages for Users with Disabilities.
ERIC Educational Resources Information Center
Coombs, Norman
2002-01-01
Discusses key issues in addressing the challenge of Web accessibility for people with disabilities, including tools for Web authoring, repairing, and accessibility validation, and relevant legal issues. Presents standards for Web accessibility, including the Section 508 Standards from the Federal Access Board, and the World Wide Web Consortium's…
Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2007-01-01
This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs both terrestrial and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.
Educational Assessment via a Web-Based Intelligent System
ERIC Educational Resources Information Center
Huang, Jingshan; He, Lei; Davidson-Shivers, Gayle V.
2011-01-01
Effective assessment is vital in educational activities. We propose IWAS (intelligent Web-based assessment system), an intelligent, generalized and real-time system to assess both learning and teaching. IWAS provides a foundation for more efficiency in instructional activities and, ultimately, students' performances. Our contributions are…
Advancing Cyber Intelligence Practices Through the SEI’s Consortium
2015-01-27
Vulnerabilities arise from individuals' roles with non-target entities (non-profits, activist groups), blogs and social media, and extracurricular activities. Cyber intelligence: information used to identify, track, and predict cyber capabilities, intentions, and activities in order to offer courses of action that enhance decision making. SEI Webinar Series, January 27, 2015. (c) 2015 Carnegie Mellon University. Offerings. Steering committee: guides Consortium activities and plans for …
ERIC Educational Resources Information Center
Ercan, Orhan; Ural, Evrim; Köse, Sinan
2017-01-01
For a sustainable world, it is very important for students to develop positive environmental attitudes and to have awareness of energy use. The study aims to investigate the effect of web assisted instruction with emotional intelligence content on 8th grade students' emotional intelligence, attitudes towards environment and energy saving, academic…
Development of an Intelligent Instruction System for Mathematical Computation
ERIC Educational Resources Information Center
Kim, Du Gyu; Lee, Jaemu
2013-01-01
In this paper, we propose the development of a web-based, intelligent instruction system to help elementary school students for mathematical computation. We concentrate on the intelligence facilities which support diagnosis and advice. The existing web-based instruction systems merely give information on whether the learners' replies are…
ERIC Educational Resources Information Center
Fast, Karl V.; Campbell, D. Grant
2001-01-01
Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…
ERIC Educational Resources Information Center
Lytras, Miltiadis, Ed.; Naeve, Ambjorn, Ed.
2005-01-01
In the context of Knowledge Society, the convergence of knowledge and learning management is a critical milestone. "Intelligent Learning Infrastructure for Knowledge Intensive Organizations: A Semantic Web Perspective" provides state-of-the art knowledge through a balanced theoretical and technological discussion. The semantic web perspective…
Computational Intelligence in Web-Based Education: A Tutorial
ERIC Educational Resources Information Center
Vasilakos, Thanos; Devedzic, Vladan; Kinshuk; Pedrycz, Witold
2004-01-01
This article discusses some important aspects of Web Intelligence (WI) in the context of educational applications. Some of the key components of WI have already attracted developers of web-based educational systems for quite some time- ontologies, adaptivity and personalization, and agents. The paper focuses on the application of Computational…
Operational Marine Data Acquisition and Delivery Powered by Web and Geospatial Standards
NASA Astrophysics Data System (ADS)
Thomas, R.; Buck, J. J. H.
2015-12-01
As novel sensor types and new platforms are deployed to monitor the global oceans, the volumes of scientific and environmental data collected in the marine context are rapidly growing. In order to use these data in both the traditional operational modes and in innovative "Big Data" applications the data must be readily understood by software agents. One approach to achieving this is the application of both World Wide Web and Open Geospatial Consortium standards: namely Linked Data [1] and Sensor Web Enablement (SWE) [2]. The British Oceanographic Data Centre (BODC) is adopting this strategy in a number of European Commission funded projects (NETMAR; SenseOCEAN; Ocean Data Interoperability Platform - ODIP; and AtlantOS) to combine its existing data archiving architecture with SWE components (such as Sensor Observation Services) and a Linked Data interface. These will evolve the data management and data transfer from a process that requires significant manual intervention to an automated operational process enabling the rapid, standards-based, ingestion and delivery of data. This poster will show the current capabilities of BODC and the status of on-going implementation of this strategy. References: 1. World Wide Web Consortium. (2013). Linked Data. Available: http://www.w3.org/standards/semanticweb/data. Last accessed 7th April 2015. 2. Open Geospatial Consortium. (2014). Sensor Web Enablement (SWE). Available: http://www.opengeospatial.org/ogc/markets-technologies/swe. Last accessed 8th October 2014.
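One piece of the standards-based delivery described above is SWE Common's text encoding, which flattens observation tuples using a token separator and a block separator. A toy encoder under that convention, with invented sample values:

```python
def encode_swe_text(records, token_sep=",", block_sep="\n"):
    """Encode sample records as a SWE Common style text block.

    SWE Common's TextEncoding flattens each tuple with a token
    separator and joins tuples with a block separator; this toy
    encoder mimics that layout for illustration only.
    """
    return block_sep.join(
        token_sep.join(str(field) for field in rec) for rec in records
    )

# Invented CTD temperature samples (time, degrees C)
samples = [
    ("2015-07-01T00:00:00Z", 14.2),
    ("2015-07-01T01:00:00Z", 14.1),
]
print(encode_swe_text(samples))
```

In a real SOS response the separators and the tuple schema are declared in accompanying metadata, so a client can parse the block without guessing.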
ELM-ART--An Interactive and Intelligent Web-Based Electronic Textbook
ERIC Educational Resources Information Center
Weber, Gerhard; Brusilovsky, Peter
2016-01-01
This paper provides a broader view of ELM-ART, one of the first Web-based Intelligent Educational systems that offered a creative combination of two different paradigms--Intelligent Tutoring and Adaptive Hypermedia technologies. The unique dual nature of ELM-ART contributed to its long life and research impact and was a result of…
Effectiveness of Web Quest in Enhancing 4th Grade Students' Spiritual Intelligence
ERIC Educational Resources Information Center
Jwaifell, Mustafa; Al-Mouhtadi, Reham; Aldarabah, Intisar
2015-01-01
Spiritual intelligence has gained great interest from researchers and scholars, yet new technologies such as WebQuest, one of the e-learning applications in education, are little used as instructional tools for enhancing the spiritual intelligence of 4th graders in Jordanian schools. This study aimed at…
Computational Intelligence and Its Impact on Future High-Performance Engineering Systems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler)
1996-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Intelligence held at the Virginia Consortium of Engineering and Science Universities, Hampton, Virginia, June 27-28, 1995. The presentations addressed activities in the areas of fuzzy logic, neural networks, and evolutionary computations. Workshop attendees represented NASA, the National Science Foundation, the Department of Energy, the National Institute of Standards and Technology (NIST), the Jet Propulsion Laboratory, industry, and academia. The workshop objectives were to assess the state of technology in the computational intelligence area and to provide guidelines for future research.
Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites
NASA Technical Reports Server (NTRS)
Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng
2007-01-01
This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.
ERIC Educational Resources Information Center
Sherman, Deborah Witt; Matzo, Marianne LaPorte; Rogers, Susan; McLaughlin, Maureen; Virani, Rose
2002-01-01
Describes one of nine modules in the End-of-Life Nursing Education Consortium Curriculum, a train-the-trainer course to prepare nurses for palliative care. Discusses teaching strategies to achieve high-quality care and includes a list of print and web resources. (SK)
ERIC Educational Resources Information Center
Manoj, T. I.; Devanathan, S.
2010-01-01
This research study reports an experiment conducted to determine the effects of the web-based inquiry science environment on cognitive outcomes in biological science in correlation with emotional intelligence. Web based inquiry science environment (WISE) provides a platform for creating inquiry-based science projects for students to work…
ERIC Educational Resources Information Center
McGlamery, Susan; Coffman, Steve
2000-01-01
Explores the possibility of using Web contact center software to offer reference assistance to remote users. Discusses a project by the Metropolitan Cooperative Library System/Santiago Library System consortium to test contact center software and to develop a virtual reference network. (Author/LRW)
Hill, W D; Davies, G; van de Lagemaat, L N; Christoforou, A; Marioni, R E; Fernandes, C P D; Liewald, D C; Croning, M D R; Payton, A; Craig, L C A; Whalley, L J; Horan, M; Ollier, W; Hansell, N K; Wright, M J; Martin, N G; Montgomery, G W; Steen, V M; Le Hellard, S; Espeseth, T; Lundervold, A J; Reinvang, I; Starr, J M; Pendleton, N; Grant, S G N; Bates, T C; Deary, I J
2014-01-01
Differences in general cognitive ability (intelligence) account for approximately half of the variation in any large battery of cognitive tests and are predictive of important life events including health. Genome-wide analyses of common single-nucleotide polymorphisms indicate that they jointly tag between a quarter and a half of the variance in intelligence. However, no single polymorphism has been reliably associated with variation in intelligence. It remains possible that these many small effects might be aggregated in networks of functionally linked genes. Here, we tested a network of 1461 genes in the postsynaptic density and associated complexes for an enriched association with intelligence. These were ascertained in 3511 individuals (the Cognitive Ageing Genetics in England and Scotland (CAGES) consortium) phenotyped for general cognitive ability, fluid cognitive ability, crystallised cognitive ability, memory and speed of processing. By analysing the results of a genome wide association study (GWAS) using Gene Set Enrichment Analysis, a significant enrichment was found for fluid cognitive ability for the proteins found in the complexes of N-methyl-D-aspartate receptor complex; P=0.002. Replication was sought in two additional cohorts (N=670 and 2062). A meta-analytic P-value of 0.003 was found when these were combined with the CAGES consortium. The results suggest that genetic variation in the macromolecular machines formed by membrane-associated guanylate kinase (MAGUK) scaffold proteins and their interaction partners contributes to variation in intelligence. PMID:24399044
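The abstract above combines cohort P-values into a meta-analytic P-value of 0.003. One standard way to combine P-values across cohorts, not necessarily the authors' exact procedure, is Stouffer's weighted-Z method; the sketch below uses the abstract's P-values and sample sizes purely as illustrative inputs, not as a re-analysis:

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values, weights=None):
    """Combine one-sided P-values with Stouffer's weighted-Z method.

    Each P is mapped to a Z-score via the inverse normal CDF, the
    Z-scores are weighted, summed, renormalised, and mapped back
    to a combined P-value.
    """
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(p_values)
    z = sum(w * nd.inv_cdf(1 - p) for w, p in zip(weights, p_values))
    z /= sqrt(sum(w * w for w in weights))
    return 1 - nd.cdf(z)

# Illustrative inputs: discovery P and hypothetical replication Ps,
# weighted by the reported cohort sizes (3511, 670, 2062).
print(round(stouffer_combined_p([0.002, 0.2, 0.1], [3511, 670, 2062]), 4))
```

Weighting by sample size gives larger cohorts proportionally more influence on the combined evidence, which is the usual convention when effect-size estimates are unavailable.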
Automated Data Quality Assurance using OGC Sensor Web Enablement Frameworks for Marine Observatories
NASA Astrophysics Data System (ADS)
Toma, Daniel; Bghiel, Ikram; del Rio, Joaquin; Hidalgo, Alberto; Carreras, Normandino; Manuel, Antoni
2014-05-01
Over the past years, environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent. Therefore, many sensor networks are increasingly deployed to monitor our environment. But due to the large number of sensor manufacturers and their accompanying protocols and data encodings, automated integration and data quality assurance of diverse sensors in observing systems is not straightforward, requiring development of data management code and tedious manual configuration. However, over the past few years it has been demonstrated that Open Geospatial Consortium (OGC) frameworks can enable web services with fully-described sensor systems, including data processing, sensor characteristics, and quality control tests and results. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The data management software which enables access to sensors, data processing and quality control tests has to be implemented, and the results have to be manually mapped to the SWE models. In this contribution, we describe a Sensor Plug & Play infrastructure for the Sensor Web combining (1) the OGC PUCK protocol, a simple standard embedded instrument protocol to store and retrieve directly from the devices the declarative description of sensor characteristics and quality control tests; (2) an automatic mechanism for data processing and quality control tests underlying the Sensor Web, the Sensor Interface Descriptor (SID) concept; and (3) a model for the declarative description of sensors which serves as a generic data management mechanism, designed as a profile and extension of OGC SWE's SensorML standard. We implement and evaluate our approach by applying it to the OBSEA Observatory and use it to demonstrate the ability to assess data quality for temperature, salinity, air pressure, and wind speed and direction observations off the coast of Garraf in north-eastern Spain.
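The declarative quality-control tests such a SensorML profile describes are typically simple, parameterized checks applied to each observation. As a minimal sketch (the flag values and limits are illustrative assumptions, loosely following common oceanographic QC conventions, not the paper's actual implementation):

```python
# Range-based quality-control test of the kind a declarative sensor
# description might specify. Flag convention assumed: 1 = pass, 4 = fail.
def qc_range_test(values, min_valid, max_valid):
    """Flag each observation as pass (1) or fail (4)."""
    return [1 if min_valid <= v <= max_valid else 4 for v in values]

# Sea-surface temperature plausibility check with hypothetical limits.
flags = qc_range_test([14.2, 15.1, 99.9, 14.8], min_valid=-2.0, max_valid=35.0)
print(flags)  # [1, 1, 4, 1]
```

Because the test is fully described by its parameters (variable, minimum, maximum, flag scheme), it can be stored on the instrument via PUCK and executed automatically by the data management layer without manual configuration.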
VHBuild.com: A Web-Based System for Managing Knowledge in Projects.
ERIC Educational Resources Information Center
Li, Heng; Tang, Sandy; Man, K. F.; Love, Peter E. D.
2002-01-01
Describes an intelligent Web-based construction project management system called VHBuild.com which integrates project management, knowledge management, and artificial intelligence technologies. Highlights include an information flow model; time-cost optimization based on genetic algorithms; rule-based drawing interpretation; and a case-based…
Intelligent Web-Based English Instruction in Middle Schools
ERIC Educational Resources Information Center
Jia, Jiyou
2015-01-01
The integration of technology into educational environments has become more prominent over the years. The combination of technology and face-to-face interaction with instructors allows for a thorough, more valuable educational experience. "Intelligent Web-Based English Instruction in Middle Schools" addresses the concerns associated with…
Acceptable Use Policies in a Web 2.0 & Mobile Era: A Guide for School Districts
ERIC Educational Resources Information Center
Consortium for School Networking (NJ1), 2011
2011-01-01
Web 2.0 applications and mobile Internet devices have added new issues to the safety/access situation for schools. The purpose of this guide is to assist school districts in developing, rethinking, or revising Internet policies as a consequence of the emergence of Web 2.0, and the growing pervasiveness of smart phone use. The Consortium for School…
Standardization efforts in IP telephony
NASA Astrophysics Data System (ADS)
Sengodan, Senthil; Bansal, Raj
1999-11-01
The recent interest in IP telephony has led to a tremendous increase of standardization activities in the area. The three main standards bodies in the area of IP telephony are the International Telecommunication Union's (ITU-T) Study Group (SG) 16, the Internet Engineering Task Force (IETF) and the European Telecommunication Standards Institute's (ETSI) TIPHON project. In addition, forums such as the International Multimedia Teleconferencing Consortium (IMTC), the Intelligent Network Forum (INF), the International Softswitch Consortium (ISC), the Electronic Computer Telephony Forum (ECTF), and the MIT's Internet Telephony Consortium (ITC) are looking into various other aspects that aim at the growth of this industry. This paper describes the main tasks (completed and in progress) undertaken by these organizations. In describing such work, an overview of the underlying technology is also provided.
Design and implementation of CUAHSI WaterML and WaterOneFlow Web Services
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Zaslavsky, I.; Whitenack, T.; Maidment, D.
2007-12-01
WaterOneFlow is a term for a group of web services created by and for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) community. CUAHSI web services facilitate the retrieval of hydrologic observations information from online data sources using the SOAP protocol. CUAHSI Water Markup Language (below referred to as WaterML) is an XML schema defining the format of messages returned by the WaterOneFlow web services.
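A WaterOneFlow client invokes the service by POSTing a SOAP envelope that names one of the service operations. The sketch below builds such an envelope for a GetValues-style call; the service namespace, operation name, and parameter names are assumptions for illustration and would in practice be taken from the actual service WSDL:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Assumed service namespace; read the real one from the service WSDL.
WOF_NS = "http://www.cuahsi.org/his/"

def build_getvalues_envelope(site_code, variable_code, start, end):
    """Build a SOAP 1.1 envelope for a GetValues-style WaterOneFlow call."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f'<soap:Body><GetValues xmlns="{WOF_NS}">'
        f'<location>{site_code}</location>'
        f'<variable>{variable_code}</variable>'
        f'<startDate>{start}</startDate>'
        f'<endDate>{end}</endDate>'
        f'</GetValues></soap:Body></soap:Envelope>'
    )

envelope = build_getvalues_envelope(
    "NWIS:10109000", "NWIS:00060", "2007-01-01", "2007-01-31")
ET.fromstring(envelope)  # raises ParseError if the envelope is malformed
```

The response would be a WaterML document, i.e. XML conforming to the schema described above, which the client can parse with the same standard tooling.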
1990-12-01
subject to resource constraints. Multistage negotiation has been developed as a means by which an agent can acquire enough additional knowledge to...complete knowledge often expands the search space without providing a compensating means for focusing the search. In a multi-agent system with each...These relationships have strengthened our abilities to conduct meaningful research and to assist the transfer of technology from the university
Collaborative Working for Large Digitisation Projects
ERIC Educational Resources Information Center
Yeates, Robin; Guy, Damon
2006-01-01
Purpose: To explore the effectiveness of large-scale consortia for disseminating local heritage via the web. To describe the creation of a large geographically based cultural heritage consortium in the South East of England and management lessons resulting from a major web site digitisation project. To encourage the improved sharing of experience…
A Leaner, Meaner Markup Language.
ERIC Educational Resources Information Center
Online & CD-ROM Review, 1997
1997-01-01
In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…
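XML's key simplification over SGML is strict well-formedness with application-defined tags: every element must be explicitly closed and properly nested, so a single generic parser can read any XML vocabulary. A minimal illustration using Python's standard library (the journal-style tag names are invented for the example):

```python
import xml.etree.ElementTree as ET

# A journal-article fragment with application-specific tags, as XML permits.
doc = """<article>
  <title>A Leaner, Meaner Markup Language</title>
  <year>1997</year>
</article>"""

root = ET.fromstring(doc)       # any conforming XML parser can read this
print(root.find("title").text)  # A Leaner, Meaner Markup Language
```

HTML fixes the tag set but is forgiving; SGML allows custom tags but needs a DTD and complex parsers; XML keeps the custom tags while demanding the strictness that makes Web delivery practical.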
ISLLC 2008: Websites to Support Mastery
ERIC Educational Resources Information Center
Follo, Eric; Klocko, Barbara A.
2009-01-01
The author presents here the most comprehensive and applicable list of web sites useful to both education leadership faculty and graduate students who are either practicing school leaders or those aspiring for the principalship. The web sites are based on the Interstate School Leaders Licensure Consortium (ISLLC) Standards recently developed by…
DOT National Transportation Integrated Search
2005-03-01
The Crash Avoidance Metrics Partnership (CAMP) Vehicle Safety Communications Consortium (VSCC) comprised of BMW, DaimlerChrysler, Ford, GM, Nissan, Toyota, and Volkswagen, in partnership with USDOT, established the Vehicle Safety Communications (VSC)...
2017-06-01
for GIFT Cloud, the web-based application version of the Generalized Intelligent Framework for Tutoring (GIFT). GIFT is a modular, open-source...external applications. GIFT is available to users with a GIFT Account at no cost. GIFT Cloud is an implementation of GIFT. This web-based application...section. Approved for public release; distribution is unlimited. Requirements for GIFT Cloud: GIFT Cloud is accessed via a web browser
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-08
... DEPARTMENT OF TRANSPORTATION ITS Joint Program Office; Intelligent Transportation Systems Program... the Intelligent Transportation Systems (ITS) Program Advisory Committee (ITSPAC). The Web conference... Transportation on all matters relating to the study, development, and implementation of intelligent...
Effects of an Intelligent Web-Based English Instruction System on Students' Academic Performance
ERIC Educational Resources Information Center
Jia, J.; Chen, Y.; Ding, Z.; Bai, Y.; Yang, B.; Li, M.; Qi, J.
2013-01-01
This research conducted quasi-experiments in four middle schools to evaluate the long-term effects of an intelligent web-based English instruction system, Computer Simulation in Educational Communication (CSIEC), on students' academic attainment. The analysis of regular examination scores and vocabulary test validates the positive impact of CSIEC,…
ERIC Educational Resources Information Center
Alam, Najma H.
2014-01-01
The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature supports the non-compliance of websites with the federal policy in general. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…
1989-03-01
resolved through global or local (associated with specific nodes) conflict resolution strategies. One solution is to order the arcs in the set...according to their specificity and to execute the first triggered arc. In this way, the most specific subsuming situation will be preferred over other...priority; another says that random questions should be discouraged. In such a case a non-conventional resolution mechanism must be used to resolve the
ERIC Educational Resources Information Center
Khatun, Nazma; Miwa, Jouji
2016-01-01
This research project aimed to develop an intelligent Bengali handwriting education system to improve the literacy level in Bangladesh. Due to socio-economic limitations, not all of the population has the chance to go to school. Here, we developed a prototype of a web-based (iPhone/smartphone or computer browser) intelligent…
An Intelligent Learning Diagnosis System for Web-Based Thematic Learning Platform
ERIC Educational Resources Information Center
Huang, Chenn-Jung; Liu, Ming-Chou; Chu, San-Shine; Cheng, Chih-Lun
2007-01-01
This work proposes an intelligent learning diagnosis system that supports a Web-based thematic learning model, which aims to cultivate learners' ability of knowledge integration by giving the learners the opportunity to select the learning topics that they are interested in, and to gain knowledge on those specific topics by surfing the Internet to…
DOT National Transportation Integrated Search
2003-04-01
Over 17 percent of all fatal crashes occur during winter weather conditions. Of those, 60 percent happen in rural areas (most on non-interstate roadways). The Federal Highway Administration (FHWA) Intelligent Transportation System (ITS) Joint Program...
Web Site Lets Students Bid for a Degree.
ERIC Educational Resources Information Center
Gose, Ben
1999-01-01
A new Web site provides a new structure for tuition discounting by allowing a student to bid the amount he/she is willing to pay, which member institutions accept or reject. The site is intended to help match students with institutional members of the consortium, usually less-well-known smaller schools. Possible difficulties for both institutions…
Web-based e-learning and virtual lab of human-artificial immune system.
Gong, Tao; Ding, Yongsheng; Xiong, Qin
2014-05-01
The human immune system is as important in keeping the body healthy as the brain is in supporting intelligence. However, traditional models of the human immune system are built on mathematical equations, which are not easy for students to understand. To help students understand immune systems, a web-based e-learning approach with a virtual lab was designed for the intelligent system control course using new intelligent educational technology. Compared with the traditional classroom-based graduate educational model, web-based e-learning with the virtual lab proved more inspiring in guiding graduate students to think independently and innovatively, according to the students. This web-based immune e-learning system with the online virtual lab has been found useful for teaching graduate students to understand immune systems more easily and to design their simulations more creatively and cooperatively. Teaching practice shows that the optimized web-based e-learning system can increase the learning effectiveness of the students.
A Collaborative Decision Environment for UAV Operations
NASA Technical Reports Server (NTRS)
D'Ortenzio, Matthew V.; Enomoto, Francis Y.; Johan, Sandra L.
2005-01-01
NASA is developing Intelligent Mission Management (IMM) technology for science missions employing long endurance unmanned aerial vehicles (UAV's). The IMM ground-based component is the Collaborative Decision Environment (CDE), a ground system that provides the Mission/Science team with situational awareness, collaboration, and decision-making tools. The CDE is used for pre-flight planning, mission monitoring, and visualization of acquired data. It integrates external data products used for planning and executing a mission, such as weather, satellite data products, and topographic maps, by leveraging established and emerging Open Geospatial Consortium (OGC) standards to acquire external data products via the Internet, and an industry standard geographic information system (GIS) toolkit for visualization. As a Science/Mission team may be geographically dispersed, the CDE is capable of providing access to remote users across wide area networks using Web Services technology. A prototype CDE is being developed for an instrument checkout flight on a manned aircraft in the fall of 2005, in preparation for a full deployment in support of the US Forest Service and NASA Ames Western States Fire Mission in 2006.
Koscielny, Gautier; Yaikhom, Gagarine; Iyer, Vivek; Meehan, Terrence F.; Morgan, Hugh; Atienza-Herrero, Julian; Blake, Andrew; Chen, Chao-Kung; Easty, Richard; Di Fenza, Armida; Fiegel, Tanja; Griffiths, Mark; Horne, Alan; Karp, Natasha A.; Kurbatova, Natalja; Mason, Jeremy C.; Matthews, Peter; Oakley, Darren J.; Qazi, Asfand; Regnart, Jack; Retha, Ahmad; Santos, Luis A.; Sneddon, Duncan J.; Warren, Jonathan; Westerberg, Henrik; Wilson, Robert J.; Melvin, David G.; Smedley, Damian; Brown, Steve D. M.; Flicek, Paul; Skarnes, William C.; Mallon, Ann-Marie; Parkinson, Helen
2014-01-01
The International Mouse Phenotyping Consortium (IMPC) web portal (http://www.mousephenotype.org) provides the biomedical community with a unified point of access to mutant mice and rich collection of related emerging and existing mouse phenotype data. IMPC mouse clinics worldwide follow rigorous highly structured and standardized protocols for the experimentation, collection and dissemination of data. Dedicated ‘data wranglers’ work with each phenotyping center to collate data and perform quality control of data. An automated statistical analysis pipeline has been developed to identify knockout strains with a significant change in the phenotype parameters. Annotation with biomedical ontologies allows biologists and clinicians to easily find mouse strains with phenotypic traits relevant to their research. Data integration with other resources will provide insights into mammalian gene function and human disease. As phenotype data become available for every gene in the mouse, the IMPC web portal will become an invaluable tool for researchers studying the genetic contributions of genes to human diseases. PMID:24194600
Migrating Department of Defense (DoD) Web Service Based Applications to Mobile Computing Platforms
2012-03-01
World Wide Web Consortium (W3C) Geolocation API to identify the device's location and then center the map on the device. Finally, we modify the entry...
1988-06-01
extraction nets. TerrainMaps: Tools for physical and pseudo-physical molding and growing of features on terrain and thematic maps. ...1) the student seems confused, and 2) the test for the wrong-answer threshold is met. Recognizing a confused student is admittedly a subjective and imprecise...you know that GRADE in line 9 is a control variable? Student: Yes. Tutor: OK. What is the value of GRADE at any time during loop execution? Student
The New Challenges for E-learning: The Educational Semantic Web
ERIC Educational Resources Information Center
Aroyo, Lora; Dicheva, Darina
2004-01-01
The big question for many researchers in the area of educational systems now is: what is the next step in the evolution of e-learning? Are we finally moving from scattered intelligence to a coherent space of collaborative intelligence? How close are we to the vision of the Educational Semantic Web, and what do we need to do to realize it?…
E-Learning 3.0 = E-Learning 2.0 + Web 3.0?
ERIC Educational Resources Information Center
Hussain, Fehmida
2012-01-01
Web 3.0, termed the semantic web or the web of data, is the transformed version of Web 2.0, with technologies and functionalities such as intelligent collaborative filtering, cloud computing, big data, linked data, openness, interoperability and smart mobility. If Web 2.0 is about social networking and mass collaboration between the creator and…
The Relationship between Music Instruction and Academic Achievement in Mathematics
ERIC Educational Resources Information Center
Sharpe, Nechelle Nipper
2013-01-01
The purpose of this study was to investigate the relationship between music instruction and mathematics achievement scores for 6th grade students at an Atlanta public school. Guided by Gardner's multiple intelligences model, neurological research, and National Consortium of Arts Education research, this study used a quasi-experimental…
Exploring NASA GES DISC Data with Interoperable Services
NASA Technical Reports Server (NTRS)
Zhao, Peisheng; Yang, Wenli; Hegde, Mahabal; Wei, Jennifer C.; Kempler, Steven; Pham, Long; Teng, William; Savtchenko, Andrey
2015-01-01
Overview of NASA GES DISC (NASA Goddard Earth Science Data and Information Services Center) data with interoperable services. Open-standard, interoperable services improve data discoverability, accessibility, and usability through metadata, catalogue, and portal standards, and achieve data, information, and knowledge sharing across applications with standardized interfaces and protocols. The Open Geospatial Consortium (OGC) data services and specifications used include the Web Coverage Service (WCS) for data, the Web Map Service (WMS) for pictures of data, the Web Map Tile Service (WMTS) for pictures of data tiles, and Styled Layer Descriptors (SLD) for rendered styles.
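These OGC services are typically invoked as key-value-pair HTTP requests. A sketch of composing a WMS GetMap request (the endpoint and layer name are hypothetical, not actual GES DISC identifiers):

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height):
    """Compose a WMS 1.3.0 GetMap request in key-value-pair encoding."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        # WMS 1.3.0 uses lat,lon axis order for EPSG:4326.
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = wms_getmap_url("https://example.gov/wms", "TotalColumnWaterVapor",
                     (-90, -180, 90, 180), 512, 256)
```

A WCS GetCoverage request follows the same pattern but returns the underlying data values rather than a rendered picture, which is the distinction the list above draws.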
OpenFIRE - A Web GIS Service for Distributing the Finnish Reflection Experiment Datasets
NASA Astrophysics Data System (ADS)
Väkevä, Sakari; Aalto, Aleksi; Heinonen, Aku; Heikkinen, Pekka; Korja, Annakaisa
2017-04-01
The Finnish Reflection Experiment (FIRE) is a land-based deep seismic reflection survey conducted between 2001 and 2003 by a research consortium of the Universities of Helsinki and Oulu, the Geological Survey of Finland, and the Russian state-owned enterprise SpetsGeofysika. The dataset consists of 2100 kilometers of high-resolution profiles across the Archaean and Proterozoic nuclei of the Fennoscandian Shield. Although FIRE data have been available on request since 2009, the data have remained underused outside the original research consortium. The original FIRE data have been quality-controlled: the shot gathers have been cross-checked and a comprehensive errata has been created. The brute stacks provided by the Russian seismic contractor have been reprocessed into seismic sections and replotted. A complete documentation of the intermediate processing steps is provided together with guidelines for setting up a computing environment and plotting the data. An open access web service, "OpenFIRE", for the visualization and downloading of FIRE data has been created. The service includes a mobile-responsive map application capable of enriching seismic sections with data from other sources, such as open data from the National Land Survey and the Geological Survey of Finland. The AVAA team of the Finnish Open Science and Research Initiative has provided a tailored Liferay portal with the necessary web components, such as an API (Application Programming Interface) for download requests. INSPIRE (Infrastructure for Spatial Information in Europe) compliant discovery metadata have been produced, and geospatial data will be exposed as Open Geospatial Consortium standard services. The technical guidelines of the European Plate Observing System have been followed, and the service could be considered a reference application for sharing reflection seismic data. The OpenFIRE web service is available at www.seismo.helsinki.fi/openfire.
ERIC Educational Resources Information Center
Bartelet, Dimona; Ghysels, Joris; Groot, Wim; Haelermans, Carla; van den Brink, Henriëtte Maassen
2016-01-01
This article examines an educational experiment with a unique combination of 3 elements: homework, the use of information and communication technology and a large degree of freedom of choice (student autonomy). More particularly, we study the effectiveness of a web-based intelligent tutoring system (ITS) that a school offers to its students as…
ERIC Educational Resources Information Center
Wadmany, Rivka; Zeichner, Orit; Melamed, Orly
2014-01-01
Students in a teacher training college in Israel have developed and taught curricula on the intelligent use of the Web. The educational programs were based on activities thematically related to the world of digital citizenship, such as the rights of the child and the Internet, identity theft, copyrights, freedom of expression and its limitations,…
IAServ: an intelligent home care web services platform in a cloud for aging-in-place.
Su, Chuan-Jun; Chiang, Chang-Yu
2013-11-12
As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, must be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Rule-based statistical data mining agents for an e-commerce application
NASA Astrophysics Data System (ADS)
Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar
2003-03-01
Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.
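The paper's hybrid system is not specified in detail in the abstract; as an illustrative sketch of the general idea of fusing a popularity statistic with personal-preference and price rules (all names, weights, and data are invented):

```python
def recommend(catalog, purchase_counts, preferred_genres, budget):
    """Rank affordable titles by a popularity statistic plus a preference rule."""
    def score(item):
        popularity = purchase_counts.get(item["title"], 0)          # statistical signal
        pref_bonus = 5 if item["genre"] in preferred_genres else 0  # personal-preference rule
        return popularity + pref_bonus
    affordable = [i for i in catalog if i["price"] <= budget]       # hard price rule
    return [i["title"] for i in sorted(affordable, key=score, reverse=True)]

catalog = [
    {"title": "Movie A", "genre": "sci-fi", "price": 12.0},
    {"title": "Movie B", "genre": "drama", "price": 9.0},
    {"title": "Movie C", "genre": "sci-fi", "price": 25.0},  # filtered out: over budget
]
print(recommend(catalog, {"Movie B": 7, "Movie A": 3}, {"sci-fi"}, budget=15.0))
# ['Movie A', 'Movie B']
```

The design point such systems illustrate is the separation of concerns: hard rules (price) prune the candidate set, while soft statistical and preference signals only reorder it.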
Domain-specific Web Service Discovery with Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Caverlee, J; Liu, L
2005-02-14
This paper presents DynaBot, a domain-specific web service discovery system. The core idea of the DynaBot service discovery system is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems, like the several available UDDI registries, DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling with the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.
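A service class description can be thought of as a declarative template that crawled candidates are matched against. A toy sketch of such filtering (the field names and the BLAST service class shown are invented for illustration, not DynaBot's actual description language):

```python
def matches_service_class(service, service_class):
    """Accept a crawled candidate only if it offers every required operation
    and its description mentions at least one domain keyword."""
    ops_ok = service_class["required_ops"] <= service["ops"]
    kw_ok = any(k in service["description"].lower()
                for k in service_class["keywords"])
    return ops_ok and kw_ok

# Hypothetical domain-specific service class for sequence-search services.
blast_class = {"required_ops": {"submit_query", "fetch_result"},
               "keywords": ["blast", "sequence alignment"]}

candidates = [
    {"name": "SeqAlignSvc", "ops": {"submit_query", "fetch_result"},
     "description": "BLAST sequence alignment over HTTP"},
    {"name": "WeatherSvc", "ops": {"get_forecast"},
     "description": "7-day forecasts"},
]
hits = [s["name"] for s in candidates if matches_service_class(s, blast_class)]
print(hits)  # ['SeqAlignSvc']
```

A real system would match against richer structure (parameter types, probe queries, sample responses), but the principle is the same: the domain template, not a central registry, decides relevance.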
Proposal for a Web Encoding Service (WES) for Spatial Data Transaction
NASA Astrophysics Data System (ADS)
Siew, C. B.; Peters, S.; Rahman, A. A.
2015-10-01
Web services utilization in Spatial Data Infrastructure (SDI) has been well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of using City Geographic Markup Language (CityGML) for 3D city modelling applications leads to the need for large spatial data handling for data delivery. This paper revisits the available web services in OGC Web Services (OWS) and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). The integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.
FUDAOWANG: A Web-Based Intelligent Tutoring System Implementing Advanced Education Concepts
ERIC Educational Resources Information Center
Xu, Wei; Zhao, Ke; Li, Yatao; Yi, Zhenzhen
2012-01-01
Determining how to provide good tutoring functions is an important research direction of intelligent tutoring systems. In this study, the authors develop an intelligent tutoring system with good tutoring functions, called "FUDAOWANG." The research domain that FUDAOWANG treats is junior middle school mathematics, which belongs to the objective…
Mutual Intelligibility between Closely Related Languages in Europe
ERIC Educational Resources Information Center
Gooskens, Charlotte; van Heuven, Vincent J.; Golubovic, Jelena; Schüppert, Anja; Swarte, Femke; Voigt, Stefanie
2018-01-01
By means of a large-scale web-based investigation, we established the degree of mutual intelligibility of 16 closely related spoken languages within the Germanic, Slavic and Romance language families in Europe. We first present the results of a selection of 1833 listeners representing the mutual intelligibility between young, educated Europeans…
ERIC Educational Resources Information Center
Haney, Walt
Three do-it-yourself intelligence test handbooks, five mini-textbooks, and five consumer protection guides are reviewed. Each type of publication reflects different social ideologies and communicates favorable, cautiously neutral, or critical messages, respectively, about testing. Psychologists Jules Leopold, Martin Lutterjohan, and Victor…
Using the Web for Competitive Intelligence (CI) Gathering
NASA Technical Reports Server (NTRS)
Rocker, JoAnne; Roncaglia, George
2002-01-01
Businesses use the Internet to communicate company information and engage their customers. As the use of the Web for business transactions and advertising grows, so too does the amount of useful information for practitioners of competitive intelligence (CI). CI is the legal and ethical practice of gathering information about competitors and the marketplace. Information sources such as company web pages, online newspapers and news organizations, electronic journal articles and reports, and Internet search engines allow CI practitioners to analyze company strengths and weaknesses for their customers. More company and marketplace information than ever is available on the Internet, and much of it is free. Companies should view the Web not only as a business tool but also as a source of competitive intelligence. In a highly competitive marketplace, can any organization afford to ignore information about the other players and customers in that same marketplace?
Using Sensor Web Processes and Protocols to Assimilate Satellite Data into a Forecast Model
NASA Technical Reports Server (NTRS)
Goodman, H. Michael; Conover, Helen; Zavodsky, Bradley; Maskey, Manil; Jedlovec, Gary; Regner, Kathryn; Li, Xiang; Lu, Jessica; Botts, Mike; Berthiau, Gregoire
2008-01-01
The goal of the Sensor Management Applied Research Technologies (SMART) On-Demand Modeling project is to develop and demonstrate the readiness of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) capabilities to integrate both space-based Earth observations and forecast model output into new data acquisition and assimilation strategies. The project is developing sensor web-enabled processing plans to assimilate Atmospheric Infrared Sounder (AIRS) satellite temperature and moisture retrievals into a regional Weather Research and Forecasting (WRF) model over the southeastern United States.
An Intelligent Case-Based Help Desk Providing Web-Based Support for EOSDIS Customers
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.; Thurman, David A.
1998-01-01
This paper describes a project that extends the concept of help desk automation by offering World Wide Web access to a case-based help desk. It explores the use of case-based reasoning and cognitive engineering models to create an 'intelligent' help desk system, one that learns. It discusses the AutoHelp architecture for such a help desk and summarizes the technologies used to create a help desk for NASA data users.
Hacking Your Ride: Is Web 2.0 Creating Vulnerabilities To Surface Transportation
2016-09-01
…SMSN's vulnerabilities, and uncovers terrorists' malign use of social media for intelligence gathering. Academic researchers have already discovered threats in social navigation platforms such as Waze and…
Developmental Process Model for the Java Intelligent Tutoring System
ERIC Educational Resources Information Center
Sykes, Edward
2007-01-01
The Java Intelligent Tutoring System (JITS) was designed and developed to support the growing trend of Java programming around the world. JITS is an advanced web-based personalized tutoring system that is unique in several ways. Most programming Intelligent Tutoring Systems require the teacher to author problems with corresponding solutions. JITS,…
EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services
ERIC Educational Resources Information Center
Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing
2011-01-01
The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…
Smart Aerospace eCommerce: Using Intelligent Agents in a NASA Mission Services Ordering Application
NASA Technical Reports Server (NTRS)
Moleski, Walt; Luczak, Ed; Morris, Kim; Clayton, Bill; Scherf, Patricia; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
This paper describes how intelligent agent technology was successfully prototyped and then deployed in a smart eCommerce application for NASA. An intelligent software agent called the Intelligent Service Validation Agent (ISVA) was added to an existing web-based ordering application to validate complex orders for spacecraft mission services. This integration of intelligent agent technology with conventional web technology satisfies an immediate NASA need to reduce manual order processing costs. The ISVA agent checks orders for completeness, consistency, and correctness, and notifies users of detected problems. ISVA uses NASA business rules and a knowledge base of NASA services, and is implemented using the Java Expert System Shell (Jess), a fast rule-based inference engine. The paper discusses the design of the agent and knowledge base, and the prototyping and deployment approach. It also discusses future directions, other applications, and lessons learned that may help other projects make their aerospace eCommerce applications smarter.
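The completeness/consistency/correctness checks described above can be illustrated with a minimal sketch. ISVA itself is implemented as Jess rules over NASA's actual business rules; the Python below is only an illustration of the pattern, and every field name, service name, and rule is hypothetical.

```python
# Minimal sketch of rule-based order validation in the spirit of ISVA.
# Field names and rules are hypothetical, not NASA's actual business rules.

REQUIRED_FIELDS = {"mission", "service", "start_time", "end_time"}

def validate_order(order):
    """Return a list of problem messages; an empty list means the order passes."""
    problems = []
    # Completeness: every required field must be present.
    missing = REQUIRED_FIELDS - order.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    # Consistency: the requested service window must be well ordered.
    if {"start_time", "end_time"} <= order.keys() and order["start_time"] >= order["end_time"]:
        problems.append("start_time must precede end_time")
    # Correctness: the requested service must exist in the knowledge base.
    known_services = {"tracking", "telemetry", "command"}  # stand-in knowledge base
    if order.get("service") not in known_services:
        problems.append(f"unknown service: {order.get('service')!r}")
    return problems

print(validate_order({"mission": "X", "service": "telemetry",
                      "start_time": 1, "end_time": 2}))  # []
```

A rule engine such as Jess expresses each of these checks as a declarative rule that fires on matching facts, which keeps the business rules editable without touching application code.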
Geographic Resources on the Web: Bringing the World to Your Classroom.
ERIC Educational Resources Information Center
Green, Tim
2001-01-01
Presents an annotated bibliography of Web sites that can be useful for geography classroom teachers and of interest to students. Includes Web sites for the United States Geological Survey, the Central Intelligence Agency, University of Wisconsin-Stevens Point, and GlobeXplorer. (CMK)
Korner, Eli J; Oinonen, Michael J; Browne, Robert C
2003-02-01
The University HealthSystem Consortium (UHC) represents a strategic alliance of 169 academic health centers and associated institutions engaged in knowledge sharing and idea generation. The use of the Internet as a tool in the delivery of UHC's products and services has increased dramatically over the past year and will continue to increase for the foreseeable future. This paper examines the current state of UHC member-institution-driven tools and services that use the Web as a fundamental component of their delivery. The evolution of knowledge management at UHC, its management information and reporting tools, and the expansion of e-commerce provide real-world examples of Internet use in health care delivery and management. Health care workers are using these Web-based tools to help manage rising costs and optimize patient outcomes. Policy, technical, and organizational issues must be resolved to facilitate rapid adoption of Internet applications.
Lau, Adela S M
2011-11-11
Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson's correlation analysis and t tests were used for data analysis. Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P < .05). Behavioral intention was positively correlated with attitude (r = .72, P < .05), perceived behavioral control (r = .58, P < .05), and subjective norm (r = .45, P < .05). In their decomposed constructs, perceived usefulness (r = .70, P < .05), relative advantage (r = .64, P < .05), and compatibility (r = .60, P < .05) were positively correlated with attitude, but perceived ease of use was not significantly correlated (r = .004) with it. Peer (r = .47, P < .05), senior management (r = .24, P < .05), and hospital (r = .45, P < .05) influences had positive correlations with subjective norm. Resource (r = .41, P < .05) and technological (r = .69, P < .05) conditions were positively correlated with perceived behavioral control.
The identified behavioral perceptions may further health policy makers' understanding of nurses' concerns regarding and barriers to the adoption of Web 2.0 tools and enable them to better plan the strategy of implementation of Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence.
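The study's correlation analysis rests on Pearson's product-moment coefficient r. A plain implementation is sketched below; the data are toy values for illustration, not the study's survey responses.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data gives r = 1.0; a reversed ordering gives r = -1.0.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0
print(round(pearson_r([1, 2, 3], [3, 2, 1]), 4))        # -1.0
```

In practice a significance test (as in the study's reported P values) accompanies r; libraries such as SciPy's `scipy.stats.pearsonr` return both the coefficient and its P value.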
Autonomous Sensorweb Operations for Integrated Space, In-Situ Monitoring of Volcanic Activity
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Doubleday, Joshua; Kedar, Sharon; Davies, Ashley G.; Lahusen, Richard; Song, Wenzhan; Shirazi, Behrooz; Mandl, Daniel; Frye, Stuart
2010-01-01
We have deployed and demonstrated operations of an integrated space and in-situ sensorweb for monitoring volcanic activity. This sensorweb includes a network of ground sensors deployed to the Mount Saint Helens volcano as well as the Earth Observing One spacecraft. The ground operations and space operations are interlinked in that ground-based intelligent event detections can cause the space segment to acquire additional data via observation requests, and space-based data acquisitions (thermal imagery) can trigger reconfigurations of the ground network to allocate increased bandwidth to the areas of the network best situated to observe the activity. The space-based operations are enabled by an automated mission planning and tasking capability which utilizes several Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards that enable acquiring data, alerts, and tasking using web services. The ground-based segment supports similar protocols to enable seamless tasking and data delivery. The space-based segment also supports onboard development of data products (thermal summary images indicating areas of activity, quicklook context images, and thermal activity alerts). These onboard-developed products have a reduced data volume (compared to the complete images), which enables them to be transmitted to the ground more rapidly in engineering channels.
Automated Knowledge Generation with Persistent Surveillance Video
2008-03-26
Excerpts: Section 2.1 covers the background of Artificial Intelligence, including formal logic and the reasoning engines that will be applied to generate knowledge from data; Section 2.2 discusses background on knowledge generation from persistent video. In the Background chapter, we discuss the background of Artificial Intelligence, the Semantic Web, and image…
Development and Evaluation of Intelligent Agent-Based Teaching Assistant in e-Learning Portals
ERIC Educational Resources Information Center
Rouhani, Saeed; Mirhosseini, Seyed Vahid
2015-01-01
Today, several educational portals have been established by organizations to enhance web-based e-learning. The use of intelligent agents is necessary to improve system quality and to overcome limitations such as the lack of face-to-face interaction. In this research, after identifying two main approaches in this field, namely the fundamental use of intelligent agents in systems design…
Interoperability And Value Added To Earth Observation Data
NASA Astrophysics Data System (ADS)
Gasperi, J.
2012-04-01
Geospatial web services technology has provided a new means for geospatial data interoperability. Open Geospatial Consortium (OGC) services such as Web Map Service (WMS) to request maps on the Internet, Web Feature Service (WFS) to exchange vectors or Catalog Service for the Web (CSW) to search for geospatialized data have been widely adopted in the Geosciences community in general and in the remote sensing community in particular. These services make Earth Observation data available to a wider range of public users than ever before. The mapshup web client offers an innovative and efficient user interface that takes advantage of the power of interoperability. This presentation will demonstrate how mapshup can be effectively used in the context of natural disasters management.
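OGC services of the kind listed above are typically invoked via key-value-pair (KVP) HTTP requests. The sketch below builds a WMS 1.3.0 GetMap request URL; the endpoint and layer name are invented for illustration.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512):
    """Build an OGC WMS 1.3.0 GetMap request URL (KVP encoding)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        # Note: in WMS 1.3.0 with EPSG:4326, the axis order is lat,lon.
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and layer, purely for illustration.
url = wms_getmap_url("https://example.org/wms", "EO:landsat", (40, -75, 41, -74))
print(url)
```

A WFS GetFeature or CSW GetRecords request follows the same KVP pattern with different SERVICE/REQUEST parameters, which is what makes these services easy to compose in clients like mapshup.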
Bioinformatics data distribution and integration via Web Services and XML.
Li, Xiao; Zhang, Yizheng
2003-11-01
It is widely recognized that the exchange, distribution, and integration of biological data are key to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
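The core of the XML-based exchange idea is that structured records can be produced by one system and parsed by another without a shared database. A minimal sketch with a hypothetical record format (real bioinformatics exchange schemas are far richer):

```python
import xml.etree.ElementTree as ET

# Hypothetical sequence-record format, for illustration only.
doc = """<sequences>
  <sequence id="seq1" organism="E. coli">ATGGCCATTG</sequence>
  <sequence id="seq2" organism="H. sapiens">ATGCGTAAAG</sequence>
</sequences>"""

root = ET.fromstring(doc)
# Integrate the exchanged records into a native data structure.
records = {s.get("id"): {"organism": s.get("organism"), "seq": s.text}
           for s in root.iter("sequence")}
print(records["seq1"]["organism"])  # E. coli
```

In a Web Services setting, the same XML payload would travel inside a SOAP envelope or be returned by an HTTP endpoint, with an XML Schema defining the contract between producer and consumer.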
Sensor Webs with a Service-Oriented Architecture for On-demand Science Products
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Ungar, Stephen; Ames, Troy; Justice, Chris; Frye, Stuart; Chien, Steve; Tran, Daniel; Cappelaere, Patrice; Derezinsfi, Linda; Paules, Granville;
2007-01-01
This paper describes the work being managed by the NASA Goddard Space Flight Center (GSFC) Information System Division (ISD) under a NASA Earth Science Technology Office (ESTO) Advanced Information System Technology (AIST) grant to develop a modular sensor web architecture that enables discovery of sensors and workflows that can create customized science products via a high-level service-oriented architecture based on Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) web service standards. These capabilities serve as a prototype of a user-centric architecture for the Global Earth Observing System of Systems (GEOSS). This work builds on and extends previous sensor web efforts conducted at NASA/GSFC using the Earth Observing 1 (EO-1) satellite and other low-earth-orbiting satellites.
HTML5: a new standard for the Web.
Hoy, Matthew B
2011-01-01
HTML5 is the newest revision of the HTML standard developed by the World Wide Web Consortium (W3C). This new standard adds several exciting new features and capabilities to HTML. This article briefly discusses the history of HTML standards, explores the changes in the new HTML5 standard, and considers its implications for information professionals. A list of HTML5 resources and examples is also provided.
Intelligent Web-Based Learning System with Personalized Learning Path Guidance
ERIC Educational Resources Information Center
Chen, C. M.
2008-01-01
Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…
The Potential Transformative Impact of Web 2.0 Technology on the Intelligence Community
2008-12-01
wikis, mashups and folksonomies. As the web is considered a platform, web 2.0 lacks concrete boundaries; instead, it possesses a gravitational… engagement and marketing. Folksonomy: the practice and method of collaboratively creating and managing tags to annotate and categorize content.
Triggers and monitoring in intelligent personal health record.
Luo, Gang
2012-10-01
Although Web-based personal health records (PHRs) have been widely deployed, existing ones have limited intelligence. Previously, we introduced expert system technology and Web search technology into the PHR domain and proposed the concept of an intelligent PHR (iPHR). iPHR provides personalized healthcare information to facilitate users' activities of daily living. The current iPHR is passive and follows the pull model of information distribution. This paper introduces triggers and monitoring into iPHR to make it active. Our idea is to let medical professionals pre-compile triggers and store them in iPHR's knowledge base. Each trigger corresponds to an abnormal event that may have potential medical impact. iPHR continually collects, processes, and analyzes the user's medical data from various sources such as wearable sensors. Whenever an abnormal event is detected in the user's medical data, the corresponding trigger fires and the related personalized healthcare information is pushed to the user using natural language generation technology, expert system technology, and Web search technology.
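The trigger-and-push pattern described above can be sketched in a few lines: pre-compiled triggers fire when an abnormal event appears in a stream of readings. The trigger names, thresholds, and messages below are invented for illustration, not clinical guidance or the paper's actual knowledge base.

```python
# Sketch of the push-model trigger idea: each trigger pairs a predicate over a
# reading with the message pushed when it fires. All values are hypothetical.

TRIGGERS = [
    ("tachycardia", lambda r: r["heart_rate"] > 100,
     "Resting heart rate above 100 bpm; consider contacting your physician."),
    ("hypothermia", lambda r: r["temp_c"] < 35.0,
     "Body temperature below 35 C; seek warmth and medical advice."),
]

def monitor(readings):
    """Yield (trigger name, message) for every reading that fires a trigger."""
    for r in readings:
        for name, predicate, message in TRIGGERS:
            if predicate(r):
                yield name, message

stream = [{"heart_rate": 72, "temp_c": 36.6},
          {"heart_rate": 118, "temp_c": 36.4}]
print(list(monitor(stream)))  # fires only the tachycardia trigger
```

In the paper's design, the pushed message would additionally be rendered through natural language generation and enriched with Web search results rather than being a fixed string.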
NASA Technical Reports Server (NTRS)
Pellionisz, Andras J.; Jorgensen, Charles C.; Werbos, Paul J.
1992-01-01
A key question is how civilian government agencies, together with an industrial consortium, can successfully complement the thus far primarily defense-oriented neural network research. Such a synthesis suggests civilian artificial neural system projects, such as artificial cerebellar neurocontrollers aimed at duplicating nature's existing neural network solutions for adaptive sensorimotor coordination. The cerebellum provides an intelligent interface between higher, possibly symbolic, levels of human intelligence and the repetitious demands of real-world conventional controllers. Generating such intelligent interfaces could be crucial to the economic feasibility of the human settlement of space and to improvements in telerobotics techniques that permit the cost-effective exploitation of nonterrestrial materials and planetary exploration and monitoring. The authors propose a scientific framework within which such interagency activities could cooperate effectively.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
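One of the sensor-design goals above is reducing fixed pattern noise (FPN). The abstract does not describe the FPN sensor's actual technique, so the sketch below shows only the textbook baseline it improves on: estimating each pixel's static offset from averaged dark frames and subtracting it. All values are toy data.

```python
# Baseline fixed-pattern-noise correction by dark-frame subtraction.
# This is a generic illustration, not the MI-3 FPN sensor's method.

def average_frames(frames):
    """Per-pixel mean of several dark frames taken with the shutter closed."""
    n = len(frames)
    return [[sum(f[i][j] for f in frames) / n
             for j in range(len(frames[0][0]))]
            for i in range(len(frames[0]))]

def subtract_dark(image, dark):
    """Remove the static per-pixel offset (the fixed pattern) from an image."""
    return [[max(image[i][j] - dark[i][j], 0)
             for j in range(len(image[0]))]
            for i in range(len(image))]

darks = [[[5, 6], [7, 8]], [[5, 6], [7, 8]]]  # two identical toy dark frames
img = [[105, 106], [107, 108]]
print(subtract_dark(img, average_frames(darks)))  # [[100.0, 100.0], [100.0, 100.0]]
```

On-pixel approaches such as the OPIC sensor aim to perform this kind of correction (and more) inside the pixel electronics rather than in post-processing.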
Matsu: An Elastic Cloud Connected to a SensorWeb for Disaster Response
NASA Technical Reports Server (NTRS)
Mandl, Daniel
2011-01-01
This slide presentation reviews the use of cloud computing combined with the SensorWeb in aiding disaster response planning. It includes an overview of the SensorWeb architecture, phase 1 of the EO-1 system, and the steps to transform it into an on-demand product cloud as part of the Open Cloud Consortium (OCC). The system's effectiveness was demonstrated during the 2010 Namibia flood: by blending information from MODIS, TRMM, and river gauge data with the Google Earth view of Namibia, the SensorWeb enabled river surge predictions and could support planning for future disaster responses.
ERIC Educational Resources Information Center
Kreie, Jennifer; Hashemi, Shohreh
2012-01-01
Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…
The EMBRACE web service collection
Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; Partners, INB-; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert
2010-01-01
The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862
Fifty Years of Silent Service: A Peek Inside the CIA Library.
ERIC Educational Resources Information Center
Newlen, Robert R.
1998-01-01
Describes the CIA (Central Intelligence Agency) library. Highlights include security measures, a day in the life of two CIA librarians, sample reference questions, collection development, the Historical Intelligence Collection, the CIA Web site, and library modernization. (JAK)
ERIC Educational Resources Information Center
Huang, Yueh-Min; Liu, Chien-Hung
2009-01-01
One of the key challenges in the promotion of web-based learning is the development of effective collaborative learning environments. We posit that the structuration process strongly influences the effectiveness of technology used in web-based collaborative learning activities. In this paper, we propose an ant swarm collaborative learning (ASCL)…
Towards an Intelligent Possibilistic Web Information Retrieval Using Multiagent System
ERIC Educational Resources Information Center
Elayeb, Bilel; Evrard, Fabrice; Zaghdoud, Montaceur; Ahmed, Mohamed Ben
2009-01-01
Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…
Interactive analysis of geodata based intelligence
NASA Astrophysics Data System (ADS)
Wagner, Boris; Eck, Ralf; Unmüessig, Gabriel; Peinsipp-Byma, Elisabeth
2016-05-01
When a spatiotemporal event happens, multi-source intelligence data is gathered to understand the problem, and strategies for solving it are investigated. The main difficulty lies in handling spatial and temporal intelligence data. The map can serve as the bridge that visualizes the data and gives all stakeholders the most understandable model. For the analysis of geodata-based intelligence data, a software working environment was developed that combines geodata with optimized ergonomics, essentially facilitating interaction with the common operational picture (COP). The composition of the COP is based on geodata services, which are normalized by international standards of the Open Geospatial Consortium (OGC). The basic geodata are combined with intelligence data from images (IMINT) and humans (HUMINT), stored in a NATO Coalition Shared Data server (CSD). These intelligence data can be combined with further information sources, i.e., live sensors. As a result, a COP is generated and an interaction suitable for the specific workspace is added. This allows users to work interactively with the COP, i.e., searching with an onboard CSD client for suitable intelligence data and integrating it into the COP. Furthermore, users can enrich the scenario with findings from the data of interactive live sensors and add data from other sources. This allows intelligence services to contribute effectively to the processes by which military and disaster management operations are organized.
Intelligent web image retrieval system
NASA Astrophysics Data System (ADS)
Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook
2001-07-01
Recently, web sites such as e-business and shopping mall sites have come to deal with a lot of image information. To find a specific image among these sources, we usually use web search engines or image database engines, which rely on keyword-only retrieval or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of users' preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, experimental results on recall and precision are also presented.
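Color-based indexing of the kind mentioned above is commonly built on coarse color histograms compared by histogram intersection. The sketch below illustrates that general technique (the paper's own indexing scheme is not specified in the abstract; the bin count here is an arbitrary choice).

```python
# Generic color-histogram indexing: quantize pixels into coarse RGB bins and
# compare images by histogram intersection. Illustrative, not the paper's scheme.

def color_histogram(pixels, bins_per_channel=4):
    """Normalized histogram over a coarse RGB color cube."""
    step = 256 // bins_per_channel
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    total = len(pixels)
    return {k: v / total for k, v in hist.items()}

def intersection(h1, h2):
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    return sum(min(h1.get(k, 0), h2.get(k, 0)) for k in set(h1) | set(h2))

red = [(250, 10, 10)] * 4   # toy "image": four red pixels
blue = [(10, 10, 250)] * 4  # toy "image": four blue pixels
print(intersection(color_histogram(red), color_histogram(red)))   # 1.0
print(intersection(color_histogram(red), color_histogram(blue)))  # 0
```

An index would store one such histogram per image, letting a color-block query be answered by ranking stored histograms against the query histogram.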
ERIC Educational Resources Information Center
Scharf, Davida
2002-01-01
Discussion of improving accessibility to copyrighted electronic content focuses on the Digital Object Identifier (DOI) and the Open URL standard and linking software. Highlights include work of the World Wide Web consortium; URI (Uniform Resource Identifier); URL (Uniform Resource Locator); URN (Uniform Resource Name); OCLC's (Online Computer…
ERIC Educational Resources Information Center
Lewis, John D.
1998-01-01
Describes XML (extensible markup language), a new language classification submitted to the World Wide Web Consortium that is defined in terms of both SGML (Standard Generalized Markup Language) and HTML (Hypertext Markup Language), specifically designed for the Internet. Limitations of PDF (Portable Document Format) files for electronic journals…
A Multi-Agent System for Intelligent Online Education.
ERIC Educational Resources Information Center
O'Riordan, Colm; Griffith, Josephine
1999-01-01
Describes the system architecture of an intelligent Web-based education system that includes user modeling agents, information filtering agents for automatic information gathering, and the multi-agent interaction. Discusses information management; user interaction; support for collaborative peer-peer learning; implementation; testing; and future…
ERIC Educational Resources Information Center
Sargeant, Hope
2000-01-01
The parent of an extremely intelligent child discusses what it is like to live with a child who exhibits a different web of cognition, perception, intuition, and mental processing; the necessity of educational acceleration for learning to achieve and develop self-esteem; and the importance of challenging material in learning the satisfaction of…
The Society of Brains: How Alan Turing and Marvin Minsky Were Both Right
NASA Astrophysics Data System (ADS)
Struzik, Zbigniew R.
2015-04-01
In his well-known prediction, Alan Turing stated that computer intelligence would surpass human intelligence by the year 2000. Although the Turing Test, as it became known, was devised to be played by one human against one computer, this is not a fair setup. Every human is a part of a social network, and a fairer comparison would be a contest between one human at the console and a network of computers behind the console. Around the year 2000, the number of web pages on the WWW overtook the number of neurons in the human brain. But these websites would be of little use without the ability to search for knowledge. By the year 2000 Google Inc. had become the search engine of choice, and the WWW became an intelligent entity. This was not without good reason. The basis for the search engine was the analysis of the ’network of knowledge’. The PageRank algorithm, linking information on the web according to the hierarchy of ‘link popularity’, continues to provide the basis for all of Google's web search tools. While PageRank was developed by Larry Page and Sergey Brin in 1996 as part of a research project about a new kind of search engine, PageRank is in its essence the key to representing and using static knowledge in an emergent intelligent system. Here I argue that Alan Turing was right, as hybrid human-computer internet machines have already surpassed our individual intelligence - this was done around the year 2000 by the Internet - the socially-minded, human-computer hybrid Homo computabilis-socialis. Ironically, the Internet's intelligence also emerged to a large extent from ‘exploiting’ humans - the key to the emergence of machine intelligence has been discussed by Marvin Minsky in his work on the foundations of intelligence through interacting agents’ knowledge. 
As a consequence, a decade and a half into the 21st century, we appear to be much better equipped to tackle the problem of the social origins of humanity - in particular thanks to the power of the intelligent partner-in-the-quest machine. However, we should not wait too long...
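The 'link popularity' ranking that PageRank computes can be sketched as a power iteration over the link graph. The following is a minimal illustration of the idea, not Google's production algorithm; the three-page toy graph is invented for the example.

```python
# Minimal PageRank via power iteration (a sketch of the 'link popularity'
# idea, not Google's production algorithm).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A page linked to by many others accumulates the most rank.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
```

Here page "b" ends up ranked highest because it receives links from both "a" and "c"; the ranks always sum to one.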
A Framework for the Systematic Collection of Open Source Intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pouchard, Line Catherine; Trien, Joseph P; Dobson, Jonathan D
2009-01-01
Following legislative directions, the Intelligence Community has been mandated to make greater use of Open Source Intelligence (OSINT). Efforts are underway to increase the use of OSINT, but there are many obstacles. One of these obstacles is the lack of tools helping to manage the volume of available data and ascertain its credibility. We propose a unique system for selecting, collecting and storing Open Source data from the Web and the Open Source Center. Some data management tasks are automated, document source is retained, and metadata containing geographical coordinates are added to the documents. Analysts are thus empowered to search, view, store, and analyze Web data within a single tool. We present ORCAT I and ORCAT II, two implementations of the system.
Development of intelligent semantic search system for rubber research data in Thailand
NASA Astrophysics Data System (ADS)
Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut
2017-10-01
Thailand's rubber production has grown not only because of strong demand from the world market but also because of the Thai Government's replanting program from 1961 onwards. With the continuous growth of rubber research data volume on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analysis of domain knowledge, 2) ontology development, and 3) development of an intelligent semantic search system to curate research data in trusted digital repositories so that they may be shared among the wider Thai rubber research community.
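As a sketch of how an ontology can add a degree of semantic analysis to keyword search, the toy example below expands query terms with narrower terms before matching documents. The ontology entries and documents are invented placeholders, not the rubber ontology developed in the paper.

```python
# Toy sketch: expand a search query with narrower terms from a small
# hand-built ontology before matching documents. The rubber-domain terms
# here are illustrative placeholders only.
ONTOLOGY = {  # concept -> narrower terms / synonyms
    "rubber": ["latex", "hevea", "natural rubber"],
    "disease": ["leaf blight", "root rot"],
}

def expand(query_terms):
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(ONTOLOGY.get(term, []))
    return expanded

def search(documents, query_terms):
    terms = expand(query_terms)
    # Rank documents by how many expanded terms they mention.
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = ["Hevea clones resistant to leaf blight",
        "Synthetic polymer pricing"]
results = search(docs, ["rubber", "disease"])
```

A plain keyword match for "rubber" would miss the first document; the ontology expansion finds it via "hevea" and "leaf blight".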
Northeast Artificial Intelligence Consortium (NAIC) Review of Technical Tasks. Volume 2, Part 1.
1987-07-01
Recoverable contents: 6.2 Transformation Invariant Attributes for Digitized Object Outlines; 6.3 Design of an Inference Engine; 7 Speech Understanding Research (Rochester Institute of Technology); a versatile maintenance expert system (ES) for troubleshooting digital circuits; and a discussion of diagnosis systems such as MYCIN for medical diagnosis and CRIB.
An Object-Oriented Architecture for a Web-Based CAI System.
ERIC Educational Resources Information Center
Nakabayashi, Kiyoshi; Hoshide, Takahide; Seshimo, Hitoshi; Fukuhara, Yoshimi
This paper describes the design and implementation of an object-oriented World Wide Web-based CAI (Computer-Assisted Instruction) system. The goal of the design is to provide a flexible CAI/ITS (Intelligent Tutoring System) framework with full extendibility and reusability, as well as to exploit Web-based software technologies such as JAVA, ASP (a…
An Immune Agent for Web-Based AI Course
ERIC Educational Resources Information Center
Gong, Tao; Cai, Zixing
2006-01-01
To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…
Microsoft or Google Web 2.0 Tools for Course Management
ERIC Educational Resources Information Center
Rienzo, Thomas; Han, Bernard
2009-01-01
While Web 2.0 has no universal definition, it always refers to online interactions in which user groups both provide and receive content with the aim of collective intelligence. Since 2005, online software has provided Web 2.0 collaboration technologies, for little or no charge, that were formerly available only to wealthy organizations. Academic…
Multi-Agent Framework for Virtual Learning Spaces.
ERIC Educational Resources Information Center
Sheremetov, Leonid; Nunez, Gustavo
1999-01-01
Discussion of computer-supported collaborative learning, distributed artificial intelligence, and intelligent tutoring systems focuses on the concept of agents, and describes a virtual learning environment that has a multi-agent system. Describes a model of interactions in collaborative learning and discusses agents for Web-based virtual…
Competitive Intelligence on the Internet-Going for the Gold.
ERIC Educational Resources Information Center
Kassler, Helene
2000-01-01
Discussion of competitive intelligence (CI) focuses on recent Web sites and several search techniques that provide valuable CI information. Highlights include links that display business relationships; information from vendors; general business sites; search engine strategies; local business newspapers; job postings; patent and trademark…
ERIC Educational Resources Information Center
Pappas, Marjorie L.
2003-01-01
Virtual library? Electronic library? Digital library? Online information network? These all apply to the growing number of Web-based resource collections managed by consortiums of state library entities. Some, like "INFOhio" and "KYVL" ("Kentucky Virtual Library"), have been available for a few years, but others are just starting. Searching for…
SABB, F. W.; BURGGREN, A. C.; HIGIER, R. G.; FOX, J.; HE, J.; PARKER, D. S.; POLDRACK, R. A.; CHU, W.; CANNON, T. D.; FREIMER, N. B.; BILDER, R. M.
2009-01-01
Refining phenotypes for the study of neuropsychiatric disorders is of paramount importance in neuroscience. Poor phenotype definition provides the greatest obstacle for making progress in disorders like schizophrenia, bipolar disorder, Attention Deficit/Hyperactivity Disorder (ADHD), and autism. Using freely available informatics tools developed by the Consortium for Neuropsychiatric Phenomics (CNP), we provide a framework for defining and refining latent constructs used in neuroscience research and then apply this strategy to review known genetic contributions to memory and intelligence in healthy individuals. This approach can help us begin to build multi-level phenotype models that express the interactions between constructs necessary to understand complex neuropsychiatric diseases. PMID:19450667
A novel AIDS/HIV intelligent medical consulting system based on expert systems.
Ebrahimi, Alireza Pour; Toloui Ashlaghi, Abbas; Mahdavy Rad, Maryam
2013-01-01
The purpose of this paper is to propose a novel intelligent model for AIDS/HIV data based on an expert system and to use it to develop an intelligent medical consulting system for AIDS/HIV. In this descriptive research, 752 frequently asked questions (FAQs) about AIDS/HIV were gathered from numerous websites about this disease. To perform the data mining and extract the intelligent model, the 6 stages of the CRISP method were completed for the FAQs: business understanding, data understanding, data preparation, modelling, evaluation, and deployment. The C5.0 tree classification algorithm was used for modelling. Also, the rational unified process (RUP) was used to develop the web-based medical consulting software; its stages are inception, elaboration, construction, and transition. The intelligent model was used in the infrastructure of the software: based on the client's inquiry and keywords, related FAQs are displayed to the client according to their rank, and FAQs' ranks are adjusted gradually as clients read them. Based on the displayed FAQs, test and entertainment links are also displayed. The accuracy of the AIDS/HIV intelligent web-based medical consulting system is estimated to be 78.76%. AIDS/HIV medical consulting systems have been developed using an intelligent infrastructure. Being equipped with an intelligent model, providing consulting services on systematic textual data, and providing side services based on the client's activities make the implemented system unique. The research has been approved by the Iranian Ministry of Health and Medical Education as practical.
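C5.0, like its predecessor C4.5, grows its classification tree by preferring splits that reduce entropy (C5.0 actually optimizes the gain-ratio refinement of this measure). A minimal sketch of the split criterion on invented toy labels; this is only the measure, not the full C5.0 algorithm.

```python
# Sketch of the entropy-reduction (information gain) criterion that
# decision-tree learners in the C4.5/C5.0 family use to choose splits.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Entropy reduction when `labels` are split into `partitions`."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(labels) - remainder

labels = ["faq", "faq", "test", "test"]
# A perfect split separates the classes entirely, so the gain equals
# the starting entropy of the label set.
gain = information_gain(labels, [["faq", "faq"], ["test", "test"]])
```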
Standards-based sensor interoperability and networking SensorWeb: an overview
NASA Astrophysics Data System (ADS)
Bolling, Sam
2012-06-01
The warfighter lacks a unified Intelligence, Surveillance, and Reconnaissance (ISR) environment in which to conduct mission planning, command and control (C2), tasking, collection, exploitation, processing, and data discovery of disparate sensor data across the ISR Enterprise. Legacy sensors and applications are not standardized or integrated for assured, universal access. Existing tasking and collection capabilities are not unified across the enterprise, inhibiting robust C2 of ISR, including near-real-time cross-cueing operations. To address these critical needs, the National Measurement and Signature Intelligence (MASINT) Office (NMO) and partnering Combatant Commands and Intelligence Agencies are developing SensorWeb, an architecture that harmonizes heterogeneous sensor data to a common standard so that users can discover, access, observe, subscribe to, and task sensors. The SensorWeb initiative's long-term goal is to establish an open, commercial-standards-based, service-oriented framework to facilitate plug-and-play sensors. The current development effort will produce non-proprietary deliverables, intended as a Government off-the-Shelf (GOTS) solution to address the U.S. and Coalition nations' inability to quickly and reliably detect, identify, map, track, and fully understand security threats and operational activities.
Emotional intelligence education in pre-registration nursing programmes: an integrative review.
Foster, Kim; McCloughen, Andrea; Delgado, Cynthia; Kefalas, Claudia; Harkness, Emily
2015-03-01
To investigate the state of knowledge on emotional intelligence (EI) education in pre-registration nursing programmes. Integrative literature review. CINAHL, Medline, Scopus, ERIC, and Web of Knowledge electronic databases were searched for abstracts published in English between 1992-2014. Data extraction and constant comparative analysis of 17 articles. Three categories were identified: constructs of emotional intelligence; emotional intelligence curricula components; and strategies for emotional intelligence education. A wide range of emotional intelligence constructs were found, with a predominance of trait-based constructs. A variety of strategies to enhance students' emotional intelligence skills were identified, but few curricular components and frameworks were reported in the literature. An ability-based model for curricula and learning and teaching approaches is recommended. Copyright © 2014. Published by Elsevier Ltd.
Research and Conceptualization of Ontologies in Intelligent Learning Systems
ERIC Educational Resources Information Center
Deliyska, Boryana; Manoilov, Peter
2010-01-01
Intelligent learning systems provide direct customized instruction to learners, without the intervention of human tutors, on the basis of Semantic Web resources. Ontologies play a principal role as instruments for modeling learning processes, learners, learning disciplines, and resources. This paper examines the variety, relationships, and…
Hill, W.D.; Davies, G.; Liewald, D.C.; Payton, A.; McNeil, C.J.; Whalley, L.J.; Horan, M.; Ollier, W.; Starr, J.M.; Pendleton, N.; Hansel, N.K.; Montgomery, G.W.; Medland, S.E.; Martin, N.G.; Wright, M.J.; Bates, T.C.; Deary, I.J.
2016-01-01
Two themes are emerging regarding the molecular genetic aetiology of intelligence. The first is that intelligence is influenced by many variants and those that are tagged by common single nucleotide polymorphisms account for around 30% of the phenotypic variation. The second, in line with other polygenic traits such as height and schizophrenia, is that these variants are not randomly distributed across the genome but cluster in genes that work together. Less clear is whether the very low range of cognitive ability (intellectual disability) is simply one end of the normal distribution describing individual differences in cognitive ability across a population. Here, we examined 40 genes with a known association with non-syndromic autosomal recessive intellectual disability (NS-ARID) to determine if they are enriched for common variants associated with the normal range of intelligence differences. The current study used the 3511 individuals of the Cognitive Ageing Genetics in England and Scotland (CAGES) consortium. In addition, a text mining analysis was used to identify gene sets biologically related to the NS-ARID set. Gene-based tests indicated that genes implicated in NS-ARID were not significantly enriched for quantitative trait loci (QTL) associated with intelligence. These findings suggest that genes in which mutations can have a large and deleterious effect on intelligence are not associated with variation across the range of intelligence differences. PMID:26912939
Sharik 1.0: User Needs and System Requirements for a Web-Based Tool to Support Collaborative Sensemaking
Ghajar-Khosravi, Shadi
2016-05-01
...share new intelligence items with their peers. In this report, the authors describe Sharik (SHAring Resources, Information, and Knowledge), a Web tool that facilitates...
Boulos, Maged N Kamel; Honda, Kiyoshi
2006-01-01
Open Source Web GIS software systems have reached a stage of maturity, sophistication, robustness and stability, and usability and user friendliness rivalling that of commercial, proprietary GIS and Web GIS server products. The Open Source Web GIS community is also actively embracing OGC (Open Geospatial Consortium) standards, including WMS (Web Map Service). WMS enables the creation of Web maps that have layers coming from multiple different remote servers/sources. In this article we present one easy to implement Web GIS server solution that is based on the Open Source University of Minnesota (UMN) MapServer. By following the accompanying step-by-step tutorial instructions, interested readers running mainstream Microsoft® Windows machines and with no prior technical experience in Web GIS or Internet map servers will be able to publish their own health maps on the Web and add to those maps additional layers retrieved from remote WMS servers. The 'digital Asia' and 2004 Indian Ocean tsunami experiences in using free Open Source Web GIS software are also briefly described. PMID:16420699
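A WMS layer like those described is fetched with a standard OGC GetMap request. Below is a sketch of composing such a request for a hypothetical MapServer endpoint; the host and layer names are placeholders, not a real server.

```python
# Sketch: composing an OGC WMS 1.1.1 GetMap request URL like the ones a
# MapServer-backed health-GIS site would answer. Host and layer names
# are placeholders.
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, size, fmt="image/png", srs="EPSG:4326"):
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": ",".join(layers), "STYLES": "",
        "SRS": srs, "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.org/cgi-bin/mapserv",
                 ["health_districts", "clinics"],
                 (-180, -90, 180, 90), (800, 400))
```

Because WMS layers can come from different remote servers, a client can overlay the response of this request with layers fetched from any other WMS endpoint.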
Intelligent Information Fusion in the Aviation Domain: A Semantic-Web based Approach
NASA Technical Reports Server (NTRS)
Ashish, Naveen; Goforth, Andre
2005-01-01
Information fusion from multiple sources is a critical requirement for System Wide Information Management in the National Airspace (NAS). NASA and the FAA envision creating an "integrated pool" of information originally coming from different sources, which users, intelligent agents and NAS decision support tools can tap into. In this paper we present the results of our initial investigations into the requirements and prototype development of such an integrated information pool for the NAS. We have attempted to ascertain key requirements for such an integrated pool based on a survey of DSS tools that will benefit from this integrated pool. We then advocate key technologies from computer science research areas such as the semantic web, information integration, and intelligent agents that we believe are well suited to achieving the envisioned system wide information management capabilities.
1989-03-01
Recoverable content: an automated photointerpretation testbed comprising a knowledge base, inference engine, and image database; a discussion of Markov random field (MRF) theory, which provides a powerful alternative texture model and has prompted intensive research in MRF model-based texture analysis; and recommendations that additional, perhaps more powerful, features be incorporated into the image segmentation procedure and into object detection.
1989-10-01
...weight based on how powerful the corresponding feature is for object recognition and discrimination. For example, consider an arbitrary weight, denoted... The quality of the segmentation depends on how powerful the features and spatial constraints in the knowledge base are (as far as object recognition is concerned)... Features that are powerful for object recognition and discrimination are, at this point, selected heuristically through trial and error.
1990-12-01
Recoverable contents: 14.2 Improvements to Research Environment; 14.3 Overview of Research; 14.3.1 An Experimental Study of... efficient inference methods. The critical issue we have studied is the effectiveness of retrieval: how well the system does at locating objects that are judged relevant by the user. Designing effective retrieval strategies is difficult because in real environments the query...
106-17 Telemetry Standards Metadata Configuration Chapter 23
2017-07-01
23.2 Metadata Description Language... Acronyms: HTML, Hypertext Markup Language; MDL, Metadata Description Language; PCM, pulse code modulation; TMATS, Telemetry Attributes Transfer Standard; W3C, World Wide Web Consortium; XML, eXtensible Markup Language; XSD, XML schema document.
A Kansas Integrated Commercialization Information Network (KICIN).
ERIC Educational Resources Information Center
Ambler, C.; And Others
A consortium of Kansas economic development service providers is building a web of virtual satellite offices that will demonstrate the delivery of economic development services in all areas of Kansas. These "offices" will use the Internet and a novel information delivery system to reach small and medium-sized businesses and individuals…
XML Schema Languages: Beyond DTD.
ERIC Educational Resources Information Center
Ioannides, Demetrios
2000-01-01
Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)
An Introduction to the Resource Description Framework.
ERIC Educational Resources Information Center
Miller, Eric
1998-01-01
Explains the Resource Description Framework (RDF), an infrastructure developed under the World Wide Web Consortium that enables the encoding, exchange, and reuse of structured metadata. It is an application of Extensible Markup Language (XML), which is a subset of Standard Generalized Markup Language (SGML), and helps with expressing semantics.…
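RDF's core model reduces metadata to (subject, predicate, object) triples. Below is a minimal in-memory sketch of that model with wildcard pattern matching; real RDF stacks add URIs, serializations such as RDF/XML, and schema layers, and the sample triples are invented.

```python
# Minimal sketch of the RDF triple model: statements are
# (subject, predicate, object) tuples, queried by pattern
# (None acts as a wildcard).
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("doc1", "dc:creator", "Miller, Eric")
store.add("doc1", "dc:title", "An Introduction to RDF")
creators = store.match(p="dc:creator")
```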
Consortial IT Services: Collaborating To Reduce the Pain.
ERIC Educational Resources Information Center
Klonoski, Ed
The Connecticut Distance Learning Consortium (CTDLC) provides its 32 members with Information Technologies (IT) services including a portal Web site, course management software, course hosting and development, faculty training, a help desk, online assessment, and a student financial aid database. These services are supplied to two- and four-year…
2004-03-01
12. The International Engineering Consortium, Web Forum Tutorials, OFDM for Mobile Data Communication, http://www.iec.org/, last accessed December 2003. 13. Klaus Witrisal, "Orthogonal Frequency Division Multiplexing (OFDM) for..." http://ieeexplore.ieee.org, last accessed 26 February 2003.
ERIC Educational Resources Information Center
Hollister, James; Richie, Sam; Weeks, Arthur
2010-01-01
This study investigated the various methods involved in creating an intelligent tutor for the University of Central Florida Web Applets (UCF Web Applets), an online environment where students can perform and/or practice experiments. After conducting research into various methods, two major models emerged. These models include: 1) solving the…
New Perspectives on Intelligence Collection and Processing
2016-06-01
...research that has gained attention in recent years with applications in areas such as web advertising, classification, and decision making. In this thesis, we develop a...
A Dynamic Recommender System for Improved Web Usage Mining and CRM Using Swarm Intelligence.
Alphy, Anna; Prabakaran, S
2015-01-01
In modern days, to enrich e-business, the websites are personalized for each user by understanding their interests and behavior. The main challenges of online usage data are information overload and their dynamic nature. In this paper, to address these issues, a WebBluegillRecom-annealing dynamic recommender system that uses web usage mining techniques in tandem with software agents developed for providing dynamic recommendations to users that can be used for customizing a website is proposed. The proposed WebBluegillRecom-annealing dynamic recommender uses swarm intelligence from the foraging behavior of a bluegill fish. It overcomes the information overload by handling dynamic behaviors of users. Our dynamic recommender system was compared against traditional collaborative filtering systems. The results show that the proposed system has higher precision, coverage, F1 measure, and scalability than the traditional collaborative filtering systems. Moreover, the recommendations given by our system overcome the overspecialization problem by including variety in recommendations.
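The traditional collaborative filtering baseline that the proposed system is compared against can be sketched as user-based filtering with cosine similarity. This illustrates the baseline only, not the WebBluegillRecom-annealing algorithm itself; the ratings data are invented.

```python
# User-based collaborative filtering with cosine similarity: the
# traditional baseline, not the swarm-intelligence recommender proposed
# in the paper.
from math import sqrt

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = (sqrt(sum(x * x for x in u.values()))
           * sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(ratings, user, k=1):
    """Suggest items the most similar user rated that `user` has not."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    return sorted(set(ratings[nearest]) - set(ratings[user]))[:k]

ratings = {
    "alice": {"pageA": 5, "pageB": 3},
    "bob":   {"pageA": 4, "pageB": 2, "pageC": 5},
    "carol": {"pageD": 5},
}
suggestion = recommend(ratings, "alice")
```

Alice's ratings are closest to Bob's, so the page only Bob has visited is suggested; a static baseline like this is exactly what struggles with the dynamic user behavior the paper targets.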
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer s kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S.; Triebel, Dagmar
2017-01-01
With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers which is now rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open source provider software scripts, best practices documentations and recommendations for RDF metadata elements facilitating harmonized access to collection information in web portals. Database URL: http://cetaf.org/cetaf-stable-identifiers PMID:28365724
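The redirection mechanism behind such HTTP-URI stable identifiers can be sketched as content negotiation: one persistent URI redirects (HTTP 303) to a human-readable or machine-readable representation according to the client's Accept header. The URLs and identifier below are placeholders, not CETAF's actual resolver.

```python
# Sketch of stable-identifier resolution with content negotiation: the
# same persistent URI 303-redirects to HTML for browsers or RDF for
# machines. URLs and the specimen identifier are placeholders.
def resolve(specimen_id, accept_header):
    representations = {
        "text/html": f"https://example.org/specimens/{specimen_id}.html",
        "application/rdf+xml": f"https://example.org/specimens/{specimen_id}.rdf",
    }
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()  # drop q-values
        if media_type in representations:
            return 303, representations[media_type]    # 303 See Other
    return 303, representations["text/html"]           # default: human-readable

status, location = resolve("B-W-12345", "application/rdf+xml;q=0.9")
```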
Web-enabling technologies for the factory floor: a web-enabling strategy for emanufacturing
NASA Astrophysics Data System (ADS)
Velez, Ricardo; Lastra, Jose L. M.; Tuokko, Reijo O.
2001-10-01
This paper addresses the different technologies available for Web-enabling the factory floor. It gives an overview of the importance of Web-enabling the factory floor for applying the concepts of flexible and intelligent manufacturing in conjunction with e-commerce. The last section defines a Web-enabling strategy for application in eManufacturing. All of this is done within the scope of the electronics manufacturing industry, so every application, technology, or related matter is presented in that context.
Web-video-mining-supported workflow modeling for laparoscopic surgeries.
Liu, Rui; Zhang, Xiaoli; Zhang, Hao
2016-11-01
Because quality assurance is a strong concern in advanced surgeries, intelligent surgical systems are expected to possess knowledge, such as the surgical workflow model (SWM), to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated with 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.
A novel adaptive Cuckoo search for optimal query plan generation.
Gomathi, Ramalingam; Sharmila, Dhandapani
2014-01-01
The rapid, day-by-day emergence of new web pages has driven the development of semantic web technology. The World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve execution time when querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization over semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS), for querying and generating optimal query plans for large RDF graphs, is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The efficiency of the algorithm is tested and the results are documented.
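The kind of metaheuristic plan search the abstract describes can be sketched generically. In the sketch below, the cost model, the swap-based "Lévy flight" perturbation, and all parameters are illustrative assumptions for a join-ordering toy problem, not the authors' actual ACS formulation:

```python
import random

def plan_cost(order):
    # Hypothetical cost model: penalize placing high-selectivity
    # predicates late in the join order (stand-in for real RDF statistics).
    return sum(i * sel for i, sel in enumerate(order))

def adaptive_cuckoo_search(selectivities, n_nests=10, iters=200, seed=1):
    """Toy ACS: each nest is a candidate predicate ordering (query plan)."""
    rng = random.Random(seed)
    nests = [rng.sample(selectivities, len(selectivities)) for _ in range(n_nests)]
    best = min(nests, key=plan_cost)
    for t in range(iters):
        # "Lévy flight" stand-in: perturb a random nest by swapping two
        # predicates; the adaptive step shrinks the swap span over time.
        i = rng.randrange(n_nests)
        cand = nests[i][:]
        span = max(1, int(len(cand) * (1 - t / iters)))
        a = rng.randrange(len(cand))
        b = min(len(cand) - 1, a + rng.randrange(1, span + 1))
        cand[a], cand[b] = cand[b], cand[a]
        if plan_cost(cand) < plan_cost(nests[i]):
            nests[i] = cand
        # Abandon the worst nest with a fresh random plan (discovery step).
        nests.sort(key=plan_cost)
        nests[-1] = rng.sample(selectivities, len(selectivities))
        if plan_cost(nests[0]) < plan_cost(best):
            best = nests[0]
    return best

sels = [0.9, 0.1, 0.5, 0.05, 0.7]
best = adaptive_cuckoo_search(sels)
print(best)
```

The returned ordering should place the cheap (high-selectivity) predicates early, which is the effect a real RDF optimizer seeks when ordering triple patterns.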
NASA Astrophysics Data System (ADS)
Plessel, T.; Szykman, J.; Freeman, M.
2012-12-01
EPA's Remote Sensing Information Gateway (RSIG) is a widely used free applet and web service for quickly and easily retrieving, visualizing and saving user-specified subsets of atmospheric data - by variable, geographic domain and time range. Petabytes of available data include thousands of variables from a set of NASA and NOAA satellites, aircraft, ground stations and EPA air-quality models. The RSIG applet is used by atmospheric researchers and uses the rsigserver web service to obtain data and images. The rsigserver web service is compliant with the Open Geospatial Consortium Web Coverage Service (OGC-WCS) standard to facilitate data discovery and interoperability. Since rsigserver is publicly accessible, it can be (and is) used by other applications. This presentation describes the architecture and technical implementation details of this successful system with an emphasis on achieving convenience, high-performance, data integrity and security.
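Because rsigserver follows OGC-WCS, a data subset by variable, domain, and time range can be requested as a plain GetCoverage URL. The endpoint, coverage name, and format below are hypothetical (the abstract does not give them); only core WCS 1.0.0 key-value parameters are shown:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; the real rsigserver URL is not given in the abstract.
BASE = "https://example.epa.gov/rsigserver"

def wcs_getcoverage_url(coverage, bbox, time_range, fmt="ascii"):
    """Build an OGC-WCS 1.0.0 GetCoverage request selecting a coverage
    (variable) subset by geographic bounding box and time range."""
    params = {
        "SERVICE": "WCS",
        "VERSION": "1.0.0",
        "REQUEST": "GetCoverage",
        "COVERAGE": coverage,                      # assumed coverage name
        "BBOX": ",".join(str(v) for v in bbox),    # minLon,minLat,maxLon,maxLat
        "TIME": time_range,                        # ISO 8601 start/end
        "FORMAT": fmt,
    }
    return BASE + "?" + urlencode(params)

url = wcs_getcoverage_url("aqs.ozone", (-90, 30, -80, 40),
                          "2008-06-21T00:00:00Z/2008-06-22T23:59:59Z")
print(url)
```

Any WCS-aware client can issue the same request, which is what makes the service usable by applications beyond the RSIG applet.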
2010-09-01
absorption, limiting the effectiveness of intelligence collection and weapon systems that operate in those portions of the spectrum by reducing the amount of... Intelligence Agency Web site in NITF 2.0 format. This study used basic imagery from DigitalGlobe (QuickBird, WorldView-1). This imagery is not...databases. Militarily, FASTEC could enable in-scene correction in intelligence collection and possibly influence electro- optical targeting decisions
Increasing Parent Engagement in Student Learning Using an Intelligent Tutoring System
ERIC Educational Resources Information Center
Broderick, Zachary; O'Connor, Christine; Mulcahy, Courtney; Heffernan, Neil; Heffernan, Christina
2011-01-01
This study demonstrates the ability of an Intelligent Tutoring System (ITS) to increase parental engagement in student learning. A parent notification feature was developed for the web-based ASSISTment ITS that allows parents to log into their own accounts and access detailed data about their students' performance. Parents from a local middle…
77 FR 69491 - Privacy Act of 1974: System of Records; Secure Flight Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-19
... page at http://www.regulations.gov ; (2) Accessing the Government Printing Office's Web page at http...) of the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA),\\4\\ Congress directed TSA and... Intelligence Agency, the Secretary of the Treasury, and the Secretary of Defense. The Attorney General, acting...
An intelligent remote monitoring system for artificial heart.
Choi, Jaesoon; Park, Jun W; Chung, Jinhan; Min, Byoung G
2005-12-01
A web-based database system for intelligent remote monitoring of an artificial heart has been developed. It is important for patients with an artificial heart implant to be discharged from the hospital after an appropriate stabilization period for better recovery and quality of life. Reliable continuous remote monitoring systems for these patients with life support devices are therefore gaining practical importance. The authors have developed a remote monitoring system for this purpose that consists of a portable/desktop monitoring terminal, a database for continuous recording of patient and device status, a web-based data access system with which clinicians can access real-time patient and device status data and past history data, and an intelligent diagnosis algorithm module that noninvasively estimates blood pump output and automatically classifies the device status. The system has been tested with data generation emulators installed on remote sites for simulation study, and in two cases of animal experiments conducted at remote facilities. The system showed acceptable functionality and reliability. The intelligence algorithm also showed acceptable practicality in an application to animal experiment data.
Designing a patient monitoring system for bipolar disorder using Semantic Web technologies.
Thermolia, Chryssa; Bei, Ekaterini S; Petrakis, Euripides G M; Kritsotakis, Vangelis; Tsiknakis, Manolis; Sakkalis, Vangelis
2015-01-01
The new movement to personalize treatment plans and improve prediction capabilities is greatly facilitated by intelligent remote patient monitoring and risk prevention. This paper focuses on patients suffering from bipolar disorder, a mental illness characterized by severe mood swings. We exploit the advantages of Semantic Web and Electronic Health Record technologies to develop a patient monitoring platform to support clinicians. Relying on intelligent filtering of clinical evidence-based information and individual-specific knowledge, we aim to provide recommendations for treatment and monitoring at the appropriate time, or to issue alerts for serious shifts in mood and patients' non-response to treatment.
Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S
2009-11-01
Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.
Software Template for Instruction in Mathematics
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Moebes, Travis A.; Beall, Anna
2005-01-01
Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests containing both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT can also function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students thereby discover mathematical fundamentals and come to understand mathematics more deeply than they could through simple memorization.
Availability of the OGC geoprocessing standard: March 2011 reality check
NASA Astrophysics Data System (ADS)
Lopez-Pellicer, Francisco J.; Rentería-Agualimpia, Walter; Béjar, Rubén; Muro-Medrano, Pedro R.; Zarazaga-Soria, F. Javier
2012-10-01
This paper presents an investigation about the servers available in March 2011 conforming to the Web Processing Service interface specification published by the geospatial standards organization Open Geospatial Consortium (OGC) in 2007. This interface specification gives support to standard Web-based geoprocessing. The data used in this research were collected using a focused crawler configured for finding OGC Web services. The research goals are (i) to provide a reality check of the availability of Web Processing Service servers, (ii) to provide quantitative data about the use of different features defined in the standard that are relevant for a scalable Geoprocessing Web (e.g. long-running processes, Web-accessible data outputs), and (iii) to test if the advances in the use of search engines and focused crawlers for finding Web services can be applied for finding geoscience processing systems. Research results show the feasibility of the discovery approach and provide data about the implementation of the Web Processing Service specification. These results also show extensive use of features related to scalability, except for those related to technical and semantic interoperability.
WWW.cell Biology Education: Evolution Web Sites
ERIC Educational Resources Information Center
Liu, Dennis
2005-01-01
The debate over teaching evolution has once again reached a fever pitch in the United States. Earnest nineteenth-century clashes between scientific and religious worldviews have given way to the politically charged arguments of creation science and now intelligent design. The Web site of the National Center for Science Education (NCSE;…
An Intelligent Semantic E-Learning Framework Using Context-Aware Semantic Web Technologies
ERIC Educational Resources Information Center
Huang, Weihong; Webster, David; Wood, Dawn; Ishaya, Tanko
2006-01-01
Recent developments of e-learning specifications such as Learning Object Metadata (LOM), Sharable Content Object Reference Model (SCORM), Learning Design and other pedagogy research in semantic e-learning have shown a trend of applying innovative computational techniques, especially Semantic Web technologies, to promote existing content-focused…
Helping Students Choose Tools To Search the Web.
ERIC Educational Resources Information Center
Cohen, Laura B.; Jacobson, Trudi E.
2000-01-01
Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…
ERIC Educational Resources Information Center
Smith, Peter, Ed.
Topics addressed by the papers included in these proceedings include: multimedia in the classroom; World Wide Web site development; the evolution of academic library services; a Web-based literature course; development of a real-time intelligent network environment; serving grades over the Internet; e-mail over a Web browser; using technology to…
Conferences as Information Grounds: Web Site Evaluation with a Mobile Usability Laboratory
ERIC Educational Resources Information Center
Bossaller, Jenny S.; Paul, Anindita; Hill, Heather; Wang, Jiazhen; Erdelez, Sanda
2008-01-01
This article describes an "on-the-road" usability study and explains the study's methodological challenges, solutions, and recommendations. The study concerned a library-consortium website, which is a communication and educational tool for librarians who are physically dispersed throughout the state, and an intranet for remote users.…
The CTSA Consortium's Catalog of Assets for Translational and Clinical Health Research (CATCHR)
Mapes, Brandy; Basford, Melissa; Zufelt, Anneliese; Wehbe, Firas; Harris, Paul; Alcorn, Michael; Allen, David; Arnim, Margaret; Autry, Susan; Briggs, Michael S.; Carnegie, Andrea; Chavis‐Keeling, Deborah; De La Pena, Carlos; Dworschak, Doris; Earnest, Julie; Grieb, Terri; Guess, Marilyn; Hafer, Nathaniel; Johnson, Tesheia; Kasper, Amanda; Kopp, Janice; Lockie, Timothy; Lombardo, Vincetta; McHale, Leslie; Minogue, Andrea; Nunnally, Beth; O'Quinn, Deanna; Peck, Kelly; Pemberton, Kieran; Perry, Cheryl; Petrie, Ginny; Pontello, Andria; Posner, Rachel; Rehman, Bushra; Roth, Deborah; Sacksteder, Paulette; Scahill, Samantha; Schieri, Lorri; Simpson, Rosemary; Skinner, Anne; Toussant, Kim; Turner, Alicia; Van der Put, Elaine; Wasser, June; Webb, Chris D.; Williams, Maija; Wiseman, Lori; Yasko, Laurel; Pulley, Jill
2014-01-01
The 61 CTSA Consortium sites are home to valuable programs and infrastructure supporting translational science and all are charged with ensuring that such investments translate quickly to improved clinical care. Catalog of Assets for Translational and Clinical Health Research (CATCHR) is the Consortium's effort to collect and make available information on programs and resources to maximize efficiency and facilitate collaborations. By capturing information on a broad range of assets supporting the entire clinical and translational research spectrum, CATCHR aims to provide the necessary infrastructure and processes to establish and maintain an open-access, searchable database of consortium resources to support multisite clinical and translational research studies. Data are collected using rigorous, defined methods, with the resulting information made visible through an integrated, searchable Web-based tool. Additional easy-to-use Web tools assist resource owners in validating and updating resource information over time. In this paper, we discuss the design and scope of the project, data collection methods, current results, and future plans for development and sustainability. With increasing pressure on research programs to avoid redundancy, CATCHR aims to make available information on programs and core facilities to maximize efficient use of resources. PMID:24456567
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
ERIC Educational Resources Information Center
Wijekumar, Kausalai; Meyer, Bonnie J. F.; Lei, Pui-Wa; Lin, Yu-Chu; Johnson, Lori A.; Spielvogel, James A.; Shurmatz, Kathryn M.; Ray, Melissa; Cook, Michael
2014-01-01
This article reports on a large scale randomized controlled trial to study the efficacy of a web-based intelligent tutoring system for the structure strategy designed to improve content area reading comprehension. The research was conducted with 128 fifth-grade classrooms within 12 school districts in rural and suburban settings. Classrooms within…
ERIC Educational Resources Information Center
Erdemir, Mustafa; Ingeç, Sebnem Kandil
2016-01-01
The purpose of this study is to identify pre-service primary mathematics teachers' views on Web-based Intelligent Tutoring Systems (WBITS) in relation to their usability and influence on teaching. A survey method was used. The study was conducted with 43 students attending the mathematics teaching program under the department of elementary…
The Social Semantic Web in Intelligent Learning Environments: State of the Art and Future Challenges
ERIC Educational Resources Information Center
Jovanovic, Jelena; Gasevic, Dragan; Torniai, Carlo; Bateman, Scott; Hatala, Marek
2009-01-01
Today's technology-enhanced learning practices cater to students and teachers who use many different learning tools and environments and are used to a paradigm of interaction derived from open, ubiquitous, and socially oriented services. In this context, a crucial issue for education systems in general, and for Intelligent Learning Environments…
Analysing Student Programs in the PHP Intelligent Tutoring System
ERIC Educational Resources Information Center
Weragama, Dinesha; Reye, Jim
2014-01-01
Programming is a subject that many beginning students find difficult. The PHP Intelligent Tutoring System (PHP ITS) has been designed with the aim of making it easier for novices to learn the PHP language in order to develop dynamic web pages. Programming requires practice. This makes it necessary to include practical exercises in any ITS that…
NASA Astrophysics Data System (ADS)
Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.
1997-12-01
This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and its human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.
The Semantic Web: From Representation to Realization
NASA Astrophysics Data System (ADS)
Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.
A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits, including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards, which combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the Semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.
Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.
Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan
2017-10-01
Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 ( www.comparativego.com ). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
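The core statistic behind most GO term-enrichment analyses (the abstract does not specify the tool's exact method, so this is a generic sketch) is a hypergeometric upper-tail test, computable with the standard library alone:

```python
from math import comb

def go_enrichment_p(N, K, n, k):
    """Hypergeometric upper-tail p-value: probability of observing k or
    more genes annotated to a GO term in a sample of n genes, when K of
    the N genes in the genome carry that annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Toy numbers: 20 of 1000 genes carry the term; 5 of 50 sampled genes do.
p = go_enrichment_p(1000, 20, 50, 5)
print(round(p, 6))
```

A small p-value here indicates the term is over-represented in the gene list relative to chance, which is what drives the "Biological Process" style summaries such tools report.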
Building asynchronous geospatial processing workflows with web services
NASA Astrophysics Data System (ADS)
Zhao, Peisheng; Di, Liping; Yu, Genong
2012-02-01
Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
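The asynchronous submit-then-poll pattern described above can be sketched independently of WS-BPEL. The class and method names below are illustrative, not OGC-defined operations; the point is that the initial request returns a job id immediately while the long-running process continues in the background:

```python
import threading
import time
import uuid

class AsyncProcessingService:
    """Minimal submit/poll service mimicking an asynchronous WPS-style
    geoprocessing endpoint: execute() returns a job id at once and the
    client polls get_status() until the process finishes."""
    def __init__(self):
        self.jobs = {}

    def execute(self, func, *args):
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = {"status": "Running", "result": None}
        def run():
            result = func(*args)
            self.jobs[job_id] = {"status": "Succeeded", "result": result}
        threading.Thread(target=run, daemon=True).start()
        return job_id                      # client resumes its own work now

    def get_status(self, job_id):
        return self.jobs[job_id]["status"]

    def get_result(self, job_id):
        return self.jobs[job_id]["result"]

def slow_reproject(n):
    time.sleep(0.2)                        # stand-in for a long geoprocess
    return [i * 2 for i in range(n)]

svc = AsyncProcessingService()
job = svc.execute(slow_reproject, 3)
while svc.get_status(job) != "Succeeded":  # poll until completion
    time.sleep(0.05)
print(svc.get_result(job))                 # → [0, 2, 4]
```

Real deployments replace polling with callbacks or message queues when the workflow engine supports them, but the submit/poll shape is the lowest common denominator for asynchronous OGC services.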
Crowdteaching: Supporting Teaching as Designing in Collective Intelligence Communities
ERIC Educational Resources Information Center
Recker, Mimi; Yuan, Min; Ye, Lei
2014-01-01
The widespread availability of high-quality Web-based content offers new potential for supporting teachers as designers of curricula and classroom activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage from the best that their peers have to offer to support a collective…
"CrowdTeaching": Supporting Teacher Collective Intelligence Communities
ERIC Educational Resources Information Center
Recker, Mimi M.; Yuan, Min; Ye, Lei
2013-01-01
The widespread availability of high-quality Web-based tools and content offers new promise and potential for supporting teachers as creators of instructional activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage from the best that their peers have to offer to support a…
W.E.B. DuBois's Challenge to Scientific Racism.
ERIC Educational Resources Information Center
Taylor, Carol M.
1981-01-01
Proposes that a direct and authoritative challenge to the scientific racism of the late nineteenth and early twentieth centuries was urgently needed, and that mounting this challenge was one of the leading rhetorical contributions of W.E.B. DuBois. Specifically examines three issues: social Darwinism, the eugenics movement, and psychologists' measurement of intelligence.…
Information Design for Visualizing History Museum Artifacts
ERIC Educational Resources Information Center
Chen, Yulin; Lai, Tingsheng; Yasuda, Takami; Yokoi, Shigeki
2011-01-01
In the past few years, museum visualization systems have become a hot topic that attracts many researchers' interests. Several systems provide Web services for browsing museum collections through the Web. In this paper, we proposed an intelligent museum system for history museum artifacts, and described a study in which we enable access to China…
The Future of the Web, Intelligent Devices, and Education.
ERIC Educational Resources Information Center
Strauss, Howard
1999-01-01
Examines past trends in hardware, software, networking, and education, in an attempt to determine where they are going and what their broad implications might be. Speculates on what will replace the World Wide Web. Describes new applications and telematons along with a new paradigm for education called SMILE (Software-Managed Instruction,…
Features: Real-Time Adaptive Feature and Document Learning for Web Search.
ERIC Educational Resources Information Center
Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai
2001-01-01
Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…
In Pursuit of Alternatives in ELT Methodology: WebQuests
ERIC Educational Resources Information Center
Sen, Ayfer; Neufeld, Steve
2006-01-01
Although the Internet has opened up a vast new source of information for university students to use and explore, many students lack the skills to find, critically evaluate and intelligently exploit web-based resources. This problem is accentuated in English-medium universities where students learn and use English as a foreign language. In these…
Collaborative Learning and Knowledge-Construction through a Knowledge-Based WWW Authoring Tool.
ERIC Educational Resources Information Center
Haugsjaa, Erik
This paper outlines hurdles to using the World Wide Web for learning, specifically in a collaborative knowledge-construction environment. Theoretical solutions based directly on existing Web environments, as well as on research and system prototypes in the areas of Intelligent Tutoring Systems (ITS) and ITS authoring systems, are suggested. Topics…
ERIC Educational Resources Information Center
Kim, Paul; Hong, Ji-Seong; Bonk, Curtis; Lim, Gloria
2011-01-01
A Web 2.0 environment that is coupled with emerging multimodal interaction tools can have considerable influence on team learning outcomes. Today, technologies supporting social networking, collective intelligence, emotional interaction, and virtual communication are introducing new forms of collaboration that are profoundly impacting education.…
Automatic home medical product recommendation.
Luo, Gang; Thomas, Selena B; Tang, Chunqiang
2012-04-01
Web-based personal health records (PHRs) are being widely deployed. To improve PHR's capability and usability, we proposed the concept of intelligent PHR (iPHR). In this paper, we use automatic home medical product recommendation as a concrete application to demonstrate the benefits of introducing intelligence into PHRs. In this new application domain, we develop several techniques to address the emerging challenges. Our approach uses treatment knowledge and nursing knowledge, and extends the language modeling method to (1) construct a topic-selection input interface for recommending home medical products, (2) produce a global ranking of Web pages retrieved by multiple queries, and (3) provide diverse search results. We demonstrate the effectiveness of our techniques using USMLE medical exam cases.
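The "language modeling method" the authors extend is, in the standard information-retrieval sense, query-likelihood document ranking. A minimal Dirichlet-smoothed sketch (with made-up product descriptions, not the paper's data or its multi-query extension) looks like this:

```python
import math
from collections import Counter

def lm_score(query, doc_tokens, collection_tf, collection_len, mu=2000):
    """Query-likelihood score log P(q|d) with Dirichlet smoothing:
    unseen query terms fall back to their collection-wide frequency."""
    tf = Counter(doc_tokens)
    dlen = len(doc_tokens)
    score = 0.0
    for term in query.split():
        # Add-one smoothed collection probability over the vocabulary.
        p_coll = (collection_tf.get(term, 0) + 1) / (collection_len + len(collection_tf))
        p = (tf.get(term, 0) + mu * p_coll) / (dlen + mu)
        score += math.log(p)
    return score

# Toy "home medical product" pages (invented for illustration).
docs = {
    "d1": "walker mobility aid elderly home support".split(),
    "d2": "blood pressure monitor home cuff".split(),
}
coll = Counter(t for toks in docs.values() for t in toks)
coll_len = sum(coll.values())
query = "home mobility walker"
ranked = sorted(docs, key=lambda d: lm_score(query, docs[d], coll, coll_len),
                reverse=True)
print(ranked)   # d1 matches more query terms, so it should rank first
```

Producing one global ranking from several such per-query scores is the kind of extension the abstract describes.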
NASA Astrophysics Data System (ADS)
Zhang, Z.; Toyer, S.; Brizhinev, D.; Ledger, M.; Taylor, K.; Purss, M. B. J.
2016-12-01
We are witnessing a rapid proliferation of geoscientific and geospatial data from an increasing variety of sensors and sensor networks. This data presents great opportunities to resolve cross-disciplinary problems. However, working with it often requires an understanding of file formats and protocols seldom used outside of scientific computing, potentially limiting the data's value to other disciplines. In this paper, we present a new approach to serving satellite coverage data on the web, which improves ease-of-access using the principles of linked data. Linked data adapts the concepts and protocols of the human-readable web to machine-readable data; the number of developers familiar with web technologies makes linked data a natural choice for bringing coverages to a wider audience. Our approach to using linked data also makes it possible to efficiently service high-level SPARQL queries: for example, "Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016" can easily be encoded in a single query. We validate the new approach, which we call QBCov, with a reference implementation of the entire stack, including a simple web-based client for interacting with Landsat observations. In addition to demonstrating the utility of linked data for publishing coverages, we investigate the heretofore unexplored relationship between Discrete Global Grid Systems (DGGS) and linked data. Our conclusions are informed by the aforementioned reference implementation of QBCov, which is backed by a hierarchical file format designed around the rHEALPix DGGS. Not only does the choice of a DGGS-based representation provide an efficient mechanism for accessing large coverages at multiple scales, but the ability of DGGS to produce persistent, unique identifiers for spatial regions is especially valuable in a linked data context. This suggests that DGGS has an important role to play in creating sustainable and scalable linked data infrastructures. 
QBCov is being developed as a contribution to the Spatial Data on the Web working group--a joint activity of the Open Geospatial Consortium and World Wide Web Consortium.
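The abstract's example query ("Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016") can be sketched as parameterized SPARQL. The prefixes, predicates, and sensor vocabulary below are hypothetical placeholders; QBCov's actual schema is not given in the abstract.

```python
# Sketch: composing the kind of high-level coverage query the QBCov
# abstract cites. Vocabulary terms are illustrative assumptions only.

def build_observation_query(sensor, place, start, end):
    """Return a SPARQL SELECT retrieving observations by `sensor`
    covering `place` within the [start, end] time window."""
    return f"""
PREFIX qb:   <http://purl.org/linked-data/cube#>
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX time: <http://www.w3.org/2006/time#>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT ?obs WHERE {{
  ?obs a qb:Observation ;
       geo:hasGeometry ?geom ;
       time:hasTime ?t ;
       <http://example.org/vocab/sensor> "{sensor}" .
  ?geom geo:sfIntersects "{place}" .   # placeholder spatial test
  FILTER(?t >= "{start}"^^xsd:dateTime &&
         ?t <= "{end}"^^xsd:dateTime)
}}"""

query = build_observation_query("Landsat ETM+", "San Francisco",
                                "2016-07-01T00:00:00Z",
                                "2016-08-31T23:59:59Z")
```

A real deployment would submit this string to a SPARQL endpoint; the point here is only that the whole request fits in a single declarative query.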
The transcription factor encyclopedia.
Yusuf, Dimas; Butland, Stefanie L; Swanson, Magdalena I; Bolotin, Eugene; Ticoll, Amy; Cheung, Warren A; Zhang, Xiao Yu Cindy; Dickman, Christopher T D; Fulton, Debra L; Lim, Jonathan S; Schnabl, Jake M; Ramos, Oscar H P; Vasseur-Cognet, Mireille; de Leeuw, Charles N; Simpson, Elizabeth M; Ryffel, Gerhart U; Lam, Eric W-F; Kist, Ralf; Wilson, Miranda S C; Marco-Ferreres, Raquel; Brosens, Jan J; Beccari, Leonardo L; Bovolenta, Paola; Benayoun, Bérénice A; Monteiro, Lara J; Schwenen, Helma D C; Grontved, Lars; Wederell, Elizabeth; Mandrup, Susanne; Veitia, Reiner A; Chakravarthy, Harini; Hoodless, Pamela A; Mancarelli, M Michela; Torbett, Bruce E; Banham, Alison H; Reddy, Sekhar P; Cullum, Rebecca L; Liedtke, Michaela; Tschan, Mario P; Vaz, Michelle; Rizzino, Angie; Zannini, Mariastella; Frietze, Seth; Farnham, Peggy J; Eijkelenboom, Astrid; Brown, Philip J; Laperrière, David; Leprince, Dominique; de Cristofaro, Tiziana; Prince, Kelly L; Putker, Marrit; del Peso, Luis; Camenisch, Gieri; Wenger, Roland H; Mikula, Michal; Rozendaal, Marieke; Mader, Sylvie; Ostrowski, Jerzy; Rhodes, Simon J; Van Rechem, Capucine; Boulay, Gaylor; Olechnowicz, Sam W Z; Breslin, Mary B; Lan, Michael S; Nanan, Kyster K; Wegner, Michael; Hou, Juan; Mullen, Rachel D; Colvin, Stephanie C; Noy, Peter John; Webb, Carol F; Witek, Matthew E; Ferrell, Scott; Daniel, Juliet M; Park, Jason; Waldman, Scott A; Peet, Daniel J; Taggart, Michael; Jayaraman, Padma-Sheela; Karrich, Julien J; Blom, Bianca; Vesuna, Farhad; O'Geen, Henriette; Sun, Yunfu; Gronostajski, Richard M; Woodcroft, Mark W; Hough, Margaret R; Chen, Edwin; Europe-Finner, G Nicholas; Karolczak-Bayatti, Magdalena; Bailey, Jarrod; Hankinson, Oliver; Raman, Venu; LeBrun, David P; Biswal, Shyam; Harvey, Christopher J; DeBruyne, Jason P; Hogenesch, John B; Hevner, Robert F; Héligon, Christophe; Luo, Xin M; Blank, Marissa Cathleen; Millen, Kathleen Joyce; Sharlin, David S; Forrest, Douglas; Dahlman-Wright, Karin; Zhao, Chunyan; Mishima, Yuriko; 
Sinha, Satrajit; Chakrabarti, Rumela; Portales-Casamar, Elodie; Sladek, Frances M; Bradley, Philip H; Wasserman, Wyeth W
2012-01-01
Here we present the Transcription Factor Encyclopedia (TFe), a new web-based compendium of mini review articles on transcription factors (TFs) that is founded on the principles of open access and collaboration. Our consortium of over 100 researchers has collectively contributed over 130 mini review articles on pertinent human, mouse and rat TFs. Notable features of the TFe website include a high-quality PDF generator and web API for programmatic data retrieval. TFe aims to rapidly educate scientists about the TFs they encounter through the delivery of succinct summaries written and vetted by experts in the field. TFe is available at http://www.cisreg.ca/tfe.
ERIC Educational Resources Information Center
Godby, Carol Jean
2013-01-01
This document describes a proposed alignment between BIBFRAME (Bibliographic Framework) and a model being explored by the Online Computer Library Center (OCLC), with extensions proposed by the Schema Bib Extend project, a World Wide Web Consortium-sponsored (W3C-sponsored) community group tasked with enhancing Schema.org for the description of…
2015-07-01
Acronyms: ASCII (American Standard Code for Information Interchange); DAU (data acquisition unit); DDML (data display markup language); IHAL…; …Transfer Standard; URI (uniform resource identifier); W3C (World Wide Web Consortium); XML (extensible markup language); XSD (XML schema definition). XML Style Guide, RCC 125-15, July 2015, Introduction: The next generation of telemetry systems will rely heavily on extensible markup language (XML
2018-02-01
"Defense Actions Against Test-Set Attacks", In Proceedings of the Conference on Artificial Intelligence (AAAI), San Francisco, CA, February 2017. … Scott Alfeld, Jerry Zhu and Paul Barford, "Data Poisoning Attacks against Autoregressive Models", In Proceedings of the Conference on Artificial Intelligence (AAAI), Phoenix, AZ, February 2016. 7) Aaron Cahn, Scott Alfeld, Paul Barford and S. Muthukrishnan, "An Empirical Study of Web Cookies"
Polite Web-Based Intelligent Tutors: Can They Improve Learning in Classrooms?
ERIC Educational Resources Information Center
McLaren, Bruce M.; DeLeeuw, Krista E.; Mayer, Richard E.
2011-01-01
Should an intelligent software tutor be polite, in an effort to motivate and cajole students to learn, or should it use more direct language? If it should be polite, under what conditions? In a series of studies in different contexts (e.g., lab versus classroom) with a variety of students (e.g., low prior knowledge versus high prior knowledge),…
Source Update Capture in Information Agents
NASA Technical Reports Server (NTRS)
Ashish, Naveen; Kulkarni, Deepak; Wang, Yao
2003-01-01
In this paper we present strategies for successfully capturing updates at Web sources. Web-based information agents provide integrated access to autonomous Web sources that can get updated. For many information agent applications we are interested in knowing when a Web source to which the application provides access has been updated. We may also be interested in capturing all the updates at a Web source over a period of time, i.e., detecting the updates and, for each update, retrieving and storing the new version of the data. Previous work on update and change detection by polling does not adequately address this problem. We present strategies for intelligently polling a Web source to efficiently capture changes at the source.
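One simple update-capture strategy in the spirit of this abstract can be sketched as follows: detect a change by hashing successive fetches of a page, and adapt the polling interval (poll faster after a change, back off while the source is quiet). The interval bounds and halving/doubling rule are illustrative assumptions, not the paper's actual strategies.

```python
import hashlib

MIN_INTERVAL, MAX_INTERVAL = 60, 3600  # polling bounds, in seconds

def detect_update(previous_hash, content):
    """Return (changed, new_hash) for freshly fetched page content."""
    new_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return new_hash != previous_hash, new_hash

def next_interval(interval, changed):
    """Shrink the interval after an update, grow it while unchanged."""
    if changed:
        return max(MIN_INTERVAL, interval // 2)
    return min(MAX_INTERVAL, interval * 2)
```

A polling agent would loop: fetch, call `detect_update`, store the new version when `changed` is true, then sleep for `next_interval` seconds.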
Common ground: the HealthWeb project as a model for Internet collaboration.
Redman, P M; Kelly, J A; Albright, E D; Anderson, P F; Mulder, C; Schnell, E H
1997-01-01
The establishment of the HealthWeb project by twelve health sciences libraries provides a collaborative means of organizing and enhancing access to Internet resources for the international health sciences community. The project is based on the idea that the Internet is common ground for all libraries and that through collaboration a more comprehensive, robust, and long-lasting information product can be maintained. The participants include more than seventy librarians from the health sciences libraries of the Committee on Institutional Cooperation (CIC), an academic consortium of twelve major research universities. The Greater Midwest Region of the National Network of Libraries of Medicine serves as a cosponsor. HealthWeb is an information resource that provides access to evaluated, annotated Internet resources via the World Wide Web. The project vision as well as the progress reported on its implementation may serve as a model for other collaborative Internet projects. PMID:9431420
Generalized Intelligent Framework for Tutoring (GIFT) Cloud/Virtual Open Campus Quick-Start Guide
2016-03-01
This document serves as the quick-start guide for GIFT Cloud, the web-based… to users with a GIFT Account at no cost. GIFT Cloud is a new implementation of GIFT. This web-based application allows learners, authors, and… Requirements for GIFT Cloud: GIFT Cloud is accessed via a web browser. Officially, GIFT Cloud has been tested to work on
NASA Astrophysics Data System (ADS)
Sargis, J. C.; Gray, W. A.
1999-03-01
The APWS allows user-friendly access to several legacy systems, each of which would normally demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 operating system, Oracle, Oracle Power Objects, artificial intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how this access could be automated by taking advantage of the expert system, object-oriented programming, and intelligent relational database tools within the APWS.
Plugin free remote visualization in the browser
NASA Astrophysics Data System (ADS)
Tamm, Georg; Slusallek, Philipp
2015-01-01
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state of the art of technologies which enable plugin-free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
Novel inter and intra prediction tools under consideration for the emerging AV1 video codec
NASA Astrophysics Data System (ADS)
Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil
2017-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services or intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents in a Web Services System. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
ERIC Educational Resources Information Center
Wijekumar, Kausalai; Meyer, Bonnie J. F.; Lei, Puiwa
2017-01-01
Reading comprehension in the content areas is a challenge for many middle grade students. Text structure-based instruction has yielded positive outcomes in reading comprehension at all grade levels in small and large studies. The text structure strategy delivered via the web, called Intelligent Tutoring System for the Text Structure Strategy…
Mining Social Media and Web Searches For Disease Detection
Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole
2013-01-01
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate. PMID:25170475
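The keyword-based blog screening this article mentions can be sketched as a small counting routine: tally epidemic keywords in each day's posts and flag days whose totals exceed a baseline threshold. The keyword list and threshold rule are illustrative assumptions, not the article's actual method.

```python
# Minimal sketch of epidemic-keyword screening over blog posts.
KEYWORDS = {"fever", "outbreak", "influenza", "cough"}

def keyword_hits(post):
    """Number of words in `post` that match an epidemic keyword."""
    return sum(1 for w in post.lower().split()
               if w.strip(".,!?") in KEYWORDS)

def flag_days(posts_by_day, threshold=3):
    """Return days whose total keyword count exceeds `threshold`."""
    return [day for day, posts in posts_by_day.items()
            if sum(keyword_hits(p) for p in posts) > threshold]
```

As the article cautions, such a signal only complements traditional epidemic intelligence; flagged days would still need verification against authoritative sources.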
NASA Astrophysics Data System (ADS)
Morin, P. J.; Pundsack, J. W.; Carbotte, S. M.; Tweedie, C. E.; Grunow, A.; Lazzara, M. A.; Carpenter, P.; Sjunneskog, C. M.; Yarmey, L.; Bauer, R.; Adrian, B. M.; Pettit, J.
2014-12-01
The U.S. National Science Foundation Antarctic & Arctic Data Consortium (a2dc) is a collaboration of research centers and support organizations that provide polar scientists with data and tools to complete their research objectives. From searching historical weather observations to submitting geologic samples, polar researchers utilize the a2dc to search and contribute to the wealth of polar scientific and geospatial data. The goals of the Antarctic & Arctic Data Consortium are to increase visibility in the research community of the services provided by resource and support facilities. Closer integration of individual facilities into a "one stop shop" will make it easier for researchers to take advantage of services and products provided by consortium members. The a2dc provides a common web portal where investigators can go to access data and samples needed to build research projects, develop student projects, or do virtual field reconnaissance without having to utilize expensive logistics to go into the field. Participation by the international community is crucial for the success of a2dc. There are 48 nations that are signatories of the Antarctic Treaty, and 8 sovereign nations in the Arctic. Many of these organizations have unique capabilities and data that would benefit US-funded polar science and vice versa. We'll present an overview of the Antarctic & Arctic Data Consortium, current participating organizations, challenges & opportunities, and plans to better coordinate data through a geospatial strategy and infrastructure.
1990-12-01
personality of other cognitive entities in the world. POPLAR is a step towards an AI system whose behavior is psychologically justified and can provide the… natural language capability (i.e., the verbal behavior of 1) and 5) are not addressed). Nor do we tackle in any complete and principled manner the… human cognitive behavior. This property makes POPLAR 1.3 personality-oriented, i.e., provision is made in the present model for introducing personality
1989-10-01
Vol. 18, No. 5, 1975, pp. 253-263. [CAR84] D.B. Carlin, J.P. Bednarz, C.J. Kaiser, J.C. Connolly, M.G. Harvey, "Multichannel optical recording using… Kellog [31] takes a similar approach as ILEX in the sense that it uses existing systems rather than developing specialized hardware (the Xerox 1100… parallel complexity. In Proceedings of the International Conference on Database Theory, pages 1-30, September 1986. [31] C. Kellog. From data management to
1990-12-01
Implementation of Coupled System; 15.4. Case Studies & Implementation Examples; 15.4.1. The Case Studies of Coupled System; 15.4.2. Example: Coupled System… occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature… by the qualitative model, the appropriate mathematical model is invoked. 5) The results are verified. If successful, stop. Else go to (2) and use an
1990-12-01
expected values. However, because the same good/bad output pattern of a device always gives rise to the same initial ordering, the method has its limitations… For any device and good/bad output pattern, it is easy to come up with an example on which the method does poorly in the sense that the actual… submodule is less likely to be faulty if it is connected to more good primary outputs. Initially, candidates are ordered according to their relationships with
Price comparisons on the internet based on computational intelligence.
Kim, Jun Woo; Ha, Sung Ho
2014-01-01
Information-intensive Web services such as price comparison sites have recently been gaining popularity. However, most users including novice shoppers have difficulty in browsing such sites because of the massive amount of information gathered and the uncertainty surrounding Web environments. Even conventional price comparison sites face various problems, which suggests the necessity of a new approach to address these problems. Therefore, for this study, an intelligent product search system was developed that enables price comparisons for online shoppers in a more effective manner. In particular, the developed system adopts linguistic price ratings based on fuzzy logic to accommodate user-defined price ranges, and personalizes product recommendations based on linguistic product clusters, which help online shoppers find desired items in a convenient manner.
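The linguistic price ratings described above can be sketched with fuzzy membership functions. The three labels and their breakpoints (50, 150, 300) are illustrative assumptions; the paper's actual membership functions are not given in the abstract.

```python
# Minimal sketch of linguistic price ratings via fuzzy membership.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def linguistic_rating(price):
    """Degrees of membership of `price` in three linguistic labels."""
    cheap = 1.0 if price <= 50 else max(0.0, (150 - price) / 100)
    moderate = triangular(price, 50, 150, 300)
    expensive = 1.0 if price >= 300 else max(0.0, (price - 150) / 150)
    return {"cheap": cheap, "moderate": moderate, "expensive": expensive}
```

A price of 100 is then partly "cheap" and partly "moderate", which is how a user-defined range like "fairly cheap" can match items a crisp price band would exclude.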
Design and realization of intelligent tourism service system based on voice interaction
NASA Astrophysics Data System (ADS)
Hu, Lei-di; Long, Yi; Qian, Cheng-yang; Zhang, Ling; Lv, Guo-nian
2008-10-01
Voice technology is one of the important means of improving the intelligence and humanization of a tourism service system. Combining voice technology, the paper concentrates on application needs and system composition to present an overall framework for an intelligent tourism service system consisting of a presentation layer, a Web services layer, and a tourism application service layer. On this basis, the paper further elaborates the implementation of the system and its key technologies, including intelligent voice interaction technology, seamless integration technology for multiple data sources, location-perception-based guide service technology, and tourism safety control technology. Finally, based on the situation of Nanjing tourism, a prototype of the Tourism Services System is realized.
NASA Astrophysics Data System (ADS)
Vuorinen, Tommi; Korja, Annakaisa
2017-04-01
The FIN-EPOS consortium is a joint community of Finnish national research institutes tasked with operating and maintaining solid-earth geophysical and geological observatories and laboratories in Finland. These national research infrastructures (NRIs) seek to join the EPOS research infrastructure (EPOS RI) and further pursue Finland's participation as a founding member in EPOS ERIC (European Research Infrastructure Consortium). Current partners of FIN-EPOS are the University of Helsinki (UH), the University of Oulu (UO), the Finnish Geospatial Research Institute (FGI) of the National Land Survey (NLS), the Finnish Meteorological Institute (FMI), the Geological Survey of Finland (GTK), CSC - IT Center for Science, and MIKES Metrology at VTT Technical Research Centre of Finland Ltd. The consortium is hosted by the Institute of Seismology, UH (ISUH). The primary purpose of the consortium is to act as a coordinating body between the various NRIs and the EPOS RI. FIN-EPOS engages in planning and development of the national EPOS RI and will provide support in the EPOS implementation phase (IP) for the partner NRIs. FIN-EPOS also promotes awareness of EPOS in Finland and is open to new partner NRIs that would benefit from participating in EPOS. The consortium additionally seeks to advance solid Earth science education, technologies and innovations in Finland, and is actively engaged in Nordic co-operation and collaboration among solid Earth RIs. The main short-term objective of FIN-EPOS is to make the Finnish geoscientific data provided by NRIs interoperable with the Thematic Core Services (TCS) in the EPOS IP. Consortium partners commit to applying and following the metadata and data format standards provided by EPOS. FIN-EPOS will also provide a national Finnish-language web portal where users are identified and their user rights for EPOS resources are defined.
Using URIs to effectively transmit sensor data and metadata
NASA Astrophysics Data System (ADS)
Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise; Gardner, Thomas
2017-04-01
Autonomous ocean observation is massively increasing the number of sensors in the ocean. Accordingly, the continuing increase in the datasets produced makes selecting sensors that are fit for purpose a growing challenge. Decision making on selecting quality sensor data is based on the sensor's metadata, i.e. manufacturer specifications, history of calibrations, etc. The Open Geospatial Consortium (OGC) has developed the Sensor Web Enablement (SWE) standards to facilitate integration and interoperability of sensor data and metadata. The World Wide Web Consortium (W3C) Semantic Web technologies enable machine comprehensibility, promoting sophisticated linking and processing of data published on the web. Linking a sensor's data and metadata according to the above-mentioned standards can present practical difficulties, because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. Our approach addresses these practical difficulties by uniquely identifying sensor and platform models and instances through URIs, which resolve via content negotiation to either OGC's sensor metadata language, SensorML, or W3C's Linked Data. Data transmitted by a sensor incorporate the sensor's unique URI to refer to its metadata. Sensor and platform model URIs and descriptions are created and hosted by the British Oceanographic Data Centre (BODC) linked systems service. The sensor owner creates the sensor and platform instance URIs prior to and during sensor deployment, through an updatable web form, the Sensor Instance Form (SIF). SIF enables model and instance URI association, but also platform and sensor linking. The use of URIs, which are dynamically generated through the SIF, offers both practical and economical benefits to the implementation of SWE and Linked Data standards in near-real-time systems. Data can be linked to metadata dynamically in situ while saving on the costs associated with the transmission of long metadata descriptions.
The transmission of short URIs also enables the implementation of standards on systems where it is impractical, such as legacy hardware.
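The content negotiation this abstract describes can be sketched as a small routing function: one sensor URI serves SensorML to clients that request XML, and Linked Data (JSON-LD) otherwise. The media-type-to-representation mapping below is an illustrative assumption, not BODC's actual implementation.

```python
# Minimal sketch of Accept-header content negotiation for a sensor URI.
REPRESENTATIONS = {
    "application/xml":     "sensorml",
    "application/ld+json": "linked-data",
}

def negotiate(accept_header):
    """Pick a representation for an Accept header; default to Linked Data."""
    for entry in accept_header.split(","):
        media_type = entry.split(";")[0].strip()  # drop q-values
        if media_type in REPRESENTATIONS:
            return REPRESENTATIONS[media_type]
    return "linked-data"
```

A production resolver would also honour q-value ordering; this sketch only shows why one short URI can stand in for both metadata forms in a bandwidth-constrained telemetry stream.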
International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data.
Zhang, Junjun; Baran, Joachim; Cros, A; Guberman, Jonathan M; Haider, Syed; Hsu, Jack; Liang, Yong; Rivkin, Elena; Wang, Jianxin; Whitty, Brett; Wong-Erasmus, Marie; Yao, Long; Kasprzyk, Arek
2011-01-01
The International Cancer Genome Consortium (ICGC) is a collaborative effort to characterize genomic abnormalities in 50 different cancer types. To make this data available, the ICGC has created the ICGC Data Portal. Powered by the BioMart software, the Data Portal allows each ICGC member institution to manage and maintain its own databases locally, while seamlessly presenting all the data in a single access point for users. The Data Portal currently contains data from 24 cancer projects, including ICGC, The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the Tumor Sequencing Project. It consists of 3478 genomes and 13 cancer types and subtypes. Available open access data types include simple somatic mutations, copy number alterations, structural rearrangements, gene expression, microRNAs, DNA methylation and exon junctions. Additionally, simple germline variations are available as controlled access data. The Data Portal uses a web-based graphical user interface (GUI) to offer researchers multiple ways to quickly and easily search and analyze the available data. The web interface can assist in constructing complicated queries across multiple data sets. Several application programming interfaces are also available for programmatic access. Here we describe the organization, functionality, and capabilities of the ICGC Data Portal.
WaterML: an XML Language for Communicating Water Observations Data
NASA Astrophysics Data System (ADS)
Maidment, D. R.; Zaslavsky, I.; Valentine, D.
2007-12-01
One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there, and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
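Consuming a WaterML-style time series can be sketched with a standard XML parser. The element names below are a simplified, hypothetical subset; real WaterML responses are namespaced and considerably richer.

```python
import xml.etree.ElementTree as ET

# A toy WaterML-like document: one variable, a list of timestamped values.
DOC = """<timeSeries>
  <variable>streamflow</variable>
  <values>
    <value dateTime="2007-10-01T00:00:00">12.4</value>
    <value dateTime="2007-10-01T01:00:00">11.9</value>
  </values>
</timeSeries>"""

def parse_series(xml_text):
    """Return (variable name, [(timestamp, value), ...])."""
    root = ET.fromstring(xml_text)
    variable = root.findtext("variable")
    values = [(v.get("dateTime"), float(v.text))
              for v in root.find("values")]
    return variable, values
```

The appeal of a shared schema is exactly this: one parser serves every publishing agency, instead of one per agency format.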
Using business intelligence for efficient inter-facility patient transfer.
Haque, Waqar; Derksen, Beth Ann; Calado, Devin; Foster, Lee
2015-01-01
In the context of inter-facility patient transfer, a transfer operator must be able to objectively identify a destination which meets the needs of a patient while keeping in mind each facility's limitations. We propose a solution which uses Business Intelligence (BI) techniques to analyze data related to healthcare infrastructure and services, and provides a web-based system to identify the optimal destination(s). The proposed inter-facility transfer system uses a single data warehouse with an Online Analytical Processing (OLAP) cube built on top that supplies analytical data to multiple reports embedded in web pages. The data visualization tool includes map-based navigation of the health authority as well as an interactive filtering mechanism which finds facilities meeting the selected criteria. The data visualization is backed by an intuitive data-entry web form which safely constrains the data, ensuring consistency and a single version of the truth. With this interactive solution, the overall time required to identify the destination for inter-facility transfers is reduced from hours to a few minutes.
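The interactive filtering step described above can be sketched as a predicate over facility records: return only facilities that offer every required service and have capacity. The field names and sample data are illustrative assumptions, not the paper's actual schema.

```python
# Minimal sketch of criteria-based destination filtering.
FACILITIES = [
    {"name": "General A",   "services": {"icu", "ct", "dialysis"}, "beds_free": 2},
    {"name": "Community B", "services": {"ct"},                    "beds_free": 5},
    {"name": "Regional C",  "services": {"icu", "ct"},             "beds_free": 0},
]

def find_destinations(required_services, min_beds=1, facilities=FACILITIES):
    """Facilities offering all required services with beds available."""
    return [f["name"] for f in facilities
            if required_services <= f["services"]
            and f["beds_free"] >= min_beds]
```

In the real system this predicate would be evaluated against OLAP cube data rather than an in-memory list, but the decision logic is the same.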
An Intelligent Web-Based System for Diagnosing Student Learning Problems Using Concept Maps
ERIC Educational Resources Information Center
Acharya, Anal; Sinha, Devadatta
2017-01-01
The aim of this article is to propose a method for development of concept map in web-based environment for identifying concepts a student is deficient in after learning using traditional methods. Direct Hashing and Pruning algorithm was used to construct concept map. Redundancies within the concept map were removed to generate a learning sequence.…
Bae, Jeongyee
2013-04-01
The purpose of this project was to develop an international web-based expert system using principles of artificial intelligence and user-centered design for the management of mental health by Korean emigrants. Using this system, anyone with computer access can reach the system via the web. Our design process utilized principles of user-centered design with 4 phases: needs assessment, analysis, design/development/testing, and application release. A survey was done with 3,235 Korean emigrants. Focus group interviews were also conducted. Survey and analysis results guided the design of the web-based expert system. With this system, anyone can check their mental health status by themselves using a personal computer. The system analyzes facts based on answers to automated questions, and suggests solutions accordingly. A history-tracking mechanism enables monitoring and future analysis. In addition, this system will include intervention programs to promote mental health status. This system is interactive and accessible to anyone in the world. It is expected that this management system will contribute to Korean emigrants' mental health promotion and allow researchers and professionals to share information on mental health.
Design and development of an IoT-based web application for an intelligent remote SCADA system
NASA Astrophysics Data System (ADS)
Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long
2018-03-01
This paper presents the design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data are sent to the client using the TCP/IP protocol, which supports mobile devices with different screen sizes. Users can issue instructions immediately without being present to check conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains an analog input/output and a basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieves efficient power management.
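The sensing-data path described above, readings serialized and pushed to the client over TCP/IP, can be sketched minimally. The field names and JSON framing below are assumptions for illustration; a local socket pair stands in for the real server link.

```python
import json
import socket

# Hedged sketch of the SCADA sensing-data path: a reading is serialized to
# JSON and sent over a TCP/IP byte stream. A local socketpair stands in for
# the real server/client connection; field names are invented.

reading = {"sensor": "feeder-3", "voltage_v": 219.7, "current_a": 12.4}

server, client = socket.socketpair()  # stand-in for a real TCP connection
server.sendall(json.dumps(reading).encode("utf-8") + b"\n")  # newline-framed

received = json.loads(client.makefile().readline())
print(received["voltage_v"])
```

Framing each reading on its own line keeps the client-side parser trivial; a production system would add authentication and reconnection handling.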
An intelligent tool for activity data collection.
Sarkar, A M Jehad
2011-01-01
Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data, not only for evaluating their performance but also for training the systems to function better. However, a tremendous amount of effort is required to set up an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collection. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool that can provide such datasets inexpensively without physically deploying testbeds. It can be used as an inexpensive alternative technique for collecting human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment, as well as a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs; the activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets.
The Transcription Factor Encyclopedia
2012-01-01
Here we present the Transcription Factor Encyclopedia (TFe), a new web-based compendium of mini review articles on transcription factors (TFs) that is founded on the principles of open access and collaboration. Our consortium of over 100 researchers has collectively contributed over 130 mini review articles on pertinent human, mouse and rat TFs. Notable features of the TFe website include a high-quality PDF generator and web API for programmatic data retrieval. TFe aims to rapidly educate scientists about the TFs they encounter through the delivery of succinct summaries written and vetted by experts in the field. TFe is available at http://www.cisreg.ca/tfe. PMID:22458515
NASA Astrophysics Data System (ADS)
Yang, C.; Wong, D. W.; Phillips, T.; Wright, R. A.; Lindsey, S.; Kafatos, M.
2005-12-01
As a partnership of the Center for Earth Observing and Space Research (CEOSR) at George Mason University (GMU), the Virginia Department of Transportation (VDOT), the Bureau of Transportation Statistics at the Department of Transportation (BTS/DOT), and Intergraph, we established Transportation Framework Data Services using the Open Geospatial Consortium (OGC) Web Feature Service (WFS) Specification to enable the sharing of transportation data at the federal level (BTS/DOT), at the state level (VDOT), and in industry (Intergraph). CEOSR develops WFS solutions and technical documents in cooperation with Intergraph and disseminates them through the partners. The WFS is integrated with operational geospatial systems at CEOSR and VDOT. The GeoMedia WebMap WFS toolkit is used with software and technical support from Intergraph, and the ESRI ArcIMS WFS connector is used under GMU's campus license of ESRI products. Tested solutions are integrated with framework data service operational systems, including 1) CEOSR's interoperable geospatial information services, FGDC clearinghouse node, Geospatial One Stop (GOS) portal, and WMS services; 2) VDOT's state transportation data and GIS infrastructure; and 3) BTS/DOT's national transportation data.
The project: 1) develops and deploys an operational OGC WFS 1.1 interface at CEOSR for registering with the FGDC/GOS Portal and responding to Web POST requests for transportation framework data as listed in Table 1; 2) builds a WFS service that can return data conforming to the drafted ANSI/INCITS L1 Standard (when available) for each identified theme, in the format given by OGC Geography Markup Language (GML) Version 3.0 or higher; 3) integrates the OGC WFS with CEOSR's clearinghouse nodes; 4) establishes a formal partnership to develop and share WFS-based geospatial interoperability technology among GMU, VDOT, BTS/DOT, and Intergraph; and 5) develops WFS-based solutions and technical documents using the GeoMedia WebMap WFS toolkit. The Web Feature Service is demonstrated to be more efficient for sharing vector data and supports direct Internet access to transportation data. The developed WFS solutions also enhance the interoperable service provided by CEOSR through the FGDC clearinghouse node and the GOS Portal.
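A WFS 1.1 interface of the kind described answers XML-encoded GetFeature requests sent by HTTP POST. The sketch below shows the general shape of such a request; the endpoint URL and the feature type name "Trans_RoadSeg" are invented stand-ins for one of the transportation framework themes, not names taken from the project.

```python
import urllib.request

# A minimal OGC WFS 1.1 GetFeature request body, POSTed to a WFS endpoint.
# The type name "Trans_RoadSeg" is a hypothetical transportation theme.

GETFEATURE = """<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="1.1.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    outputFormat="text/xml; subtype=gml/3.1.1">
  <wfs:Query typeName="Trans_RoadSeg"/>
</wfs:GetFeature>"""

def post_getfeature(url, body=GETFEATURE):
    """POST the request to a WFS endpoint and return the GML response text."""
    req = urllib.request.Request(
        url, data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# post_getfeature("http://example.org/wfs")  # would return GML 3 features
```

The response is a GML feature collection, which is what makes the service interoperable across the GeoMedia and ArcIMS implementations mentioned above.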
Wolfe, Christopher R.; Reyna, Valerie F.; Widmer, Colin L.; Cedillos, Elizabeth M.; Fisher, Christopher R.; Brust-Renck, Priscila G.; Weil, Audrey M.
2014-01-01
Background Many healthy women consider genetic testing for breast cancer risk, yet BRCA testing issues are complex. Objective To determine whether an intelligent tutor, BRCA Gist, grounded in fuzzy-trace theory (FTT), increases gist comprehension and knowledge about genetic testing for breast cancer risk, improving decision-making. Design In two experiments, 410 healthy undergraduate women were randomly assigned to one of three groups: the first completed an online module using a web-based tutoring system (BRCA Gist) that uses artificial intelligence technology, the second read highly similar content from the NCI web site, and the third completed an unrelated tutorial. Intervention BRCA Gist applied fuzzy-trace theory and was designed to help participants develop gist comprehension of topics relevant to decisions about BRCA genetic testing, including how breast cancer spreads, inherited genetic mutations, and base rates. Measures We measured content knowledge, gist comprehension of decision-relevant information, interest in testing, and genetic risk and testing judgments. Results Control knowledge scores ranged from 54% to 56%; NCI improved significantly to 65% and 70%; and BRCA Gist improved significantly more, to 75% and 77%, p<.0001. BRCA Gist scored higher on gist comprehension than NCI and control, p<.0001. The control genetic risk-assessment mean was 48% correct; BRCA Gist (61%) and NCI (56%) were significantly higher, p<.0001. BRCA Gist participants recommended less testing for women without risk factors (not good candidates) (24% and 19%) than controls (50% in both experiments) and NCI (32%, Experiment 2), p<.0001. BRCA Gist testing interest was lower than controls, p<.0001. Limitations BRCA Gist has not been tested with older women from diverse groups. Conclusions Intelligent tutors such as BRCA Gist are scalable, cost-effective ways of helping people understand complex issues, improving decision-making. PMID:24829276
Innovating the Standard Procurement System Utilizing Intelligent Agent Technologies
1999-12-01
[Table of contents excerpt] C. Standard Procurement System: 1. Overview; 2. SPS Functions; 3. SPS Advantages; 4. SPS Disadvantages; 5. SPS Summary. D. Procurement Process Innovation Results. E. Intelligent Agent (IA) Technology: 1. Overview; 2. Advantages; 3. Disadvantages. F. Electronic Mall (EMALL), GSA Advantage, web invoicing, Electronic Funds Transfer (EFT), International Merchant Purchase Authorization Card (IMPAC).
The FaceBase Consortium: A comprehensive program to facilitate craniofacial research
Hochheiser, Harry; Aronow, Bruce J.; Artinger, Kristin; Beaty, Terri H.; Brinkley, James F.; Chai, Yang; Clouthier, David; Cunningham, Michael L.; Dixon, Michael; Donahue, Leah Rae; Fraser, Scott E.; Hallgrimsson, Benedikt; Iwata, Junichi; Klein, Ophir; Marazita, Mary L.; Murray, Jeffrey C.; Murray, Stephen; de Villena, Fernando Pardo-Manuel; Postlethwait, John; Potter, Steven; Shapiro, Linda; Spritz, Richard; Visel, Axel; Weinberg, Seth M.; Trainor, Paul A.
2012-01-01
The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements, and sequencing; generate anatomical and molecular atlases; provide human normative facial data and other phenotypes; conduct follow-up studies of a completed genome-wide association study; generate independent data on the genetics of craniofacial development; build repositories of animal models and of human samples and data for community access and analysis; and develop software tools and animal models for analyzing, functionally testing, and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community. PMID:21458441
Intelligent personal health record: experience and open issues.
Luo, Gang; Tang, Chunqiang; Thomas, Selena B
2012-08-01
Web-based personal health records (PHRs) are being deployed on a massive scale. To improve PHRs' capability and usability, we previously proposed the concept of the intelligent PHR (iPHR). By introducing and extending expert system technology and Web search technology into the PHR domain, iPHR can automatically provide users with personalized healthcare information to facilitate their activities of daily living. Our iPHR system currently provides three functions: guided search for disease information, recommendation of home nursing activities, and recommendation of home medical products. This paper discusses our experience with iPHR as well as the open issues, including both enhancements to the existing functions and potential new functions. We outline some preliminary solutions, though a main purpose of this paper is to stimulate future research in the area of consumer health informatics.
Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard
2003-12-01
Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.
An open source Java web application to build self-contained Web GIS sites
NASA Astrophysics Data System (ADS)
Zavala Romero, O.; Ahmed, A.; Chassignet, E.; Zavala-Hidalgo, J.
2014-12-01
This work describes OWGIS, an open source Java web application that creates Web GIS sites by automatically writing HTML and JavaScript code. OWGIS is configured by XML files that define which layers (geographic datasets) will be displayed on the websites. The project uses several Open Geospatial Consortium standards to request data from typical map servers, such as GeoServer, and is also able to request data from ncWMS servers. The latter allows the display of 4D data stored in the NetCDF file format (widely used for storing environmental model datasets). Features available on sites built with OWGIS include multiple languages, animations, vertical profiles and vertical transects, color palettes, color ranges, and the ability to download data. OWGIS's main users are scientists, such as oceanographers or climate scientists, who store their data in NetCDF files and want to analyze, visualize, share, or compare their data using a website.
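Requesting a map image from a server such as GeoServer or ncWMS, as sites built with OWGIS do, amounts to composing an OGC WMS GetMap URL. The sketch below shows the general shape of such a request; the endpoint and layer name are invented for illustration.

```python
from urllib.parse import urlencode

# Sketch of the kind of WMS GetMap request a Web GIS site issues to a map
# server (endpoint and layer name are hypothetical).

def getmap_url(base, layer, bbox, width=512, height=512):
    """Compose an OGC WMS 1.3.0 GetMap URL for one layer."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.org/geoserver/wms", "ocean:sst", (-98, 18, -80, 31))
print(url)
```

For ncWMS-backed 4D layers, additional TIME and ELEVATION parameters select the slice of the NetCDF dataset to render.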
A strategy for providing electronic library services to members of the AGATE Consortium
NASA Technical Reports Server (NTRS)
Thompson, J. Garth
1995-01-01
In November 1992, NASA Administrator Daniel Goldin established a Task Force to evaluate the conditions that led to the precipitous decline of the US General Aviation System and to recommend actions needed to re-establish US leadership in General Aviation. The Task Force Report and a report by Dr. Bruce J. Holmes, Manager of the General Aviation/Commuter Office at NASA Langley Research Center, provided the directions for the formation of the Advanced General Aviation Transport Experiments (AGATE), a consortium of government, industry, and universities committed to the revitalization of the US General Aviation Industry. One of the recommendations of the Task Force Report was that 'a central repository of information should be created to disseminate NASA research as well as other domestic and foreign aeronautical research that has been accomplished, is ongoing or is planned... A user friendly environment should be created.' This paper describes technical and logistic issues and recommends a plan for providing technical information to members of the AGATE Consortium. It is recommended that the General Aviation office establish and maintain an electronic literature page on the AGATE server. This page should provide a user-friendly interface to the existing technical report and index servers identified in the report and listed in the Recommendations section. A page should also be provided that gives links to Web resources; a list of specific resources is provided in the Recommendations section. Links should also be provided to a page with tips on searching and to a form for feedback and suggestions from users for other resources. Finally, a page should be maintained that provides pointers to other resources, such as the LaRCsim workstation simulation software, which is available from LaRC at no cost. Development of the Web is very dynamic.
These developments should be monitored regularly by the GA staff, and links to additional resources should be added to the server as they become available. A recommendation should be made to NASA Headquarters to establish logically central access to all of the NASA Technical Libraries, making these resources available both to all NASA employees and to the AGATE Consortium.
1989-10-01
[Abstract not recoverable from the scanned original; a legible fragment notes that the MATMS is a frame-based system with five basic types of objects, among them beliefs.]
1989-03-01
Toys, is a model of the dinosaur Tyrannosaurus Rex. This particular test case is characterized by sharply discontinuous depths varying over a wide range. [Figures 7-10: left and right images of the T. Rex Tinker Toy scene, and the connected contours extracted from each image.]
Intelligence assessments for children with cerebral palsy: a systematic review.
Yin Foo, Rebecca; Guppy, Max; Johnston, Leanne M
2013-10-01
Cerebral palsy (CP) is defined as a primary disorder of posture and movement; however, approximately 45% of children with CP also have an intellectual impairment. Prevalence estimates are limited by a lack of guidelines for intelligence testing. This systematic review aims to identify and examine intelligence assessments for children with CP. Electronic databases (PubMed, PsycINFO, Web of Science, CINAHL, EMBASE, and ERIC) were searched to identify assessments that (1) measured intellectual function, (2) in children aged 4 to 18 years, (3) with CP, and (4) with psychometrics available. Searches yielded 48 assessments, of which nine provided psychometric data for children with CP. The included tests were the Columbia Mental Maturity Scale, the Leiter International Performance Scale, the Peabody Picture Vocabulary Test, the Pictorial Test of Intelligence, the Raven's Coloured Progressive Matrices, the Stanford-Binet Intelligence Scales, the Wechsler Adult Intelligence Scale, the Wechsler Intelligence Scale for Children, and the Wechsler Preschool and Primary Scale of Intelligence. Intelligence assessments in children with CP lack reliability data, consensus regarding validity data, and population-specific norms. Research is required to establish psychometrics for children with CP. For children with higher motor involvement and/or communication and/or visual impairments, multiple options are required to assess intelligence appropriately. © 2013 Mac Keith Press.
Harvesting Intelligence in Multimedia Social Tagging Systems
NASA Astrophysics Data System (ADS)
Giannakidou, Eirini; Kaklidou, Foteini; Chatzilari, Elisavet; Kompatsiaris, Ioannis; Vakali, Athena
As more people adopt tagging practices, social tagging systems tend to form rich knowledge repositories that enable the extraction of patterns reflecting the way content semantics is perceived by web users. This is of particular importance in the case of multimedia content, since the availability of such content on the web is very high and its efficient retrieval using textual annotations or content-based automatically extracted metadata still remains a challenge. It is argued that complementing multimedia analysis techniques with knowledge drawn from web social annotations may facilitate multimedia content management. This chapter focuses on analyzing tagging patterns and combining them with content feature extraction methods, thus generating intelligence from multimedia social tagging systems. Emphasis is placed on using all available "tracks" of knowledge, that is, tag co-occurrence together with semantic relations among tags and low-level features of the content. To this end, a survey of the theoretical background and the adopted practices for the analysis of multimedia social content is presented. A case study from Flickr illustrates the efficiency of the proposed approach.
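The first "track" of knowledge mentioned, tag co-occurrence, can be sketched simply: count how often each pair of tags is applied to the same resource. This is an illustrative sketch, not the chapter's algorithm, and the tag sets below are invented.

```python
from collections import Counter
from itertools import combinations

# Illustrative tag co-occurrence counting over a set of tagged resources,
# from which relatedness between tags can later be estimated.

resources = [
    {"sunset", "beach", "sea"},
    {"beach", "sea", "surf"},
    {"sunset", "city"},
]

cooccurrence = Counter()
for tags in resources:
    # Count each unordered tag pair once per resource it appears in.
    for a, b in combinations(sorted(tags), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence[("beach", "sea")])  # 2: the pair appears in two resources
```

Normalizing these counts (e.g. by tag frequency) yields the similarity scores that can be combined with low-level content features.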
NASA SensorWeb and OGC Standards for Disaster Management
NASA Technical Reports Server (NTRS)
Mandl, Dan
2010-01-01
I. Goal: Enable users to cost-effectively find and create customized data products to help manage disasters: a) on demand; b) with low-cost, non-specialized tools such as Google Earth and browsers; c) with access via an open network but with sufficient security. II. Use standards to interface the various sensors and resultant data: a) wrap sensors in Open Geospatial Consortium (OGC) standards; b) wrap data-processing algorithms and servers with OGC standards; c) use standardized workflows to orchestrate and script the creation of these data products. III. Target the Web 2.0 mass market: a) make it simple and easy to use; b) leverage new capabilities and tools that are emerging; c) improve speed and responsiveness.
Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar
2018-01-30
Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as the Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICT) layer of a Smart Grid are usually conceived as isolated systems, which typically results in rigid hardware architectures that are hard to interoperate, manage, and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that integrate them. The concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined the Web of Energy, is introduced. The purpose of this paper is to propose the use of the Web of Energy, by means of the Actor Model paradigm, to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts in this direction.
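The Actor Model idea the paper applies can be sketched minimally: each actor owns a mailbox and processes messages one at a time, hiding a heterogeneous device behind a uniform message interface. This is a generic illustration of the paradigm, not the paper's infrastructure; the message fields are invented.

```python
import queue
import threading

# Minimal actor: a private mailbox drained by one thread, so each message
# is handled sequentially regardless of how many senders there are.

class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill stops the actor
                break
            self.handler(msg)

    def send(self, msg):
        self.mailbox.put(msg)

readings = []
done = threading.Event()

def meter_handler(msg):
    """Hypothetical smart-meter actor accumulating energy readings."""
    readings.append(msg["kwh"])
    if msg.get("last"):
        done.set()

meter = Actor(meter_handler)
meter.send({"kwh": 1.2})
meter.send({"kwh": 0.9, "last": True})
done.wait(timeout=5)
print(sum(readings))  # 2.1
```

Because every device is wrapped in the same send/handle interface, protocol heterogeneity stays inside each actor's handler, which is the homogenization property the paper attributes to the Web of Things.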
A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don
2011-01-01
A SensorWeb is a set of sensors, which can consist of ground, airborne, and space-based sensors, interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames, and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading, and execution of new algorithms for processing sensor data. The web service, along with the user interface, follows the Open Geospatial Consortium (OGC) standard called the Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
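WCPS, the OGC standard named above, defines a small expression language over coverages, so "loading a new algorithm" can be as simple as submitting a query string. The sketch below builds one such query; the coverage and band names are invented, not identifiers from the EO-1 or HyspIRI testbeds.

```python
# Sketch of a WCPS query of the general form "for c in (coverage) return
# encode(expression, format)". Coverage and band names are hypothetical.

def wcps_band_ratio(coverage, band_a, band_b, fmt="image/tiff"):
    """Build a WCPS query computing a per-pixel normalized band ratio."""
    return (
        f"for c in ({coverage}) "
        f'return encode((c.{band_a} - c.{band_b}) / (c.{band_a} + c.{band_b}), "{fmt}")'
    )

query = wcps_band_ratio("EO1_ALI_SCENE", "nir", "red")
print(query)
```

Because the algorithm travels as text, the same query can run against the flight testbed or the ground cloud without redeploying software, which is the point of making the processing service co-resident with the sensors.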
2005-06-01
• need for a user-defined dashboard
• automated monitoring of web data sources
• task-driven data aggregation and display
Working toward automated processing of task, resource, and intelligence updates.
Final report: PATTON Alliance gazetteer evaluation project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleakly, Denise Rae
2007-08-01
In 2005 the National Ground Intelligence Center (NGIC) proposed that the PATTON Alliance provide assistance in evaluating and obtaining the Integrated Gazetteer Database (IGDB), developed for the Naval Space Warfare Command Research group (SPAWAR) under Advanced Research and Development Activity (ARDA) funds by MITRE Inc. and fielded to the text-based search tool GeoLocator, currently in use by NGIC. We met with the developers of GeoLocator and identified their requirements for a better gazetteer. We then validated those requirements by reviewing the technical literature, meeting with other members of the intelligence community (IC), and talking with both the United States Geological Survey (USGS) and the National Geospatial-Intelligence Agency (NGA), the authoritative sources for official geographic name information. We thus identified 12 high-level requirements from users and the broader intelligence community. The IGDB satisfies many of these requirements; we identified gaps and proposed ways of closing them. Three important needs have not been addressed but are critical future needs for the broader intelligence community: standardization of gazetteer data; a web feature service for gazetteer information that is maintained by NGA and USGS but accessible to users; and a common forum that brings together IC stakeholders and federal agency representatives to provide input to these activities over the next several years. Establishing a robust gazetteer web feature service that is available to all IC users may go a long way toward resolving the gazetteer needs within the IC. Without a common forum to provide input and feedback, community adoption may take significantly longer than anticipated, with resulting risks to the war fighter.
Infrastructure for the Geospatial Web
NASA Astrophysics Data System (ADS)
Lake, Ron; Farley, Jim
Geospatial data and geoprocessing techniques are now directly linked to business processes in many areas. Commerce, transportation and logistics, planning, defense, emergency response, health care, asset management, and many other domains leverage geospatial information and the ability to model these data to achieve increased efficiencies and to develop better, more comprehensive decisions. However, the ability to deliver geospatial data and the capacity to process geospatial information effectively in these domains depend on infrastructure technology that facilitates basic operations such as locating data, publishing data, keeping data current, and notifying subscribers and others whose applications and decisions depend on this information when changes are made. This chapter introduces the notion of infrastructure technology for the Geospatial Web. Specifically, the Geography Markup Language (GML) and registry technology developed using the ebRIM specification from the OASIS consortium are presented as atomic infrastructure components in a working Geospatial Web.
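GML, the first infrastructure component named above, encodes features as namespaced XML. A minimal point feature can be assembled with the standard library; this fragment is illustrative only, and the coordinate values are invented.

```python
import xml.etree.ElementTree as ET

# Illustrative construction of a minimal GML point element. Real GML
# documents wrap such geometry in application-schema feature elements.

GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

point = ET.Element(f"{{{GML}}}Point", srsName="EPSG:4326")
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "38.89 -77.03"

print(ET.tostring(point, encoding="unicode"))
```

The srsName attribute ties the coordinates to a spatial reference system, which is what lets registries and subscribers interpret published geometries consistently.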
P-MartCancer: Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.
Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D
2017-11-01
P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses, driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to, and the capability to analyze, multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html) and is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 American Association for Cancer Research.
The QuakeSim Project: Web Services for Managing Geophysical Data and Applications
NASA Astrophysics Data System (ADS)
Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet
2008-04-01
We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.
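The archival data services described above follow Open Geospatial Consortium interfaces such as the Web Map Service (WMS). As a hedged illustration of what a client call to such a service looks like (the endpoint and layer name below are hypothetical; the query parameters follow the OGC WMS specification), a GetMap request can be composed as:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width=600, height=400,
                   srs="EPSG:4326", fmt="image/png", version="1.1.1"):
    """Compose a WMS 1.1.1 GetMap request URL.

    base_url and layer names are placeholders; the parameter names
    (SERVICE, REQUEST, LAYERS, BBOX, ...) follow the OGC WMS spec.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": version,
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "SRS": srs,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("http://example.org/wms", ["faults"],
                     (-125.0, 32.0, -114.0, 42.0))
```

A portal such as QuakeSim can assemble many such requests and overlay the returned map tiles in the browser.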
An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned
NASA Technical Reports Server (NTRS)
Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)
2000-01-01
This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW server, the Sun Microsystems Java Servlet Development Kit, and a C language shared-library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the state of California. At least one Web Map-enabled server will be installed in every one of the state's 58 counties. This server will form the basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive commercial offerings.
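On the server side, the core of a Web Map front end is parsing the query string and routing on the REQUEST parameter. The sketch below (a simplified stand-in, not the paper's Java servlet; parameter names follow the OGC WMS specification, which treats them case-insensitively) shows that dispatch step:

```python
from urllib.parse import parse_qs

def dispatch_wms(query_string):
    """Parse a WMS query string and route on REQUEST, as a servlet
    front end might. Handler return values here are placeholders."""
    params = {k.upper(): v[0] for k, v in parse_qs(query_string).items()}
    request = params.get("REQUEST", "")
    if request == "GetCapabilities":
        return ("capabilities", params)
    if request == "GetMap":
        required = ["LAYERS", "BBOX", "WIDTH", "HEIGHT", "FORMAT"]
        missing = [p for p in required if p not in params]
        if missing:
            return ("error", {"missing": missing})
        return ("map", params)  # a real server renders from the datastore
    return ("error", {"unknown": request})

kind, info = dispatch_wms("request=GetMap&layers=counties&bbox=-125,32,-114,42"
                          "&width=600&height=400&format=image/png")
```

In the paper's architecture, the "map" branch would call through the C shared library into the spatial datastore to render the image.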
Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo
2016-08-01
The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data set), and the necessary COBRA software services, as well as the peer review of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed an agreement, and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and DSS), and Research (radiomics). The COBRA Storage System (C-SS) is not time-consuming because, thanks to the use of "brokers", data can be extracted directly from each center's own storage systems through a connection with a structured query language database (SQL-DB), Microsoft Access®, FileMaker Pro®, or Microsoft Excel®. The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy-protecting because it will never expose data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution, while learning algorithms and proposed predictive models are shared in common. Setting up a consortium is a feasible and practicable way to create an international, multi-system data sharing arrangement. The COBRA C-SS appears to be well accepted by all involved parties, primarily because it does not disturb each center's own data storage technologies, procedures, and habits.
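The "on-purpose data projection" idea can be sketched in a few lines: the broker only ever projects an approved list of non-identifying columns, whatever the caller asks for. This is a minimal illustration with a hypothetical schema and SQLite standing in for the center's SQL-DB, Access, FileMaker, or Excel store:

```python
import sqlite3

# Hypothetical local schema; column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE patients (
    patient_name TEXT, birth_date TEXT,   -- identifying: never projected
    age_at_treatment INTEGER, dose_gy REAL, toxicity_grade INTEGER)""")
conn.executemany("INSERT INTO patients VALUES (?,?,?,?,?)", [
    ("Alice Doe", "1955-02-01", 61, 34.0, 1),
    ("Bob Roe",   "1948-07-12", 68, 40.5, 2),
])

# The broker projects only the approved, non-identifying columns.
APPROVED = ("age_at_treatment", "dose_gy", "toxicity_grade")

def project(columns):
    """Return rows restricted to the intersection of the request
    with the approved projection - identifying fields are dropped."""
    requested = [c for c in columns if c in APPROVED]
    cur = conn.execute(f"SELECT {', '.join(requested)} FROM patients")
    return cur.fetchall()

rows = project(["patient_name", "dose_gy", "toxicity_grade"])
```

Even though the caller asked for `patient_name`, the projection silently drops it, which is the privacy property the C-SS architecture relies on.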
Furthermore, the method preserves the privacy of all patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible only to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1,000 patients collectively. 2) Members contribute code and get updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences cover strategizing, design, and code reviews. 3) Federated databases have been successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace).
Conclusion: RO data sharing can be, and has been, effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/ . John Wong: SRA from Elekta; Todd McNutt: SRA from Elekta; Michael Bowers: funded by Elekta.
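The federated dose-toxicity query can be sketched as: run the same query at every member's store and pool the de-identified rows. In this illustration (hypothetical table and column names; two in-memory SQLite databases stand in for the remote Oncospace instances):

```python
import sqlite3

def make_site_db(rows):
    """Each consortium member keeps its own Oncospace-style store;
    an in-memory SQLite database stands in for a remote site here."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE dvh (dose_gy REAL, toxicity INTEGER)")
    db.executemany("INSERT INTO dvh VALUES (?,?)", rows)
    return db

site_a = make_site_db([(20.0, 0), (45.0, 1)])
site_b = make_site_db([(30.0, 0), (50.0, 1), (55.0, 1)])

def federated_query(sites, sql):
    """Run the same query at every site and pool the de-identified
    result rows, as in the JHU/UW dose-toxicity analysis."""
    pooled = []
    for db in sites:
        pooled.extend(db.execute(sql).fetchall())
    return pooled

combined = federated_query([site_a, site_b],
                           "SELECT dose_gy, toxicity FROM dvh")
```

Because each site keeps PHI in its own single, owner-only table, only analysis-ready rows ever cross institutional boundaries.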
78 FR 64048 - Intelligent Transportation Systems Program Advisory Committee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
.... Members of the public who wish to participate in the web conference must request approval from Mr. Stephen... Transportation, Research and Innovative Technology Administration, ITS Joint Program Office, Attention: Stephen...
78 FR 9748 - Removal of Postal Product
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... function into Intelligent Mail barcode (IMb) Tracing, which is available at no fee as part of the classes... Commission's Web site ( http://www.prc.gov ). James F. Callow is designated as the Public Representative to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-25
... Mitchell, Office of Intelligence and Analysis (OIA), TSA-10, Transportation Security Administration, 601... Management System (FDMS) Web page at http://www.regulations.gov ; (2) Accessing the Government Printing...
Choi, Okkyung; Han, SangYong
2007-01-01
Ubiquitous computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Although research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and on rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based search as well as semantics-based search, supporting the combination of the electronic space and the physical space into one and thereby making possible both real-time search for web services and the construction of efficient web services.
NASA Astrophysics Data System (ADS)
Boler, F.; Meertens, C.
2012-04-01
The UNAVCO Data Center in Boulder, Colorado, archives for preservation and distributes geodesy data and products in the GNSS, InSAR, and LiDAR domains to the scientific and education community. The GNSS data, which in addition to geodesy are useful for tectonic, volcanologic, ice mass, glacial isostatic adjustment, meteorological and other studies, come from 2,500 continuously operating stations and 8,000 survey-mode observation points around the globe that are operated by over 100 U.S. and international members of the UNAVCO consortium. SAR data, which are in many ways complementary to the GNSS data collection, have been acquired in concert with the WInSAR Consortium activities and with EarthScope, with a focus on the western United States. UNAVCO also holds a growing collection of terrestrial laser scanning data. Several partner US geodesy data centers, along with UNAVCO, have developed and are in the process of implementing the Geodesy Seamless Archive Centers, a web-services-based technology to facilitate the exchange of metadata and delivery of data and products to users. These services utilize a repository layer implemented at each data center, and a service layer to identify and present any data center-specific services and capabilities, allowing simplified vertical federation of metadata from independent data centers. UNAVCO also has built web services for SAR data discovery and delivery, and will partner with other SAR data centers and institutions to provide access for InSAR scientists to SAR data and ancillary data sets, web services to produce interferograms, and mechanisms to archive and distribute resulting higher-level products. Improved access to LiDAR data from space-based, airborne, and terrestrial platforms through utilization of web services is similarly under development.
These efforts in cyberinfrastructure, while initially aimed at intra-domain data sharing and providing products for research and education, are envisioned as potentially serving as the basis for leveraging integrated access across a broad set of Earth science domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downing, M.
America's desire for energy independence places a new demand on alternative fuel production. Additional interest and emphasis are being placed on alternatives such as solar, wind, biofuels and nuclear energy. The nuclear fuel production option brings a new look at risk and residual waste management for a number of communities that have traditionally remained outside the energy debate. With the Federal requirements for environmental justice and public participation in energy and environmental decision-making, proponents of alternative energy production facilities will find themselves participating in discussions of risk, production, storage and disposal of hazardous materials and waste matters with low-income and minority members in communities where these facilities are located or wish to locate. The fundamental principle of environmental justice is that all residents should have meaningful and intelligent participation in all aspects of environmental decision-making that could affect their community. Impacted communities must have the resources and ability to effectively marshal data and other information in order to make informed and intelligent decisions. Traditionally, many low-income and minority communities have lacked access to the required information, decision-makers and technical advisers to make informed decisions with respect to various risks that accompany alternative energy production, hazardous materials storage and nuclear waste management. In order to provide the necessary assistance to these communities, the Departments of Energy and Agriculture have teamed with others to create the Alternative Energy Consortium.
The Alternative Energy Consortium is a collaboration of non-profit organizations, Federal agencies, Historically Black Colleges and Universities and Minority Serving Institutions (HBCU/MSIs), and private sector corporations (energy industry specialists) designed to explore and develop opportunities that empower minorities to own and work in all aspects of the field of alternative energy. The Consortium's primary objectives are to find ways to: - Include minorities in the development and ownership of infrastructure in the alternative energy industry; - Promote research and education programs to inform the public about risks and benefits of various forms of alternative energy; - Build a Mentor/Protege Program between HBCU/MSIs and industry leaders to enhance minority participation in ownership and career success in alternative energy production and distribution. The Consortium will work together to create a process whereby minorities and low income individuals will be recruited, educated, and mentored to maximize alternative energy ownership and job opportunities. Industry specialists and government representatives will work with academicians and others to: 1. research areas and methods where minorities and rural communities can engage in the industry; 2. invest in minorities by serving as mentors to minority serving institutions by offering hands-on experience through apprenticeships; 3. work to identify ownership opportunities for minorities; and 4. work to develop legislation that supports economic development and participation for minorities and rural communities in the industry. To accomplish this goal, the Consortium has set out a three-phase plan. Phase I organized a meeting of professionals to discuss the concept, explore the fundamentals, identify key players, and draft next steps. The group took a critical look at the energy industry: 1) trends, 2) economics, 3) limited number of minorities; and 4) infrastructure. 
Through that process the group identified four areas that would greatly impact economic development for minorities and rural communities: I Energy; II Broadband Communications; III Education; IV Labor Resources. Phase II presented a roundtable panel discussion that continued to refine the Consortium. The goal of these discussions is to produce a well-balanced Consortium committed to working together to produce effective solutions that bridge the gap between alternative energy and minority and rural communities. Phase III is the implementation stage that will put the Consortium's plans into action. This will include the Mentor/Protege Program between HBCU/MSIs and industry leaders, and any additional actions that come out of the Phase II roundtable discussion. Phase III will also include a panel discussion at the State of Environmental Justice in America 2008 Conference in Washington, DC, in March 2008. The Consortium's work should facilitate the siting and management of alternative energy production facilities in communities that include a significant number of minority and/or low-income individuals. This effort should increase America's prospects for energy independence. (authors)
Resource-Bounded Information Acquisition and Learning
2012-05-01
candidate features arrive one at a time, and the learner’s task is to select a ‘best so far’ set of features from streaming features. Krause et al...on Artificial Intelligence. [31] Gatterbauer, Wolfgang . Estimating required recall for successful knowledge acquisition from the web. In Proceedings of...the 15th international conference on World Wide Web (New York, NY, USA, 2006), WWW ’06, ACM, pp. 969– 970. [32] Gatterbauer, Wolfgang . Rules of thumb
NASA Astrophysics Data System (ADS)
Bogden, P.; Partners, S.
2008-12-01
The Web 2.0 has helped globalize the economy and change social interactions, but the full impact on coastal sciences has yet to be realized. The SCOOP program (www.OpenIOOS.org/about/sura.html), an initiative of the Coastal Research Committee of the Southeastern Universities Research Association (SURA), has been using Web 2.0 technologies to create infrastructure for a multi-disciplinary Distributed Coastal Laboratory (DCL). In the spirit of the Web 2.0, SCOOP strives to provide an open-access virtual facility where "virtual visiting" scientists can log in, perform experiments (e.g., evaluate new wetting/drying algorithms in several different inundation models), potentially contribute to the assembly of resources (e.g., leave their algorithms for others), and then move on. The SCOOP prototype has focused on storm surge and waves (the initial science focus), and integrates a real-time data network to evaluate the predictions. The multi-purpose SCOOP components support a sensor-web initiative (www.OOSTethys.org) that is co-led by SURA. SCOOP also includes portals with real-time visualization, workflow configuration and decision-tool prototypes (www.OpenIOOS.org), powered by distributed computing resources from multiple universities across the nation (www.sura.org/SURAgrid). Based on our experience, we propose three key ingredients for initiatives to have the biggest impact on coastal science: (1) standards, (2) working prototypes and (3) communities of interest. We strongly endorse the Open Geospatial Consortium - a geospatial analog of the World Wide Web consortium - and other international consensus-standards bodies that engage government, private sector and academic involvement. But these standards are often highly complex, which can be an impediment to their use. We have overcome such hurdles with the second key ingredient: a focused working prototype. 
The prototype should include guides and resources that make it easy for others to apply, test, and revise the prototype, all without need to understand the standards in their overwhelming complexity. In addition, the prototype should support direct involvement of the third key ingredient: communities of interest that assess functional relevance. We expect that any two of these ingredients alone, without the third, will severely limit applicability and impact of any initiative.
System Interface for an Integrated Intelligent Safety System (ISS) for Vehicle Applications
Hannan, Mahammad A.; Hussain, Aini; Samad, Salina A.
2010-01-01
This paper deals with the interface-relevant activity of a vehicle integrated intelligent safety system (ISS) that includes an airbag deployment decision system (ADDS) and a tire pressure monitoring system (TPMS). A program is developed in LabWindows/CVI, using C for prototype implementation. The prototype is primarily concerned with the interconnection between hardware objects such as a load cell, web camera, accelerometer, TPM tire module and receiver module, DAQ card, CPU card and a touch screen. Several safety subsystems, including image processing, weight sensing and crash detection systems, are integrated, and their outputs are combined to yield intelligent decisions regarding airbag deployment. The integrated safety system also monitors tire pressure and temperature. Testing and experimentation with this ISS suggests that the system is unique, robust, intelligent, and appropriate for in-vehicle applications. PMID:22205861
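The decision fusion the abstract describes (combining image processing, weight sensing, and crash detection into an airbag decision, plus TPMS monitoring) can be sketched as follows. This is an illustrative Python stand-in for the LabWindows/CVI prototype, and all thresholds are hypothetical, not those of the published system:

```python
def airbag_decision(occupant_weight_kg, crash_g, image_detects_occupant):
    """Sketch of ADDS decision fusion: deploy only when crash severity
    is high AND an occupant of sufficient weight is confirmed by both
    the load cell and the camera. Thresholds are illustrative only."""
    occupant_present = image_detects_occupant and occupant_weight_kg > 25.0
    severe_crash = crash_g > 20.0
    return occupant_present and severe_crash

def tpms_alert(pressure_kpa, temperature_c):
    """Sketch of a TPMS check against nominal operating ranges
    (range bounds are illustrative assumptions)."""
    return pressure_kpa < 180.0 or temperature_c > 85.0
```

The point of the fusion is that no single sensor triggers deployment: a heavy bag on the seat (weight but no camera confirmation) or a pothole spike (acceleration but no severe-crash signature) should not fire the airbag.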
NCI Launches Proteomics Assay Portal | Office of Cancer Clinical Proteomics Research
In a paper recently published by the journal Nature Methods, Investigators from the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (NCI-CPTAC) announced the launch of a proteomics Assay Portal for multiple reaction monitoring-mass spectrometry (MRM-MS) assays. This community web-based repository for well-characterized quantitative proteomic assays currently consists of 456 unique peptide assays to 282 unique proteins and ser
Evolving telemedicine/ehealth technology.
Ferrante, Frank E
2005-06-01
This paper describes emerging technologies to support a rapidly changing and expanding scope of telemedicine/telehealth applications. Of primary interest here are wireless systems, emerging broadband, nanotechnology, intelligent agent applications, and grid computing. More specifically, the paper describes the changes underway in wireless designs aimed at enhancing security; some of the current work involving the development of nanotechnology applications and research into the use of intelligent agents/artificial intelligence technology to establish what are termed "Knowbots"; and a sampling of the use of Web services, such as grid computing capabilities, to support medical applications. In addition, the expansion of these technologies and the need for cost containment to sustain future health care for an increasingly mobile and aging population is discussed.
Emotional intelligence and affective events in nurse education: A narrative review.
Lewis, Gillian M; Neville, Christine; Ashkanasy, Neal M
2017-06-01
To investigate the current state of knowledge about emotional intelligence and affective events that arise during nursing students' clinical placement experiences. Narrative literature review. CINAHL, MEDLINE, PsycINFO, Scopus, Web of Science, ERIC and APAIS-Health databases published in English between 1990 and 2016. Data extraction from and constant comparative analysis of ten (10) research articles. We found four main themes: (1) emotional intelligence buffers stress; (2) emotional intelligence reduces anxiety associated with end of life care; (3) emotional intelligence promotes effective communication; and (4) emotional intelligence improves nursing performance. The articles we analysed adopted a variety of emotional intelligence models. Using the Ashkanasy and Daus "three-stream" taxonomy (Stream 1: ability models; 2: self-report; 3: mixed models), we found that Stream 2 self-report measures were the most popular followed by Stream 3 mixed model measures. None of the studies we surveyed used the Stream 1 approach. Findings nonetheless indicated that emotional intelligence was important in maintaining physical and psychological well-being. We concluded that developing emotional intelligence should be a useful adjunct to improve academic and clinical performance and to reduce the risk of emotional distress during clinical placement experiences. We call for more consistency in the use of emotional intelligence tests as a means to create an empirical evidence base in the field of nurse education. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wolfe, Christopher R; Reyna, Valerie F; Widmer, Colin L; Cedillos, Elizabeth M; Fisher, Christopher R; Brust-Renck, Priscila G; Weil, Audrey M
2015-01-01
Many healthy women consider genetic testing for breast cancer risk, yet BRCA testing issues are complex. The objective was to determine whether an intelligent tutor, BRCA Gist, grounded in fuzzy-trace theory (FTT), increases gist comprehension and knowledge about genetic testing for breast cancer risk, improving decision making. In 2 experiments, 410 healthy undergraduate women were randomly assigned to 1 of 3 groups: the first completed an online module using a Web-based tutoring system (BRCA Gist) that uses artificial intelligence technology; the second read highly similar content from the National Cancer Institute (NCI) Web site; and the third completed an unrelated tutorial. BRCA Gist applied FTT and was designed to help participants develop gist comprehension of topics relevant to decisions about BRCA genetic testing, including how breast cancer spreads, inherited genetic mutations, and base rates. We measured content knowledge, gist comprehension of decision-relevant information, interest in testing, and genetic risk and testing judgments. Control knowledge scores ranged from 54% to 56%; NCI improved significantly to 65% and 70%, and BRCA Gist improved significantly more, to 75% and 77%, P < 0.0001. BRCA Gist scored higher on gist comprehension than NCI and control, P < 0.0001. The control genetic risk-assessment mean was 48% correct; BRCA Gist (61%) and NCI (56%) were significantly higher, P < 0.0001. BRCA Gist participants recommended less testing for women without risk factors (not good candidates; 24% and 19%) than controls (50%, both experiments) and NCI (32%, experiment 2), P < 0.0001. BRCA Gist testing interest was lower than in controls, P < 0.0001. BRCA Gist has not been tested with older women from diverse groups. Intelligent tutors, such as BRCA Gist, are scalable, cost-effective ways of helping people understand complex issues, improving decision making. © The Author(s) 2014.
Top 10 "Smart" Technologies for Schools.
ERIC Educational Resources Information Center
Fodeman, Doug; Holzberg, Carol S.; Kennedy, Kristen; McIntire, Todd; McLester, Susan; Ohler, Jason; Parham, Charles; Poftak, Amy; Schrock, Kathy; Warlick, David
2002-01-01
Describes 10 smart technologies for education, including voice to text software; mobile computing; hybrid computing; virtual reality; artificial intelligence; telementoring; assessment methods; digital video production; fingerprint recognition; and brain functions. Lists pertinent Web sites for each technology. (LRW)
75 FR 29466 - Prohibition Against Certain Flights Within the Territory and Airspace of Afghanistan
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... access to and the use of intelligence; Operational security (OPSEC), including handling, storage, and...://www.faa.gov/regulations_policies or Accessing the Government Printing Office's Web page at: http://www...
Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal
2014-01-01
With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Services Description Language (WSDL) files and JAR files of the E-utilities of various databases such as the National Center for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a Jmol application, which supports conditional editing. The JAR file is freely available via e-mail from the corresponding author.
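Behind a keyword interface like IASSD's sits a call to NCBI's E-utilities. As an illustration (the search term is a placeholder; the endpoint and parameter names follow NCBI's published E-utilities interface), an ESearch request can be composed without any network access as:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Compose an NCBI E-utilities ESearch request URL of the kind a
    client like IASSD issues behind its keyword search box."""
    return f"{EUTILS}/esearch.fcgi?" + urlencode(
        {"db": db, "term": term, "retmax": retmax})

url = esearch_url("protein", "p53 AND human[orgn]")
```

Fetching `url` returns an XML list of record IDs, which a follow-up EFetch call turns into sequence or structure records for display.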
SensorWeb 3G: Extending On-Orbit Sensor Capabilities to Enable Near Realtime User Configurability
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Tran, Daniel; Davies, Ashley; Sullivan, Don; Ames, Troy;
2010-01-01
This research effort prototypes an implementation of a standard interface, the Web Coverage Processing Service (WCPS), which is an Open Geospatial Consortium (OGC) standard, to enable users to define, test, upload and execute algorithms for on-orbit sensor systems. The user is able to customize on-orbit data products that result from raw data streaming from an instrument. This extends the SensorWeb 2.0 concept, developed under a previous Advanced Information System Technology (AIST) effort, in which web services wrap sensors and a standardized Extensible Markup Language (XML) based scripting workflow language orchestrates processing steps across multiple domains. SensorWeb 3G extends the concept by giving the user controls into the flight software modules associated with on-orbit sensors and thus provides a degree of flexibility that does not presently exist. The successful demonstrations to date will be presented, which include a realistic HyspIRI decadal mission testbed. Furthermore, benchmarks that were run will also be presented, along with planned future demonstrations and benchmark tests. Finally, we conclude with implications for the future and how this concept dovetails with efforts to develop "cloud computing" methods and standards.
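A user-uploaded on-orbit product algorithm is typically simple band math over raw instrument samples. The sketch below is an illustrative stand-in for such an algorithm (NDVI from red and near-infrared bands, in pure Python); WCPS itself expresses this in its own coverage-processing language rather than Python:

```python
def ndvi(red, nir):
    """A user-defined band-math product of the kind SensorWeb 3G lets
    users upload and run against raw instrument data: the normalized
    difference vegetation index, computed pixel by pixel over two
    equal-length band value lists."""
    out = []
    for r, n in zip(red, nir):
        denom = n + r
        out.append((n - r) / denom if denom else 0.0)
    return out

product = ndvi([0.1, 0.2, 0.3], [0.5, 0.6, 0.3])
```

Executing this kind of function in flight software, rather than downlinking raw data for ground processing, is exactly the customization the WCPS interface exposes.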
NASA Astrophysics Data System (ADS)
Signell, R. P.; Camossi, E.
2015-11-01
Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using the NetCDF Markup Language. The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. (Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.)
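The NetCDF Markup Language (NcML) step works by wrapping a run of per-file outputs in a small XML document that presents them as one virtual dataset. As a hedged illustration (file names and the aggregation dimension are placeholders; the element and attribute names follow the NcML schema), such a wrapper can be generated programmatically:

```python
import xml.etree.ElementTree as ET

NCML = "http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"
ET.register_namespace("", NCML)

def ncml_join_existing(file_names, dim="time"):
    """Build an NcML joinExisting aggregation that presents a series
    of per-day model output files as one virtual dataset, the way the
    local broker does before handing them to THREDDS."""
    root = ET.Element(f"{{{NCML}}}netcdf")
    agg = ET.SubElement(root, f"{{{NCML}}}aggregation",
                        dimName=dim, type="joinExisting")
    for name in file_names:
        ET.SubElement(agg, f"{{{NCML}}}netcdf", location=name)
    return ET.tostring(root, encoding="unicode")

doc = ncml_join_existing(["run_20150101.nc", "run_20150102.nc"])
```

The modeler's files stay exactly as produced; only this lightweight wrapper is served, which is what makes the brokering approach cheap for small groups.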
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
1989-03-01
[Scanned-report extraction is garbled here; recoverable content follows.] Research on this project had two distinct but overlapping phases: consolidation of work done during the previous two years, and development of new work. Diagnosis when VMES notices that a diagnostic short-cut from the dual device model is present is discussed in the "Dual Device Model" section.
Data Mining of Extremely Large Ad-Hoc Data Sets to Produce Reverse Web-Link Graphs
2017-03-01
[Garbled excerpt; recoverable content follows.] Data mining can be a valuable tool, particularly in the acquisition of military intelligence. As the second study within a larger Naval…, this research employs MapReduce (MR) for sorting and categorizing output values from the open web-crawl data set Common Crawl. From these studies, we also learned that compute-optimized instances should be chosen for serialized/compressed input data in most of the MR cases.
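The reverse web-link graph computation this record describes is the classic MapReduce example, and can be sketched in memory: the map phase inverts each (page, outlink) edge, and the reduce phase groups sources per target. The toy crawl data is invented for illustration; a real run shards both phases across a cluster.

```python
# Sketch of the MapReduce reverse web-link graph job, run in memory.
from collections import defaultdict

def map_phase(crawl):
    """Emit (target, source) for every link, inverting the direction."""
    for page, outlinks in crawl.items():
        for target in outlinks:
            yield target, page

def reduce_phase(pairs):
    """Group all sources that link to each target."""
    reverse = defaultdict(list)
    for target, source in pairs:
        reverse[target].append(source)
    return {t: sorted(s) for t, s in reverse.items()}

crawl = {"a.html": ["b.html", "c.html"], "b.html": ["c.html"]}
reverse_graph = reduce_phase(map_phase(crawl))
# reverse_graph["c.html"] lists every page that links to c.html
```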
Realizing the promise of Web 2.0: engaging community intelligence.
Hesse, Bradford W; O'Connell, Mary; Augustson, Erik M; Chou, Wen-Ying Sylvia; Shaikh, Abdul R; Rutten, Lila J Finney
2011-01-01
Discussions of Health 2.0, a term first coined in 2005, were guided by three main tenets: (a) health was to involve more participation, because an evolution in the web encouraged more direct consumer engagement in health care; (b) data was to become the new "Intel Inside" for systems supporting the vital decisions in health; and (c) a sense of collective intelligence from the network would supplement traditional sources of knowledge in health decision making. Interest in understanding the implications of a new paradigm for patient engagement in health and health care was kindled by findings from surveys such as the National Cancer Institute's Health Information National Trends Survey, showing that patients were quick to look online for information to help them cope with disease. This article considers how these three facets of Health 2.0 (participation, data, and collective intelligence) can be harnessed to improve the health of the nation in line with Healthy People 2020 goals. The authors begin with an examination of evidence from behavioral science to understand how Web 2.0 participative technologies may influence patient processes and outcomes, for better or worse, in an era of changing communication technologies. The article then focuses specifically on the clinical implications of Health 2.0 and offers recommendations to ensure that changes in the communication environment do not detract from national (e.g., Healthy People 2020) health goals. Changes in the clinical environment, as catalyzed by the Health Information Technology for Economic and Clinical Health Act to take advantage of Health 2.0 principles in evidence-based ways, are also considered.
NASA Astrophysics Data System (ADS)
De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel
2017-09-01
Environment- and agriculture-related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence in developing models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an Intelligent Farming system, (1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data considered to affect the health of Cavendish bananas, (2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images, and plant status data to cloud storage, and (3) a custom-built web application is used to load and display results of physico-chemical analysis of soil, analysis of data models, and geographic locations of plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation on data-type usage is required to avoid inconsistencies in parameter passing. From a technical standpoint, there is a need to include error-handling mechanisms, especially for delays in data transmission, as well as a generalized method for parsing through multidimensional arrays of data. Strategies are presented in the paper.
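One error-handling strategy for the transmission delays mentioned above can be sketched as retry with exponential backoff. The `send` callable here stands in for the real WSN/smartphone-to-cloud upload, which is an assumption of this sketch, not the project's actual API.

```python
# Sketch: retry a sensor-data upload with exponential backoff on failure.
# `send` is a placeholder for the real transmission call.
import time

def transmit_with_retry(send, payload, retries=3, base_delay=0.01):
    for attempt in range(retries):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == retries - 1:
                raise                      # exhausted: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # back off: d, 2d, 4d...

# Flaky stub for illustration: fails twice, then succeeds.
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("link down")
    return {"status": "ok", "echo": payload}

result = transmit_with_retry(flaky_send, {"soil_ph": 6.1})
```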
NASA Astrophysics Data System (ADS)
Tisdale, M.
2017-12-01
NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility, and interoperability to meet the diversifying user requirements of government, private, public, and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Map Services (WMS), and OGC Web Coverage Services (WCS) while leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams at ASDC are utilizing these services through the development of applications using the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript. These services provide greater exposure of ASDC data holdings to the GIS community and allow broader sharing and distribution to various end users. They provide interactive visualization and improved geospatial analytical tools for mission-critical understanding in the areas of the Earth's radiation budget, clouds, aerosols, and tropospheric chemistry. The presentation will cover how the ASDC is developing geospatial web services and applications to improve data discoverability, accessibility, and interoperability.
Web-based Weather Expert System (WES) for Space Shuttle Launch
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.; Rajkumar, T.
2003-01-01
The Web-based Weather Expert System (WES) is a critical module of the Virtual Test Bed development, supporting 'go/no go' decisions for Space Shuttle operations in NASA's Intelligent Launch and Range Operations program. The weather rules characterize certain aspects of the environment related to the launch or landing site, the time of day or night, the pad or runway conditions, the mission duration, the runway equipment, and the landing type. The expert system rules are derived from weather contingency rules developed over years by NASA. Backward chaining, a goal-directed inference method, is adopted: a particular consequence or goal clause is evaluated first and then chained backward through the rules. Once a rule is satisfied, it is fired and the decision is expressed. The expert system continuously verifies the rules against the past hour's weather conditions and makes decisions accordingly. The normal procedure of operations requires a formal pre-launch weather briefing held on Launch minus 1 day, a specific weather briefing covering all areas of Space Shuttle launch operations. In this paper, the Web-based Weather Expert System of the Intelligent Launch and Range Operations program is presented.
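The backward-chaining inference described above can be sketched in a few lines: a goal holds if it is a known fact, or if every premise of a rule concluding it can itself be proven. The rules and thresholds below are invented for illustration; the real WES rules come from NASA's weather contingency criteria.

```python
# Toy backward-chaining sketch of go/no-go weather rules.
# Rule format: goal -> list of premises that must all hold.
RULES = {
    "go_for_launch": ["winds_ok", "no_lightning"],
    "winds_ok": ["peak_wind_below_limit"],
    "no_lightning": ["no_strikes_within_10nm"],
}

def prove(goal, facts, rules=RULES):
    """Backward chain from the goal toward known facts."""
    if goal in facts:
        return True                     # goal is an observed fact
    premises = rules.get(goal)
    if premises is None:
        return False                    # no fact, no rule: cannot prove
    return all(prove(p, facts, rules) for p in premises)

facts = {"peak_wind_below_limit", "no_strikes_within_10nm"}
decision = prove("go_for_launch", facts)
```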
Advances on Sensor Web for Internet of Things
NASA Astrophysics Data System (ADS)
Liang, S.; Bermudez, L. E.; Huang, C.; Jazayeri, M.; Khalafbeigi, T.
2013-12-01
'In much the same way that HTML and HTTP enabled WWW, the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE), envisioned in 2001 [1] will allow sensor webs to become a reality.'. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not a simple task. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. SWE standardizes web service interfaces, sensor descriptions and data encodings as building blocks for a Sensor Web. SWE standards are now mature specifications (version 2.0) with approved OGC compliance test suites and tens of independent implementations. Many earth and space science organizations and government agencies are using the SWE standards to publish and share their sensors and observations. While SWE has been demonstrated very effective for scientific sensors, its complexity and the computational overhead may not be suitable for resource-constrained tiny sensors. In June 2012, a new OGC Standards Working Group (SWG) was formed called the Sensor Web Interface for Internet of Things (SWE-IoT) SWG. This SWG focuses on developing one or more OGC standards for resource-constrained sensors and actuators (e.g., Internet of Things devices) while leveraging the existing OGC SWE standards. In the near future, billions to trillions of small sensors and actuators will be embedded in real- world objects and connected to the Internet facilitating a concept called the Internet of Things (IoT). By populating our environment with real-world sensor-based devices, the IoT is opening the door to exciting possibilities for a variety of application domains, such as environmental monitoring, transportation and logistics, urban informatics, smart cities, as well as personal and social applications. 
The current SWE-IoT development aims at modeling the IoT components and defining a standard web service that makes the observations captured by IoT devices easily accessible and allows users to task the actuators on the IoT devices. The SWE-IoT model links things with sensors and reuses the OGC Observations and Measurements (O&M) model to link sensors with features of interest and observed properties. Unlike most SWE standards, SWE-IoT defines a RESTful web interface for users to perform CRUD (i.e., create, read, update, and delete) functions on resources, including Things, Sensors, Actuators, Observations, Tasks, etc. Inspired by the OASIS Open Data Protocol (OData), the SWE-IoT web service provides multi-faceted query, meaning that users can query different entity collections and link from one entity to other related entities. This presentation will introduce the latest development of the OGC SWE-IoT standards. Potential applications and implications in Earth and space science will also be discussed. [1] Mike Botts, Sensor Web Enablement White Paper, Open GIS Consortium, Inc., 2002
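The OData-inspired multi-faceted query can be sketched as URL construction: start at an entity collection, address one entity, then navigate to a related collection with query options. The base URL and entity names below follow the SWE-IoT/SensorThings style but are assumptions of this sketch, not a normative endpoint.

```python
# Sketch: build OData-style URLs for navigating between related entities.
from urllib.parse import urlencode

BASE = "http://example.org/v1.0"  # placeholder service root

def entity_url(collection, entity_id=None, link=None, params=None):
    url = f"{BASE}/{collection}"
    if entity_id is not None:
        url += f"({entity_id})"        # address one entity in the collection
    if link:
        url += f"/{link}"              # navigate to a related entity set
    if params:
        url += "?" + urlencode(params)  # $-prefixed query options
    return url

# Observations linked to Thing 42, newest first, ten at a time.
q = entity_url("Things", 42, "Observations",
               {"$orderby": "phenomenonTime desc", "$top": "10"})
```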
A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows
NASA Astrophysics Data System (ADS)
Babin, B. L.; Hu, L.
2008-12-01
Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) environmental monitoring systems provide a unified coastal ocean observing system. The two systems are mirrored to maintain autonomy while offering an integrated data-sharing environment. Both collect data via Campbell Scientific data loggers, store the data in Microsoft SQL servers, and disseminate the data in real time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The use of Microsoft Windows technologies presented many challenges to these observing systems as open-source tools for interoperability grew, since the current open-source tools often require the installation of additional software. To make data available in common standards-based formats, "home grown" software has been developed; one example is software that generates XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test, and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook to implement OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
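The "home grown" XML-generation step can be sketched as serializing one station observation for transmission. The element and attribute names here are hypothetical placeholders, not NDBC's actual schema, and the station ID and readings are invented.

```python
# Sketch: serialize one station observation to XML for transmission.
# Element names are hypothetical, not NDBC's real schema.
import xml.etree.ElementTree as ET

def observation_xml(station_id, timestamp, readings):
    root = ET.Element("observation", station=station_id, time=timestamp)
    for name, value in readings.items():
        # one <measurement> element per sensor reading
        ET.SubElement(root, "measurement", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = observation_xml("LUMCON-01", "2008-12-01T00:00:00Z",
                          {"water_temp_c": 14.2, "salinity_psu": 28.7})
```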
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-27
...On May 4, 2009, HUD posted its NSP2 NOFA at http://www.hud.gov/nsp and announced the availability of the NOFA on May 7, 2009 (74 FR 21377). The NSP2 NOFA announced the availability of approximately $1.93 billion available in competitive grants authorized under the American Recovery and Reinvestment Act of 2009 (Pub. L. 111-5, approved February 17, 2009). HUD corrected the NSP2 NOFA by Notices posted on the HUD Web site on June 11, 2009 and November 9, 2009, and announced by Federal Register publications published on June 17, 2009 (74 FR 28715) and November 16, 2009 (74 FR 58973), respectively. Today's Federal Register publication announces that HUD has posted a notice making further corrections to the NSP2 NOFA. Specifically, the Notice corrects the NSP2 NOFA to permit HUD to specify the deadline date for submission of consortium funding agreements in the transmittal letter for the NSP2 grant agreement, which allows the submission deadline to occur after obligation of grant funds. This notice only affects applications for funding that have already been submitted to HUD by consortium applicants. HUD notes that the deadline for applications was July 17, 2009, and, as a result, will not accept new applications for funding. The notice correcting the NSP2 NOFA is available on the HUD Web site at http://www.hud.gov/recovery.
NASA Technical Reports Server (NTRS)
Delin, K. A.; Harvey, R. P.; Chabot, N. A.; Jackson, S. P.; Adams, Mike; Johnson, D. W.; Britton, J. T.
2003-01-01
The most rigorous tests of the ability to detect extant life will occur where biotic activity is limited by severe environmental conditions. Cryogenic environments are among the most severe: the energy and nutrients needed for biological activity are in short supply, while the climate itself is actively destructive to biological mechanisms. In such settings, biological activity is often limited to brief flourishes, occurring only when and where conditions are at their most favorable. The closer that typical regional conditions approach conditions that are actively hostile, the more widely distributed biological blooms will be in both time and space. On a spatial dimension of a few meters or a time dimension of a few days, biological activity becomes much more difficult to detect. One way to overcome this difficulty is to establish a Sensor Web that can monitor microclimates over appropriate scales of time and distance, allowing a continuous virtual presence for instant recognition of favorable conditions. A more sophisticated Sensor Web, incorporating metabolic sensors, can effectively meet the challenge of being at "the right place at the right time". This is of particular value in planetary surface missions, where limited mobility and mission timelines require extremely efficient sample and data acquisition. Sensor Webs can be an effective way to fill the gap between broad-scale orbital data collection and fine-scale surface lander science. We are in the process of developing an intelligent, distributed, and autonomous Sensor Web that will allow us to monitor microclimate under severe cryogenic conditions, approaching those extant on the surface of Mars. Ultimately this Sensor Web will include the ability to detect and/or establish limits on extant microbiological activity through incorporation of novel metabolic gas sensors.
Here we report the results of our first deployment of a Sensor Web prototype in a previously unexplored high altitude East Antarctic Plateau "micro-oasis" at the MacAlpine Hills, Law Glacier, Antarctica.
Patel, Ashokkumar A; Gilbertson, John R; Showe, Louise C; London, Jack W; Ross, Eric; Ochs, Michael F; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V; Dhir, Rajiv; Beck, J Robert; Liebman, Michael; Garcia, Fernando U; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B; Becich, Michael J
2007-06-08
The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance that was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public-access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic, and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the State. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners - all working together to accomplish a common mission.
The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that has led to the development of a centralized data warehouse that has met each of the institutions' IRB/HIPAA standards. Currently, this "virtual biorepository" has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is either done manually over the internet or semi-automated batch modes through mapping of local data elements with PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has also developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium's web site. The technological achievements and the statewide informatics infrastructure that have been established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer. 
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy.
Technology and Web-Based Support
ERIC Educational Resources Information Center
Smith, Carol
2008-01-01
Many types of technology support caregiving: (1) Assistive devices include medicine dispensers, feeding and bathing machines, clothing with polypropylene fibers that stimulate muscles, intelligent ambulatory walkers for those with both vision and mobility impairment, medication reminders, and safety alarms; (2) Telecare devices ranging from…
NASA Astrophysics Data System (ADS)
Clark, P. E.; Rilee, M. L.; Curtis, S. A.; Bailin, S.
2012-03-01
We are developing Frontier, a highly adaptable, stably reconfigurable, web-accessible intelligent decision engine capable of optimizing the design, as well as simulating the operation, of complex systems in response to evolving needs and environments.
ERIC Educational Resources Information Center
Online-Offline, 1998
1998-01-01
Focuses on technology, on advances in such areas as aeronautics, electronics, physics, the space sciences, as well as computers and the attendant progress in medicine, robotics, and artificial intelligence. Describes educational resources for elementary and middle school students, including Web sites, CD-ROMs and software, videotapes, books,…
Warfighter Associate: Decision Aiding and Metrics for Mission Command
2012-01-23
[Briefing-slide excerpt; recoverable content follows.] Collaboration distributions highlight the Pareto Principle: the top 20% of the mission-command staff is heavily involved in collaborations. The team is developing "Command Web", a web service to support thin-client functionality (enabled by Intelligent Presentation Services).
ERIC Educational Resources Information Center
Online-Offline, 1999
1999-01-01
This theme issue on knowledge includes annotated listings of Web sites, CD-ROMs and computer software, videos, books, and additional resources that deal with knowledge and differences between how animals and humans learn. Sidebars discuss animal intelligence, learning proper behavior, and getting news from the Internet. (LRW)
Olsha-Yehiav, Maya; Einbinder, Jonathan S.; Jung, Eunice; Linder, Jeffrey A.; Greim, Julie; Li, Qi; Schnipper, Jeffrey L.; Middleton, Blackford
2006-01-01
Quality Dashboards (QD) is a condition-specific, actionable, web-based application for quality reporting and population management that is integrated into the Electronic Health Record (EHR). Using server-based graphic web controls in a .NET environment to construct Quality Dashboards allows customization of the reporting tool without the need to rely on a commercial business intelligence tool. Quality Dashboards will improve patient care and quality outcomes as clinicians use the reporting tool for population management. PMID:17238671
2012-10-23
[Garbled excerpt; recoverable content follows.] She was principal investigator (PI) for six contracts awarded by the DoD Small Business Innovation Research (SBIR) Program. The architecture was implemented by Quantum Intelligence, Inc. (QI, 2001-2012); its unique contribution is to leverage a peer-to-peer agent network.
Baldziki, Mike; Brown, Jeff; Chan, Hungching; Cheetham, T Craig; Conn, Thomas; Daniel, Gregory W; Hendrickson, Mark; Hilbrich, Lutz; Johnson, Ayanna; Miller, Steven B; Moore, Tom; Motheral, Brenda; Priddy, Sarah A; Raebel, Marsha A; Randhawa, Gurvaneet; Surratt, Penny; Walraven, Cheryl; White, T Jeff; Bruns, Kevin; Carden, Mary Jo; Dragovich, Charlie; Eichelberger, Bernadette; Rosato, Edith; Sega, Todd
2015-01-01
The Biologics Price Competition and Innovation Act, introduced as part of the Affordable Care Act, directed the FDA to create an approval pathway for biologic products shown to be biosimilar or interchangeable with an FDA-approved innovator drug. These biosimilars will not be chemically identical to the reference agent. Investigational studies conducted with biosimilar agents will likely provide limited real-world evidence of their effectiveness and safety. How do we best monitor effectiveness and safety of biosimilar products once approved by the FDA and used more extensively by patients? To determine the feasibility of developing a distributed research network that will use health insurance plan and health delivery system data to detect biosimilar safety and effectiveness signals early and be able to answer important managed care pharmacy questions from both the government and managed care organizations. Twenty-one members of the AMCP Task Force on Biosimilar Collective Intelligence Systems met November 12, 2013, to discuss issues involved in designing this consortium and to explore next steps. The task force concluded that a managed care biosimilars research consortium would be of significant value. Task force members agreed that it is best to use a distributed research network structurally similar to existing DARTNet, HMO Research Network, and Mini-Sentinel consortia. However, for some surveillance projects that it undertakes, the task force recognizes it may need supplemental data from managed care and other sources (i.e., a "hybrid" structure model). The task force believes that AMCP is well positioned to lead the biosimilar-monitoring effort and that the next step to developing a biosimilar-innovator collective intelligence system is to convene an advisory council to address organizational governance.
NASA Astrophysics Data System (ADS)
Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.
2007-12-01
NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable both to traditional research scientists and to the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures of interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis, and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, and reusing emerging technologies, NASA's Goddard Earth Science (GES) and Land Processes (LP) DAACs have engaged in a web services pilot project. Through these projects, both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems for ingest into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages interdisciplinary research while increasing data use through advanced technologies. This paper will concentrate on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.
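A WMS interaction like those the pilot exposes reduces to a parameterized GetMap request; the sketch below builds one. The endpoint and layer name are placeholders, not actual DAAC services, and the parameter set follows the WMS 1.1.1 convention.

```python
# Sketch: construct an OGC WMS 1.1.1 GetMap request URL.
# Endpoint and layer name are hypothetical.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width=512, height=256):
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("http://example.gov/wms", "aerosol_optical_depth",
                 (-180, -90, 180, 90))
```

A client such as DataFed would issue this URL with an HTTP GET and receive a rendered map image.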
NASA Technical Reports Server (NTRS)
Panangadan, Anand; Monacos, Steve; Burleigh, Scott; Joswig, Joseph; James, Mark; Chow, Edward
2012-01-01
In this paper, we describe the architecture of both the PATS and SAP systems and how these two systems interoperate with each other forming a unified capability for deploying intelligence in hostile environments with the objective of providing actionable situational awareness of individuals. The SAP system works in concert with the UICDS information sharing middleware to provide data fusion from multiple sources. UICDS can then publish the sensor data using the OGC's Web Mapping Service, Web Feature Service, and Sensor Observation Service standards. The system described in the paper is able to integrate a spatially distributed sensor system, operating without the benefit of the Web infrastructure, with a remote monitoring and control system that is equipped to take advantage of SWE.
Framework for Supporting Web-Based Collaborative Applications
NASA Astrophysics Data System (ADS)
Dai, Wei
The article proposes an intelligent framework for supporting Web-based applications. The framework focuses on innovative use of existing resources and technologies in the form of services, and leverages the theoretical foundations of services science and research from services computing. Its main focus is to deliver benefits to users in various roles, such as service requesters, service providers, and business owners, to maximize their productivity when engaging with each other via the Web. The article opens with research motivations and questions, analyses the existing state of research in the field, and describes the approach taken in implementing the proposed framework. Finally, an e-health application is discussed to evaluate the effectiveness of the framework, in which participants such as general practitioners (GPs), patients, and health-care workers collaborate via the Web.
Flood AI: An Intelligent System for Discovery and Communication of Disaster Knowledge
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.
2017-12-01
Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and the public. Improving resilience, to better prepare for, plan for, recover from, and adapt to disasters, is critical to reducing the impacts of extreme events. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This project presents Flood AI, an intelligent system for flooding that improves societal preparedness by providing a knowledge engine using voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and concepts to connect user input to relevant knowledge discovery channels on flooding, through a data acquisition and processing framework utilizing environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening knowledge discovery for flooding to many unique use cases.
Using food-web theory to conserve ecosystems
McDonald-Madden, E.; Sabbadin, R.; Game, E. T.; Baxter, P. W. J.; Chadès, I.; Possingham, H. P.
2016-01-01
Food-web theory can be a powerful guide to the management of complex ecosystems. However, we show that indices of species importance common in food-web and network theory can be a poor guide to ecosystem management, resulting in significantly more extinctions than necessary. We use Bayesian Networks and Constrained Combinatorial Optimization to find optimal management strategies for a wide range of real and hypothetical food webs. This Artificial Intelligence approach provides the ability to test the performance of any index for prioritizing species management in a network. While no single network theory index provides an appropriate guide to management for all food webs, a modified version of the Google PageRank algorithm reliably minimizes the chance and severity of negative outcomes. Our analysis shows that by prioritizing ecosystem management based on the network-wide impact of species protection rather than species loss, we can substantially improve conservation outcomes. PMID:26776253
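The study above prioritizes species management with a PageRank-style index over the food-web graph. As a hedged illustration of the underlying mechanic only (the toy food web, edge direction, and species names below are invented, not taken from the study), plain power-iteration PageRank can be sketched as:

```python
# Power-iteration PageRank over a small, hypothetical food web.
def pagerank(graph, damping=0.85, iterations=100):
    """graph: dict mapping node -> list of nodes it links to."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new_rank[t] += damping * rank[node] / len(nodes)
        rank = new_rank
    return rank

# Toy web: arrows point from consumer to resource, so rank flows
# toward species that many others depend on.
food_web = {
    "eagle": ["snake", "rabbit"],
    "snake": ["rabbit", "mouse"],
    "rabbit": ["grass"],
    "mouse": ["grass"],
    "grass": [],
}
ranks = pagerank(food_web)
print(sorted(ranks, key=ranks.get, reverse=True))
```

With this orientation the basal resource accumulates the most rank, which matches the intuition of prioritizing species whose loss would propagate most widely; the paper's modified index is more elaborate than this sketch.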
P-MartCancer: Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.
P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses, driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene, and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html); the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be used directly from an R package available on GitHub (https://github.com/pmartR).
Integrated web system of geospatial data services for climate research
NASA Astrophysics Data System (ADS)
Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander
2016-04-01
Georeferenced datasets are currently actively used for modeling, interpretation, and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size (up to tens of terabytes for a single dataset), dedicated software supporting studies of climate and environmental change is required. An approach to integrated analysis of georeferenced climatological datasets, based on a combination of web and GIS technologies within the spatial data infrastructure paradigm, is presented. Following this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and incorporates modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries built on the GeoExt library, the ExtJS framework, and OpenLayers. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.
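Since the system above is built on OGC standards, a concrete flavor of such an interface is a WMS 1.3.0 GetMap request. The sketch below composes one with the Python standard library only; the endpoint and layer name are placeholders, not the project's actual services:

```python
# Compose an OGC WMS 1.3.0 GetMap request URL (no network call is made).
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(800, 600), crs="EPSG:4326"):
    """bbox is (min_y, min_x, max_y, max_x) for EPSG:4326 in WMS 1.3.0."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and layer, for illustration only.
url = wms_getmap_url("http://example.org/wms", "air_temperature",
                     (50.0, 60.0, 60.0, 90.0))
print(url)
```

A client such as OpenLayers issues requests of exactly this shape under the hood; building them by hand mainly helps when debugging a server.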
A SOA-based approach to geographical data sharing
NASA Astrophysics Data System (ADS)
Li, Zonghua; Peng, Mingjun; Fan, Wei
2009-10-01
In the last few years, large volumes of spatial data have become available in different government departments in China, but these data are mainly used within those departments. With the launch of the e-government project, spatial data sharing has become increasingly necessary. The Web is now used not only for document searching but also for the provision and use of services, known as Web services, which are published in a directory and may be automatically discovered by software agents. In the spatial domain in particular, the possibility of accessing large spatial datasets via Web services has motivated research into the new field of Spatial Data Infrastructure (SDI) implemented using service-oriented architecture. In this paper a Service-Oriented Architecture (SOA) based Geographical Information System (GIS) is proposed, and a prototype system based on Open Geospatial Consortium (OGC) standards is deployed in Wuhan, China, so that all authorized departments can access the spatial data within the government intranet and the data can be easily integrated into various kinds of applications.
"Concept to Classroom": Web-based Workshops for Teachers.
ERIC Educational Resources Information Center
Donlevy, James G.; Donlevy, Tia Rice
2000-01-01
Describes "Concept to Classroom", a series of free, online workshops developed by channel Thirteen/WNET New York and Disney Learning Partnerships to help teachers explore issues in education including multiple intelligences, constructivism, academic standards, cooperative and collaborative learning, assessment, curriculum redesign,…
Imagining the Digital Library in a Commercialized Internet.
ERIC Educational Resources Information Center
Heckart, Ronald J.
1999-01-01
Discusses digital library planning in light of Internet commerce and technological innovation in marketing and customer relations that are transforming user expectations about Web sites that offer products and services. Topics include user self-sufficiency; personalized service; artificial intelligence; collaborative filtering; and electronic…
Mobile internet technologies and their application to intelligent transportation systems
DOT National Transportation Integrated Search
2003-01-01
The worlds of mobile communication and the Internet are rapidly converging. This new domain, which is being touted as the "Wireless Web" or "Mobile Internet", is in its infancy and will require a number of complex technologies to mature and converge ...
2008-06-01
We introduce World Wide Web Consortium (W3C) compliant services into the planning and battle management processes where a computer can be more...which the software services comprising the command, control, and battle management (C2BM) element of the BMD system need to operate within hard real
Payne, Philip R.O.; Greaves, Andrew W.; Kipps, Thomas J.
2003-01-01
The Chronic Lymphocytic Leukemia (CLL) Research Consortium (CRC) consists of 9 geographically distributed sites conducting a program of research including both basic science and clinical components. To enable the CRC’s clinical research efforts, a system providing for real-time collaboration was required. CTMS provides such functionality, and demonstrates that the use of novel data modeling, web-application platforms, and management strategies provides for the deployment of an extensible, cost effective solution in such an environment. PMID:14728471
Xavier University CERE Program [Consortium for Environmental Risk Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connor, Sally
1999-09-01
The workshop provided training for 20 environmental professionals and educators. For two days, instruction focused on using the Internet as a communication tool: instructors introduced participants to email, designing and building Web pages, and conducting research using search engines. For three days, the focus was on how Geographical Information Systems (GIS) can be used in the classroom and the workplace; participants were introduced to GIS on the Internet and the use of ArcView software.
Data dictionary and formatting standard for dissemination of geotechnical data
Benoit, J.; Bobbitt, J.I.; Ponti, D.J.; Shimel, S.A.
2004-01-01
A pilot system for archiving and web dissemination of geotechnical data collected and stored by various agencies is currently under development. Part of the scope of this project, sponsored by the Consortium of Organizations for Strong-Motion Observation Systems (COSMOS) and by the Pacific Earthquake Engineering Research Center (PEER) Lifelines Program, is the development of a data dictionary and formatting standard. This paper presents the data model along with the basic structure of the data dictionary tables for this pilot system.
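The abstract describes a data dictionary and formatting standard but does not reproduce the actual tables. The following is a hypothetical sketch, using SQLite, of how a data-dictionary table can sit alongside a geotechnical data table; all table and column names are invented for illustration:

```python
# A toy data dictionary describing the columns of a borehole table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_dictionary (
    table_name  TEXT NOT NULL,
    column_name TEXT NOT NULL,
    data_type   TEXT NOT NULL,
    units       TEXT,
    description TEXT,
    PRIMARY KEY (table_name, column_name)
);
CREATE TABLE borehole (
    borehole_id INTEGER PRIMARY KEY,
    latitude    REAL,
    longitude   REAL,
    total_depth REAL
);
""")
conn.execute(
    "INSERT INTO data_dictionary VALUES (?, ?, ?, ?, ?)",
    ("borehole", "total_depth", "REAL", "m", "Total drilled depth"),
)
# An archiving system can now answer "what does this column mean?"
row = conn.execute(
    "SELECT units FROM data_dictionary WHERE column_name = 'total_depth'"
).fetchone()
print(row[0])
```

The value of this pattern is that agencies exchanging files can validate each column's type and units against one shared dictionary rather than ad hoc documentation.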
A novel paradigm for telemedicine using the personal bio-monitor.
Bhatikar, Sanjay R; Mahajan, Roop L; DeGroff, Curt
2002-01-01
The foray of solid-state technology into the medical field has yielded an arsenal of sophisticated healthcare tools. Personal, portable computing power coupled with the information superhighway opens up the possibility of sophisticated healthcare management with just as great an impact on the medical field. The full synergistic potential of three interwoven technologies: (1) compact electronics, (2) the World Wide Web, and (3) Artificial Intelligence, is yet to be realized. The system presented in this paper integrates these technologies synergistically, providing a new paradigm for healthcare. Our idea is to deploy internet-enabled, intelligent, handheld personal computers for medical diagnosis. The salient features of the 'Personal Bio-Monitor' we envisage are: (1) use of peripheral signals of the body, which may be acquired non-invasively and with ease, for diagnosis of medical conditions; (2) an Artificial Neural Network (ANN) based approach to diagnosis; (3) configuration of the diagnostic device as a handheld for personal use; (4) internet connectivity, following the emerging Bluetooth protocol, for prompt conveyance of information to a patient's health care provider via the World Wide Web. The proposal is substantiated with an intelligent handheld device developed by the investigators for pediatric cardiac auscultation. This device accurately diagnosed cardiac abnormalities in pediatric patients, using an artificial neural network to process heart sounds acquired by a low-frequency microphone, and transmitted its diagnosis to a desktop PC via infrared. The personal bio-monitor presented here has the potential to streamline healthcare by optimizing two valuable resources: physicians' time and sophisticated equipment time. We show with our prototype that the elements of such a system are in place.
Our novel contribution is the synergistic integration of compact electronics' technology, artificial neural network methodology and the wireless web resulting in a revolutionary new paradigm for healthcare management.
An economic and financial exploratory
NASA Astrophysics Data System (ADS)
Cincotti, S.; Sornette, D.; Treleaven, P.; Battiston, S.; Caldarelli, G.; Hommes, C.; Kirman, A.
2012-11-01
This paper describes the vision of a European Exploratory for economics and finance, pursued by an interdisciplinary consortium of economists, natural scientists, computer scientists and engineers who will combine their expertise to address the enormous challenges of the 21st century. This academic public facility is intended for economic modelling, investigating all aspects of risk and stability, improving financial technology, and evaluating proposed regulatory and taxation changes. The European Exploratory for economics and finance will be constituted as a network of infrastructure, observatories, data repositories, services and facilities, and will foster the creation of a new cross-disciplinary research community of social scientists, complexity scientists and computing (ICT) scientists collaborating on major issues in economics and finance. It is also conceived as a cradle for training and for collaboration with the private sector, to spur spin-offs and job creation in Europe in the finance and economic sectors. The Exploratory will allow social scientists and regulators, as well as policy makers and the private sector, to conduct realistic investigations with real economic, financial and social data. The Exploratory will (i) continuously monitor and evaluate the status of national economies in their various components, (ii) use, extend and develop a large variety of methods, including data mining, process mining, computational and artificial intelligence and other computer and complexity-science techniques coupled with economic theory and econometrics, and (iii) provide the framework and infrastructure to perform what-if analyses, scenario evaluations and computational, laboratory, field and web experiments to inform decision makers and help develop innovative policy, market and regulation designs.
Designing an information search interface for younger and older adults.
Pak, Richard; Price, Margaux M
2008-08-01
The present study examined Web-based information retrieval as a function of age for two information organization schemes: hierarchical organization and one organized around tags or keywords. Older adults' performance in information retrieval tasks has traditionally been lower compared with younger adults'. The current study examined the degree to which information organization moderated age-related performance differences on an information retrieval task. The theory of fluid and crystallized intelligence may provide insight into different kinds of information architectures that may reduce age-related differences in computer-based information retrieval performance. Fifty younger (18-23 years of age) and 50 older (55-76 years of age) participants browsed a Web site for answers to specific questions. Half of the participants browsed the hierarchically organized system (taxonomy), which maintained a one-to-one relationship between menu link and page, whereas the other half browsed the tag-based interface, with a many-to-one relationship between menu and page. This difference was expected to interact with age-related differences in fluid and crystallized intelligence. Age-related differences in information retrieval performance persisted; however, a tag-based retrieval interface reduced age-related differences, as compared with a taxonomical interface. Cognitive aging theory can lead to interface interventions that reduce age-related differences in performance with technology. In an information retrieval paradigm, older adults may be able to leverage their increased crystallized intelligence to offset fluid intelligence declines in a computer-based information search task. More research is necessary, but the results suggest that information retrieval interfaces organized around keywords may reduce age-related differences in performance.
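The one-to-one versus many-to-one distinction studied above can be sketched with two toy lookup structures; the page names and keywords are illustrative, not the study's materials:

```python
# Taxonomy: one-to-one mapping from a single menu path to each page.
taxonomy = {
    "Health/Conditions/Flu": "flu.html",
    "Health/Treatments/Vaccines": "vaccines.html",
}

# Tag index: many-to-one, several keywords can reach the same page.
tag_index = {}

def tag(page, *keywords):
    for kw in keywords:
        tag_index.setdefault(kw, set()).add(page)

tag("flu.html", "flu", "influenza", "fever")
tag("vaccines.html", "vaccine", "flu", "shot")

# A keyword search reaches a page through any associated term...
print(tag_index["influenza"])
# ...whereas the taxonomy requires recalling the one correct path.
print(taxonomy["Health/Conditions/Flu"])
```

The study's hypothesis maps onto this structure: recognizing any of several familiar keywords draws on crystallized knowledge, while navigating a single deep path leans more heavily on fluid ability.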
On transform coding tools under development for VP10
NASA Astrophysics Data System (ADS)
Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao
2016-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1; as a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
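Transform coding of a prediction residue, as discussed above, can be illustrated with a type-II DCT written out from its textbook definition; this is a generic energy-compaction sketch, not VP10's actual transform set or normalization:

```python
# Unnormalized type-II DCT, from the definition:
#   X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)
import math

def dct2(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

# A smooth residue compacts its energy into low-frequency coefficients,
# which is what makes the subsequent quantization/entropy coding cheap.
residue = [4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7]
coeffs = dct2(residue)
energy_dc = coeffs[0] ** 2
energy_total = sum(c * c for c in coeffs)
print(energy_dc / energy_total)  # close to 1: most energy in the DC term
```

The flexibility the paper describes amounts to letting the codec pick among several such basis functions per block, so residues with other structures (e.g., ramps toward a predicted edge) also compact well.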
Linking the EarthScope Data Virtual Catalog to the GEON Portal
NASA Astrophysics Data System (ADS)
Lin, K.; Memon, A.; Baru, C.
2008-12-01
The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities that allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we describe how the EarthScope Data Search Web Service can be integrated into the GEON search application in the GEON Portal (see http://portal.geongrid.org). A search request issued at the GEON Portal will then also search the EarthScope virtual catalog, providing users seamless access to data in GEON as well as EarthScope via a common user interface.
National Water Model: Providing the Nation with Actionable Water Intelligence
NASA Astrophysics Data System (ADS)
Aggett, G. R.; Bates, B.
2017-12-01
The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, this flood of information offers enormous benefits in the form of water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short and medium range streamflow forecasts, and many other ancillary datasets. The sheer amount of NWM data produced yields a dataset too large to allow for direct human comprehension. As such, it is necessary to undergo model data post-processing, filtering, and data ingestion by visualization web apps that make use of cartographic techniques to bring attention to the areas of highest urgency. This poster illustrates NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national actionable water intelligence.
Enhancing Access to Drought Information Using the CUAHSI Hydrologic Information System
NASA Astrophysics Data System (ADS)
Schreuders, K. A.; Tarboton, D. G.; Horsburgh, J. S.; Sen Gupta, A.; Reeder, S.
2011-12-01
The National Drought Information System (NIDIS) Upper Colorado River Basin pilot study is investigating and establishing capabilities for better dissemination of drought information for early warning and management. As part of this study we are using and extending functionality from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) to provide better access to drought-related data in the Upper Colorado River Basin. The CUAHSI HIS is a federated system for sharing hydrologic data. It is comprised of multiple data servers, referred to as HydroServers, that publish data in a standard XML format called Water Markup Language (WaterML), using web services referred to as WaterOneFlow web services. HydroServers can also publish geospatial data using Open Geospatial Consortium (OGC) web map, feature and coverage services and are capable of hosting web and map applications that combine geospatial datasets with observational data served via web services. HIS also includes a centralized metadata catalog that indexes data from registered HydroServers and a data access client referred to as HydroDesktop. For NIDIS, we have established a HydroServer to publish drought index values as well as the input data used in drought index calculations. Primary input data required for drought index calculation include streamflow, precipitation, reservoir storages, snow water equivalent, and soil moisture. We have developed procedures to redistribute the input data to the time and space scales chosen for drought index calculation, namely half monthly time intervals for HUC 10 subwatersheds. 
The spatial redistribution approaches used for each input parameter depend on the spatial linkages of that parameter: the redistribution procedure for streamflow depends on the upstream/downstream connectivity of the stream network, while the precipitation redistribution procedure depends on elevation to account for orographic effects. A set of drought indices is then calculated from the redistributed data. We have created automated data and metadata harvesters that periodically scan and harvest new data from each of the input databases and calculate extensions to the resulting derived datasets, ensuring that the data available on the drought server are kept up to date. This paper describes the system, showing how it facilitates the integration of data from multiple sources to inform the planning and management of water resources during drought. The system may be accessed at http://drought.usu.edu.
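The half-monthly temporal redistribution described above can be sketched as a simple date-binning step; the dates and streamflow values below are made up for illustration:

```python
# Bin daily observations into the half-monthly intervals used for
# drought index calculation (days 1-15 vs. 16-end of month).
from datetime import date
from collections import defaultdict

def half_month(d):
    """Map a date to its half-monthly interval key."""
    return (d.year, d.month, 1 if d.day <= 15 else 2)

daily_flow = {  # hypothetical daily streamflow, m^3/s
    date(2011, 4, 2): 10.0,
    date(2011, 4, 14): 12.0,
    date(2011, 4, 20): 30.0,
    date(2011, 4, 28): 26.0,
}

totals, counts = defaultdict(float), defaultdict(int)
for d, value in daily_flow.items():
    key = half_month(d)
    totals[key] += value
    counts[key] += 1

means = {k: totals[k] / counts[k] for k in totals}
print(means[(2011, 4, 1)], means[(2011, 4, 2)])  # 11.0 28.0
```

An incremental harvester, as in the system above, would rerun only the bins touched by newly arrived observations rather than recomputing the full record.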
Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández
2012-01-01
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643
1989-03-01
obvious categories: axioms F1, P2 assert the existence of 'points'... of a model of the basic theory, it is either... side of a rational point: start[a... of its role function; e.g., E1 is the f1 direct-component of E0. The type End never appears as a direct component of another type; nor does any type... observed event must be a component-of an End event; call these End events N1, N2, ... Nn. Let there be a minimum covering model for F1, ... r. in which
Multidimensional Learner Model In Intelligent Learning System
NASA Astrophysics Data System (ADS)
Deliyska, B.; Rozeva, A.
2009-11-01
The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and adaptability of e-learning in an online, learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, a group of courses, or a specialty. An essential part of the ILS is the learner model database, which contains structured data about the learner's profile and temporal status in the learning process of one or more courses. In the paper the position of the learner model within the ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resulting learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Guidelines for multidimensional (OLAP) analysis of the resulting learner model, aimed at designing dynamic learning strategies, are highlighted.
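The learner data cube mentioned above can be sketched as an OLAP-style aggregation over (learner, course) dimensions; the record fields and values here are hypothetical, not the paper's schema:

```python
# Build cube cells from flat activity records, then roll up a dimension.
from collections import defaultdict

records = [  # hypothetical learner activity records
    {"learner": "ana", "course": "ai101", "score": 80},
    {"learner": "ana", "course": "ai101", "score": 90},
    {"learner": "bo",  "course": "ai101", "score": 70},
    {"learner": "ana", "course": "db201", "score": 60},
]

cube = defaultdict(list)
for r in records:
    cube[(r["learner"], r["course"])].append(r["score"])

# Cell-level measure: mean score per (learner, course).
cell = {k: sum(v) / len(v) for k, v in cube.items()}
print(cell[("ana", "ai101")])  # 85.0

# Roll-up along the course dimension: mean score per learner.
per_learner = defaultdict(list)
for (learner, _), scores in cube.items():
    per_learner[learner].extend(scores)
ana_mean = sum(per_learner["ana"]) / len(per_learner["ana"])
print(ana_mean)
```

A real multidimensional modeling agent would add further dimensions (time, activity type) and materialize the cube in a database, but the cell/roll-up structure is the same.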
DOORS to the semantic web and grid with a PORTAL for biomedical computing.
Taswell, Carl
2008-03-01
The semantic web remains in the early stages of development. It has not yet achieved the goals envisioned by its founders as a pervasive web of distributed knowledge and intelligence. Success will be attained when a dynamic synergism can be created between people and a sufficient number of infrastructure systems and tools for the semantic web, in analogy with those for the original web. The domain name system (DNS), web browsers, and the benefits of publishing web pages motivated many people to register domain names and publish web sites on the original web. An analogous resource label system, semantic search applications, and the benefits of collaborative semantic networks will motivate people to register resource labels and publish resource descriptions on the semantic web. The Domain Ontology Oriented Resource System (DOORS) and Problem Oriented Registry of Tags and Labels (PORTAL) are proposed as infrastructure systems for resource metadata within a paradigm that can serve as a bridge between the original web and the semantic web. The Internet Registry Information Service (IRIS) registers domain names while DNS publishes domain addresses, with mapping of names to addresses, for the original web. Analogously, PORTAL registers resource labels and tags while DOORS publishes resource locations and descriptions, with mapping of labels to locations, for the semantic web. BioPORT is proposed as a prototype PORTAL registry specific to the problem domain of biomedical computing.
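The DNS analogy above can be sketched as two toy registries: one that registers labels (the PORTAL role, analogous to IRIS) and one that maps registered labels to locations and descriptions (the DOORS role, analogous to DNS). All identifiers below are invented for illustration:

```python
# Toy label registry (PORTAL role) and label-to-location resolver (DOORS role).
portal = set()   # registered resource labels
doors = {}       # label -> (location, description)

def register_label(label):
    """PORTAL role: claim a resource label, like registering a domain name."""
    portal.add(label)

def publish(label, location, description):
    """DOORS role: publish a location for a registered label, like a DNS record."""
    if label not in portal:
        raise KeyError(f"label {label!r} is not registered")
    doors[label] = (location, description)

register_label("bioport:heart-model")
publish("bioport:heart-model",
        "http://example.org/models/heart",
        "Hypothetical cardiac model resource")

location, _ = doors["bioport:heart-model"]
print(location)
```

The separation matters for the same reason it does in DNS: registration establishes ownership of a name, while resolution can be replicated and cached independently.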
Empowering Adaptive Lectures through Activation of Intelligent and Web 2.0 Technologies
ERIC Educational Resources Information Center
El-Ghareeb, Haitham; Riad, A.
2011-01-01
Different Learning Paradigms can be presented by different educators as a result of utilizing several types of Information and Communication Technologies in the Learning Process. The three abstract Learning Delivery Models are: "Traditional", "Distance", and "Hybrid Learning". Hybrid Learning attempts to maintain the…
Web Search Services in 1998: Trends and Challenges.
ERIC Educational Resources Information Center
Feldman, Susan
1998-01-01
Charts the trends and challenges that 1998 has brought to popular search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light. Highlights testing strategies used, use of real (not artificial) intelligence, innovations, online market pressures, barriers to use, and tips and recommendations. (AEF)
Bernardo, Theresa M; Malinowski, Robert P
2005-01-01
In this article, advances in the application of medical media to education, clinical care, and research are explored and illustrated with examples, and their future potential is discussed. Impact is framed in terms of the Sloan Consortium's five pillars of quality education: access, student satisfaction, faculty satisfaction, learning effectiveness, and cost effectiveness (Hiltz SR, Zhang Y, Turoff M. Studies of effectiveness of learning networks. In Bourne J, Moore J, ed. Elements of Quality Online Education. Needham, MA: Sloan-Consortium, 2002:15-45). The alternatives for converting analog media (text, photos, graphics, sound, video, animations, radiographs) to digital media and for direct digital capture are covered, as are options for storing, manipulating, retrieving, and sharing digital collections. Diagnostic imaging is given particular attention, clarifying the difference between computerized radiography and digital radiography and explaining the accepted standard (DICOM) and the advantages of Web PACS. Some novel research applications of medical media are presented.
Martin, Tiphaine; Sherman, David J; Durrens, Pascal
2011-01-01
The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of the yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences but also the logical relations between them. The database is divided into a public part, accessible to anyone through the Internet, and a private part where the Consortium members make genome annotations with our Magus annotation system, which is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web site architecture that allows for automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.
A web-based approach for electrocardiogram monitoring in the home.
Magrabi, F; Lovell, N H; Celler, B G
1999-05-01
A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve clinical data from the server and submit new data to it. An intelligent software agent on the server is activated whenever new ECG data are sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report, along with reminders and suggestions for action, is sent to the doctor and patient by email.
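The abstract does not spell out the agent's comparison logic. The sketch below is a minimal, hypothetical version in Python: the feature name (`heart_rate`) and the z-score threshold are assumptions for illustration, not the published method.

```python
from statistics import mean, stdev

def flag_ecg_anomalies(history, new_reading, z_threshold=2.0):
    """Compare a new ECG summary against the patient's history.

    history: list of dicts of summary features; new_reading: dict with the
    same keys. Features and threshold are illustrative assumptions only.
    """
    alerts = []
    for feature, value in new_reading.items():
        past = [h[feature] for h in history if feature in h]
        if len(past) < 2:
            continue  # not enough history to judge this feature
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue
        z = (value - mu) / sigma
        if abs(z) > z_threshold:
            alerts.append((feature, round(z, 2)))
    return alerts

history = [{"heart_rate": 72}, {"heart_rate": 75}, {"heart_rate": 70},
           {"heart_rate": 74}, {"heart_rate": 71}]
print(flag_ecg_anomalies(history, {"heart_rate": 95}))
```

A report generator on the server could then turn any returned alerts into the emailed summary and reminders the abstract describes.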
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1995-12-31
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for the purpose of exchanging ideas, data, and technology. The personal-computer-based system fosters communication and discussion by linking oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers are provided access to the GO-TECH World Wide Web home page via modem links as well as the Internet. Future GO-TECH applications will include the establishment of "virtual corporations" consisting of consortiums of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica
2018-07-01
The big data approach offers a powerful alternative to evidence-based medicine. This approach could guide cancer management by applying machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed to thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres initially started the project and defined and signed the Thyroid CoBRA consortium agreement. Three data set tiers were identified: Registry, Procedures and Research. The CoBRA Storage System (C-SS) appeared not to be time-consuming and to respect privacy, as data can be extracted directly from each centre's storage platforms through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving can be performed directly from the hospital image storage system or the radiotherapy treatment planning systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive model definition and the further development of clinical decision support tools. The development of the Thyroid CoBRA data storage system (C-SS) through a multicentre consortium approach appeared to be a feasible way to set up a complex, privacy-preserving data-sharing system oriented to the management of thyroid cancer and, in the near future, every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
2013-01-01
Background Clinical Intelligence, as a research and engineering discipline, is dedicated to the development of tools for data analysis for the purposes of clinical research, surveillance, and effective health care management. Self-service ad hoc querying of clinical data is one desirable type of functionality. Since most of the data are currently stored in relational or similar form, ad hoc querying is problematic as it requires specialised technical skills and the knowledge of particular data schemas. Results A possible solution is semantic querying where the user formulates queries in terms of domain ontologies that are much easier to navigate and comprehend than data schemas. In this article, we are exploring the possibility of using SADI Semantic Web services for semantic querying of clinical data. We have developed a prototype of a semantic querying infrastructure for the surveillance of, and research on, hospital-acquired infections. Conclusions Our results suggest that SADI can support ad-hoc, self-service, semantic queries of relational data in a Clinical Intelligence context. The use of SADI compares favourably with approaches based on declarative semantic mappings from data schemas to ontologies, such as query rewriting and RDFizing by materialisation, because it can easily cope with situations when (i) some computation is required to turn relational data into RDF or OWL, e.g., to implement temporal reasoning, or (ii) integration with external data sources is necessary. PMID:23497556
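As a loose illustration of point (i), the need for computation when turning relational data into RDF, the following Python sketch derives a temporal `:overlapsStay` relation between hypothetical hospital stays. The table, the property names, and the tuple encoding of triples are all invented for illustration; they are not the SADI services or schemas from the study.

```python
from datetime import date

# Hypothetical relational rows: (patient id, admitted, discharged).
stays = [
    ("p1", date(2012, 1, 1), date(2012, 1, 10)),
    ("p2", date(2012, 1, 5), date(2012, 1, 8)),
    ("p3", date(2012, 2, 1), date(2012, 2, 3)),
]

def to_triples(rows):
    """Turn relational rows into RDF-style triples, including a derived
    :overlapsStay relation that needs computation (simple temporal
    reasoning) rather than a purely declarative column-to-property map."""
    triples = []
    for pid, start, end in rows:
        triples.append((pid, ":admitted", start.isoformat()))
        triples.append((pid, ":discharged", end.isoformat()))
    for a, s1, e1 in rows:
        for b, s2, e2 in rows:
            if a < b and s1 <= e2 and s2 <= e1:  # the two stays intersect
                triples.append((a, ":overlapsStay", b))
    return triples

triples = to_triples(stays)
print([t for t in triples if t[1] == ":overlapsStay"])
```

A declarative mapping could emit the `:admitted`/`:discharged` triples, but the overlap relation is exactly the kind of computed property a SADI-style service can supply on demand.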
Scales, David; Zelenev, Alexei; Brownstein, John S.
2013-01-01
Background This is the first study quantitatively evaluating the effect that media-related limitations have on data from an automated epidemic intelligence system. Methods We modeled time series of HealthMap's two main data feeds, Google News and Moreover, to test for evidence of two potential limitations: first, human resources constraints, and second, high-profile outbreaks “crowding out” coverage of other infectious diseases. Results Google News events declined by 58.3%, 65.9%, and 14.7% on Saturday, Sunday and Monday, respectively, relative to other weekdays. Events were reduced by 27.4% during Christmas/New Years weeks and 33.6% lower during American Thanksgiving week than during an average week for Google News. Moreover data yielded similar results with the addition of Memorial Day (US) being associated with a 36.2% reduction in events. Other holiday effects were not statistically significant. We found evidence for a crowd out phenomenon for influenza/H1N1, where a 50% increase in influenza events corresponded with a 4% decline in other disease events for Google News only. Other prominent diseases in this database – avian influenza (H5N1), cholera, or foodborne illness – were not associated with a crowd out phenomenon. Conclusions These results provide quantitative evidence for the limited impact of editorial biases on HealthMap's web-crawling epidemic intelligence. PMID:24206612
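The reductions reported above are percentage drops in mean daily event counts relative to ordinary weekdays. A minimal sketch of that arithmetic, using invented counts rather than HealthMap data, is:

```python
def percent_reduction(weekday_counts, target_counts):
    """Percent reduction of the mean daily event count over a target set
    of days (e.g. Sundays or a holiday week) relative to the mean over
    ordinary weekdays. All counts here are illustrative assumptions."""
    base = sum(weekday_counts) / len(weekday_counts)
    target = sum(target_counts) / len(target_counts)
    return round(100 * (base - target) / base, 1)

weekday = [120, 118, 125, 122]   # hypothetical Tue-Fri event counts
sunday = [40, 42, 44]            # hypothetical Sunday event counts
print(percent_reduction(weekday, sunday))
```

The study itself fits time-series models rather than simple means, but the reported percentages have this interpretation.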
A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text
ERIC Educational Resources Information Center
Nguyen, Bao-An; Yang, Don-Lin
2012-01-01
An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…
Intelligent Discovery for Learning Objects Using Semantic Web Technologies
ERIC Educational Resources Information Center
Hsu, I-Ching
2012-01-01
The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…
WINDS: A Web-Based Intelligent Interactive Course on Data-Structures
ERIC Educational Resources Information Center
Sirohi, Vijayalaxmi
2007-01-01
The Internet has opened new ways of learning and has brought several advantages to computer-aided education. Global access, self-paced learning, asynchronous teaching, interactivity, and multimedia usage are some of these. Along with the advantages comes the challenge of designing the software using the available facilities. Integrating online…
HelpfulMed: Intelligent Searching for Medical Information over the Internet.
ERIC Educational Resources Information Center
Chen, Hsinchun; Lally, Ann M.; Zhu, Bin; Chau, Michael
2003-01-01
Discussion of the information needs of medical professionals and researchers focuses on the architecture of a Web portal designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, and self-organizing map technologies to provide searchers with fine-grained results. Reports results of evaluation of spider algorithms…
Addressing Energy Poverty through Smarter Technology
ERIC Educational Resources Information Center
Oldfield, Eddie
2011-01-01
Energy poverty is a key detriment to labor productivity, economic growth, and social well-being. This article presents a qualitative review of literature on the potential role of intelligent communication technology, web-based standards, and smart grid technology to alleviate energy costs and improve access to clean distributed energy in developed…
The Cognitive Authority of Collective Intelligence
ERIC Educational Resources Information Center
Goldman, James L.
2010-01-01
Collaboration tools based on World Wide Web technologies now enable and encourage large groups of people who do not previously know one another, and who may share no other affiliation, to work together cooperatively and often anonymously on large information projects such as online encyclopedias and complex websites. Making use of information…
The Educator's Role in Preparing Visually Literate Learners
ERIC Educational Resources Information Center
Metros, Susan E.
2008-01-01
Contemporary culture has become increasingly dependent on the visual, especially for its capacity to communicate instantly and universally. Advances in technology fueled this shift. Students must learn to cope with and intelligently contribute to a culture rife with easy access to the visually rich Web, photo dependant social networks, video…
NASA Astrophysics Data System (ADS)
Kearney, K.; Aydin, K.
2016-02-01
Oceanic food webs are often depicted as network graphs, with the major organisms or functional groups displayed as nodes and the fluxes between them as the edges. However, the large number of nodes and edges and the high connectance of many management-oriented food webs, coupled with graph layout algorithms poorly suited to the desired characteristics of food web visualizations, often lead to hopelessly tangled diagrams that convey little information other than, "It's complex." Here, I combine several new graph visualization techniques, including a new node layout algorithm based on trophic similarity (a quantification of shared predators and prey) and trophic level, divided edge bundling for edge routing, and intelligent automated placement of labels, to create a much clearer visualization of the important fluxes through a food web. The technique will be used to highlight the differences in energy flow within three Alaskan Large Marine Ecosystems (the Bering Sea, Gulf of Alaska, and Aleutian Islands) that include very similar functional groups but unique energy pathways.
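The abstract defines trophic similarity only as a quantification of shared predators and prey. One plausible reading, sketched here with an invented toy food web (not the paper's exact metric), is a Jaccard similarity over each node's combined predator and prey sets:

```python
def trophic_similarity(web, a, b):
    """Jaccard similarity of the combined predator and prey sets of two
    nodes: one plausible reading of 'shared predator and prey', not the
    paper's exact formula. `web` maps predator -> set of prey."""
    def neighbours(node):
        prey = web.get(node, set())
        predators = {p for p, eaten in web.items() if node in eaten}
        return prey | predators
    na, nb = neighbours(a), neighbours(b)
    if not na and not nb:
        return 0.0
    return len(na & nb) / len(na | nb)

# Toy food web, invented for illustration.
web = {
    "orca": {"seal", "salmon"},
    "seal": {"salmon", "herring"},
    "salmon": {"herring", "krill"},
    "herring": {"krill"},
}
print(round(trophic_similarity(web, "seal", "salmon"), 2))
```

A layout algorithm could then place nodes with high similarity near each other horizontally while using trophic level for the vertical axis.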
Story Maps as an Effective Social Medium for Data Synthesis, Communication, and Dissemination
NASA Astrophysics Data System (ADS)
Wright, D. J.; Verrill, A.; Artz, M.; Deming, R.
2014-12-01
The story map is a new medium for sharing not only data, but also photos, videos, sounds, and maps, as a way to tell a specific and compelling story by way of that content. It is emerging as a popular and effective social medium. The user may employ some fairly sophisticated cartographic functionality without advanced training in cartography or GIS. Story maps are essentially web map applications built from web maps, which in turn are built from web-accessible data (including OGC WMS and WFS). This paper will emphasize the approaches and technologies of web-based GIS to tell "stories" about important connections among scientists, resource managers, and policy makers focused on oceans and coasts within the US, and how combining the new medium of "intelligent Web maps" with text, multimedia content, and intuitive user experiences has great potential to synthesize data and its primary interpretive message in order to inform, educate, and inspire about a wide variety of ocean science and policy issues.
tOWL: a temporal Web Ontology Language.
Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
2012-02-01
Through its interoperability and reasoning capabilities, the Semantic Web opens a realm of possibilities for developing intelligent systems on the Web. The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies, the cornerstone of the Semantic Web. However, up until now, no standard way of expressing time and time-dependent information in OWL has been provided. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL language (tOWL). Through a layered approach, we introduce three extensions: 1) concrete domains, which allow the representation of restrictions using concrete domain binary predicates; 2) temporal representation, which introduces time points, relations between time points, intervals, and Allen's 13 interval relations into the language; and 3) timeslices/fluents, which implement a perdurantist view on individuals and allow for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by using an example from the financial domain.
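Allen's 13 interval relations, which the temporal-representation layer introduces, can be made concrete with a small classifier. This is a textbook construction in Python, not tOWL syntax:

```python
def allen_relation(x, y):
    """Classify the relation between proper closed intervals x=(xs, xe)
    and y=(ys, ye) into one of Allen's 13 interval relations."""
    xs, xe = x
    ys, ye = y
    assert xs < xe and ys < ye, "intervals must be proper"
    if xe < ys: return "before"
    if ye < xs: return "after"
    if xe == ys: return "meets"
    if ye == xs: return "met-by"
    if xs == ys and xe == ye: return "equals"
    if xs == ys: return "starts" if xe < ye else "started-by"
    if xe == ye: return "finishes" if xs > ys else "finished-by"
    if ys < xs and xe < ye: return "during"
    if xs < ys and ye < xe: return "contains"
    return "overlaps" if xs < ys else "overlapped-by"

print(allen_relation((1, 5), (3, 8)))  # x begins first and the two intersect
```

In tOWL these relations become language-level constructs over time points and intervals rather than a runtime function, but the case analysis is the same.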
Marcus, Alfred C; Diefenbach, Michael A; Stanton, Annette L; Miller, Suzanne M; Fleisher, Linda; Raich, Peter C; Morra, Marion E; Perocchia, Rosemarie Slevin; Tran, Zung Vu; Bright, Mary Anne
2013-01-01
The authors describe 3 large randomized trials from the Cancer Information Service Research Consortium. Three web-based multimedia programs are being tested to help newly diagnosed prostate (Project 1) and breast cancer patients (Project 2) make informed treatment decisions and to help breast cancer patients prepare for life after treatment (Project 3). Project 3 also tests a telephone callback intervention delivered by a cancer information specialist. All participants receive standard print material specific to each project. Preliminary results from the 2-month follow-up interviews are reported for the initial wave of enrolled participants, most of whom were recruited from the Cancer Information Service (1-800-4-CANCER) telephone information program (Project 1: n = 208; Project 2: n = 340; Project 3: n = 792). Self-reported use of the multimedia program was 51%, 52%, and 67% for Projects 1, 2, and 3, respectively. Self-reported use of the print materials (read all, most, or some) was 90%, 85%, and 83% for Projects 1, 2, and 3, respectively. The callback intervention was completed by 92% of Project 3 participants. Among those using the Cancer Information Service Research Consortium interventions, perceived usefulness and benefit were high, and more than 90% reported that they would recommend them to other cancer patients. The authors present 5 initial lessons learned that may help inform future cancer communications research.
NASA Astrophysics Data System (ADS)
Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.
2017-12-01
The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms, including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating growth in data volumes and complexity while the amount of resource available to manage data remains static. The OceanIds Command and Control (C2) project aims to solve these issues by fully automating data archival, processing and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to deployment of vehicles, enabling automated data processing when vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The metadata exposure to the web builds on a prototype developed by the European Commission-supported SenseOCEAN project and uses open standards including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine-domain Everyone's Glider Observatory (EGO) format and OGC Observations and Measurements. Additional formats will be served by implementation of endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to OceanIds users, BODC users, operational users and big data systems.
The use of open standards will also enable web interfaces to be rapidly built on the API gateway and delivery to European research infrastructures that include aligned reference models for data infrastructure.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.
2017-12-01
Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the, often, destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. 
Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
Ghosh, Rajesh; Lewis, David
2015-01-01
The advent of new technologies in mobile devices and software applications is leading to an evolving change in the extent, geographies and modes of Internet use. Today, it is used not only for information gathering but for sharing experiences, opinions and suggestions. Web-RADR (Recognising Adverse Drug Reactions) is a groundbreaking three-year initiative, funded by the European Union (EU) Innovative Medicines Initiative, to recommend policies, frameworks, tools and methodologies that leverage these new developments to gain new insights in drug safety. Data were gathered from prior surveys and previous initiatives, and a review of relevant literature was conducted. New technologies provide an opportunity to change the way safety information is collected, helping generate new knowledge about the safety profile of drugs as well as unique insights into the evolving pharmacovigilance system in general. It is critical that these capabilities are harnessed in a way that is ethical, compliant with regulations, respectful of data privacy and used responsibly. At the same time, the process for managing and interpreting this new information must be efficient and effective for sustainability, thoughtful use of resources and a valuable return of knowledge. These approaches should complement the ongoing progress toward personalized medicine. The Web-RADR initiative should provide some direction on 'what and how' to use social media to further proactive pharmacovigilance and the protection of public health. It is expected to also show how a multipronged expert consortium comprising regulators, industry and academia can leverage new developments in technology and society to bring innovation in process, operations, organization and scientific approaches across its boundaries and beyond the normal realms of individual research units. These new approaches should bring insights that are faster, earlier, more specific and more actionable, moving toward the target of adverse event (AE) prevention.
The possibilities of a blended, targeted pharmacovigilance (PV) approach, in which boundaries between stakeholders blur and cultures mix, point to a very different future of better, healthier and longer lives.
A flexible geospatial sensor observation service for diverse sensor data based on Web service
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Yu, Genong; Min, Min
Achieving a flexible and efficient geospatial Sensor Observation Service (SOS) is difficult, given the diversity of sensor networks, the heterogeneity of sensor data storage, and the differing requirements of users. This paper describes the development of a service-oriented multi-purpose SOS framework. The goal is to create a single method of access to the data by integrating the sensor observation service with other Open Geospatial Consortium (OGC) services: Catalogue Service for the Web (CSW), Transactional Web Feature Service (WFS-T) and Transactional Web Coverage Service (WCS-T). The framework includes an extensible sensor data adapter, an OGC-compliant geospatial SOS, a geospatial catalogue service, a WFS-T and a WCS-T for the SOS, and a geospatial sensor client. The extensible sensor data adapter finds, stores, and manages sensor data from live sensors, sensor models, and simulation systems. Abstract factory design patterns are used during design and implementation. A sensor observation service compatible with the OGC Sensor Web Enablement (SWE) specifications is designed, following the OGC "core" and "transaction" specifications. It is implemented using Java servlet technology. It can be easily deployed in any Java servlet container and automatically exposed for discovery using the Web Service Description Language (WSDL). Interaction sequences between a Sensor Web data consumer and an SOS, between a producer and an SOS, and between an SOS and a CSW are described in detail. The framework has been successfully demonstrated in application scenarios for EO-1 observations, weather observations, and water height gauge observations.
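The abstract notes that abstract factory design patterns underpin the extensible sensor data adapter. A registry-based sketch in that spirit (class and method names are illustrative assumptions, not the paper's Java API) might look like:

```python
from abc import ABC, abstractmethod

class SensorDataAdapter(ABC):
    """Common interface an SOS framework could program against.
    Names and payloads here are invented for illustration."""
    @abstractmethod
    def fetch_observations(self):
        ...

class LiveSensorAdapter(SensorDataAdapter):
    """Adapter for live sensors (stubbed data for the sketch)."""
    def fetch_observations(self):
        return [{"source": "live", "value": 21.5}]

class ModelOutputAdapter(SensorDataAdapter):
    """Adapter for sensor models or simulation systems (stubbed)."""
    def fetch_observations(self):
        return [{"source": "model", "value": 20.9}]

class AdapterFactory:
    """Factory registry: the service asks for an adapter by sensor kind
    and receives the matching concrete class, so new sources can be
    added without changing the calling code."""
    _registry = {"live": LiveSensorAdapter, "model": ModelOutputAdapter}

    @classmethod
    def create(cls, kind):
        return cls._registry[kind]()

adapter = AdapterFactory.create("live")
print(adapter.fetch_observations()[0]["source"])
```

The payoff of the pattern is that live sensors, sensor models, and simulation systems all present the same interface to the SOS layer.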
QMachine: commodity supercomputing in web browsers.
Wilkinson, Sean R; Almeida, Jonas S
2014-06-09
Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
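QM's client library is said to provide high-level distribution templates including MapReduce. As a rough sketch of the template's shape, run sequentially in-process here (whereas QM would post each mapper call as an HTTP task for a volunteer browser), applied to a toy stand-in for the suffix analysis:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal MapReduce template in the spirit of QM's client library.
    Tasks run sequentially in this sketch; in QM each mapper call would
    be a posted task executed by a volunteer browser."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy stand-in for shared-suffix analysis: count sequences per 5-mer suffix.
sequences = ["GATTACA", "TTACA", "CATTACA", "GGGTACA"]
result = map_reduce(
    sequences,
    mapper=lambda seq: [(seq[-5:], 1)],        # emit each sequence's suffix
    reducer=lambda key, values: sum(values),   # count sequences per suffix
)
print(result)
```

The real analysis over 20 S. pneumoniae genomes is far larger, but the map (emit suffix keys) and reduce (aggregate per key) roles are the same.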
NASA Astrophysics Data System (ADS)
Das, I.; Oberai, K.; Sarathi Roy, P.
2012-07-01
Landslides exhibit themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spreading and reaching out of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, since proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front-end application, and PostgreSQL with the PostGIS extension serves as the back-end application for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings an understanding of landslides and the resulting damage closer to the affected people and user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
Bee Swarm Optimization for Medical Web Information Foraging.
Drias, Yassine; Kechid, Samir; Pasi, Gabriella
2016-02-01
The present work is related to Web intelligence, and more precisely to medical information foraging. We present a novel approach to information foraging based on agent technology. An architecture is proposed in which we distinguish two important phases. The first is a learning process for localizing the most relevant pages that might interest the user; this is performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to account for the changes the Web undergoes. The whole system offers a tool to help the user undertake information foraging. We implemented the system using a group of cooperative reactive agents, more precisely a colony of artificial bees. In order to validate our proposal, experiments were conducted on MedlinePlus, a benchmark dedicated to research in the health domain. The results are promising, both for those related to Web regularities and for the response time, which is very short and hence complies with the real-time constraint.
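The bee colony is described only at a high level. The following is a bare-bones sketch of the generic artificial bee colony metaheuristic that such systems build on, here minimizing a toy function rather than the paper's foraging-specific relevance objective:

```python
import random

def abc_minimize(f, dim, bounds, n_bees=20, limit=10, iters=200, seed=42):
    """Bare-bones artificial bee colony minimizer (generic metaheuristic,
    not the paper's algorithm). Employed bees perturb food sources,
    onlookers revisit good sources more often, and scouts replace sources
    abandoned after `limit` failed improvements."""
    rng = random.Random(seed)
    lo, hi = bounds
    def new_source():
        return [rng.uniform(lo, hi) for _ in range(dim)]
    sources = [new_source() for _ in range(n_bees)]
    fits = [f(s) for s in sources]
    trials = [0] * n_bees
    best_i = min(range(n_bees), key=fits.__getitem__)
    best_x, best_f = sources[best_i][:], fits[best_i]

    def try_improve(i):
        nonlocal best_x, best_f
        k = rng.randrange(n_bees)          # random partner source
        j = rng.randrange(dim)             # random dimension to perturb
        cand = sources[i][:]
        cand[j] += rng.uniform(-1, 1) * (cand[j] - sources[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        fc = f(cand)
        if fc < fits[i]:                   # greedy selection
            sources[i], fits[i], trials[i] = cand, fc, 0
            if fc < best_f:
                best_x, best_f = cand[:], fc
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_bees):            # employed bee phase
            try_improve(i)
        for i in range(n_bees):            # onlooker phase: favour good sources
            if rng.random() < (best_f + 1e-12) / (fits[i] + 1e-12):
                try_improve(i)
        for i in range(n_bees):            # scout phase
            if trials[i] > limit:
                sources[i] = new_source()
                fits[i], trials[i] = f(sources[i]), 0
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)
best, value = abc_minimize(sphere, dim=2, bounds=(-5, 5))
print(round(value, 3))
```

In the foraging setting, a "food source" would be a candidate set of pages and the objective a relevance score, with the incremental second phase rerunning the colony from the previous phase's sources.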
RNAcentral: an international database of ncRNA sequences
Williams, Kelly Porter
2014-10-28
The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.
Lights Out Operations of a Space, Ground, Sensorweb
NASA Technical Reports Server (NTRS)
Chien, Steve; Tran, Daniel; Johnston, Mark; Davies, Ashley Gerard; Castano, Rebecca; Rabideau, Gregg; Cichy, Benjamin; Doubleday, Joshua; Pieri, David; Scharenbroich, Lucas;
2008-01-01
We have been operating an autonomous, integrated sensorweb linking numerous space and ground sensors in 24/7 operations since 2004. This sensorweb includes elements of space data acquisition (MODIS, GOES, and EO-1), space asset retasking (EO-1), integration of data acquired from ground sensor networks with on-demand ground processing of data into science products. These assets are being integrated using web service standards from the Open Geospatial Consortium. Future plans include extension to fixed and mobile surface and subsurface sea assets as part of the NSF's ORION Program.
Prince, Lillian; Chappelle, Wayne L; McDonald, Kent D; Goodman, Tanya; Cowper, Sara; Thompson, William
2015-03-01
The goal of this study was to assess the main sources of occupational stress, as well as self-reported symptoms of distress and post-traumatic stress disorder, among U.S. Air Force (USAF) Distributed Common Ground System (DCGS) intelligence exploitation and support personnel. DCGS intelligence operators (n = 1091) and nonintelligence personnel (n = 447) assigned to a USAF Intelligence, Surveillance, and Reconnaissance Wing responded to the web-based survey. The overall survey response rate was 31%. Study results revealed that the most problematic stressors among DCGS intelligence personnel included high workload, low manning, and organizational leadership and shift work issues. Results also revealed that 14.35% of DCGS intelligence operators self-reported high levels of psychological distress (twice the rate of DCGS nonintelligence support personnel). Furthermore, 2.0% to 2.5% self-reported high levels of post-traumatic stress disorder symptoms, with no significant difference between groups. The implications of these findings are discussed along with recommendations for USAF medical and mental health providers, as well as operational leadership. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Introducing the PRIDE Archive RESTful web services.
Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio
2015-07-01
The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
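Programmatic access of the kind described above amounts to constructing HTTP query URLs against the archive's REST base URL. A minimal sketch, assuming hypothetical endpoint paths and parameter names (only the base URL comes from the abstract; the `project/list`, `query`, and `speciesFilter` names are illustrative, not a verified API reference):

```python
from urllib.parse import urlencode

# Base URL taken from the abstract; everything after it is an assumption
# made for illustration, not documented PRIDE Archive API detail.
BASE = "http://www.ebi.ac.uk/pride/ws/archive"

def project_search_url(query, species=None, page=0, size=10):
    """Build a hypothetical project-search URL for a REST web service."""
    params = {"query": query, "page": page, "show": size}
    if species:
        # Filtering by sample metadata, e.g. species, as the abstract describes.
        params["speciesFilter"] = species
    return f"{BASE}/project/list?{urlencode(params)}"

url = project_search_url("proteome", species="Homo sapiens")
```

No login is required by the service, so a plain HTTP GET of such a URL would return JSON results.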
Signell, Richard; Camossi, E.
2016-01-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
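The NetCDF Markup Language (NcML) aggregation step mentioned above can be as small as the following fragment, which presents several existing output files as one virtual dataset for the THREDDS Data Server. The `joinExisting` aggregation semantics are standard NcML; the file names and dimension name are illustrative assumptions:

```xml
<!-- Minimal NcML aggregation: joins model output files along the time
     dimension so THREDDS can serve them as a single virtual dataset.
     File names and the dimension name are hypothetical examples. -->
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
  <aggregation dimName="time" type="joinExisting">
    <netcdf location="ocean_his_0001.nc"/>
    <netcdf location="ocean_his_0002.nc"/>
  </aggregation>
</netcdf>
```

The modeller's original files stay untouched; only this small wrapper is added on the server side.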
Interoperability in planetary research for geospatial data analysis
NASA Astrophysics Data System (ADS)
Hare, Trent M.; Rossi, Angelo P.; Frigeri, Alessandro; Marmo, Chiara
2018-01-01
For more than a decade there has been a push in the planetary science community to support interoperable methods for accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized geospatial image formats, geologic mapping conventions, U.S. Federal Geographic Data Committee (FGDC) cartographic and metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter include defined standards such as the OGC Web Map Services (simple image maps), Web Map Tile Services (cached image tiles), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they can be just as valuable for the planetary domain. Another initiative, called VESPA (Virtual European Solar and Planetary Access), will marry several of the above geoscience standards with astronomy-based standards defined by the International Virtual Observatory Alliance (IVOA). This work outlines the current state of interoperability initiatives in use or under research within the planetary geospatial community.
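All of the OGC services listed above share the same key-value-pair request pattern. A sketch of a WMS 1.3.0 GetMap request illustrates it; the parameter names follow the WMS standard, while the server URL and layer name are hypothetical:

```python
from urllib.parse import urlencode

# Sketch of an OGC WMS 1.3.0 GetMap request in key-value-pair form.
# Parameter names come from the WMS spec; the endpoint and layer name
# are invented for illustration.
def getmap_url(base, layer, bbox, width=512, height=512, crs="EPSG:4326"):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # min/max in CRS axis order
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base}?{urlencode(params)}"

url = getmap_url("https://planetarymaps.example/wms", "mars_mola_dem",
                 (-90, -180, 90, 180))
```

WMTS, WFS, WCS and CSW requests differ mainly in the `SERVICE`/`REQUEST` values and the shape of the response (tiles, features, coverages, or catalogue records).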
Borderless Geospatial Web (BOLEGWEB)
NASA Astrophysics Data System (ADS)
Cetl, V.; Kliment, T.; Kliment, M.
2016-06-01
Effective access to and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate the discovery and use of geospatial data. These data are stored in databases residing in a layer called the invisible web, and are therefore ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which differ structurally from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, enriching its information extent. A public, global, user-friendly portal of OGC resources available on the web ensures and enhances the use of GI in a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opening its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to finish by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.
A Web-based Visualization System for Three Dimensional Geological Model using Open GIS
NASA Astrophysics Data System (ADS)
Nemoto, T.; Masumoto, S.; Nonogaki, S.
2017-12-01
A three-dimensional geological model is important information in fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer renders horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions is enabled through PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map; a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram from various angles by mouse operation. WebGL, a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software, is utilized for 3D visualization. The geological boundary surfaces can be downloaded so that the geologic structure can be incorporated into CAD designs and simulation models. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
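The "click two points, get a vertical cross section" interaction described above maps naturally onto a WPS Execute request sent to the PyWPS endpoint. A minimal sketch in WPS 1.0.0 key-value-pair form, where the endpoint URL, process identifier, and input names are all illustrative assumptions rather than the system's actual interface:

```python
from urllib.parse import urlencode

# Hypothetical WPS 1.0.0 Execute request (KVP encoding). The KVP parameter
# names (service, version, request, identifier, datainputs) follow the WPS
# standard; the endpoint, process name and inputs are invented examples.
def wps_execute_url(base, process, **inputs):
    datainputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process,
        "datainputs": datainputs,
    }
    return f"{base}?{urlencode(params)}"

# Start/end coordinates stand in for the two points clicked on the map.
url = wps_execute_url("https://example.org/pywps", "vertical_cross_section",
                      start="139.7,35.6", end="139.9,35.7")
```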
NASA Astrophysics Data System (ADS)
Tisdale, M.
2016-12-01
NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility and interoperability, meeting requirements driven by diversifying government, private, public and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Map Services (WMS) and OGC Web Coverage Services (WCS), and is leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams and the ASDC are utilizing these services, developing applications using the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript, and evaluating restructuring their data production and access scripts within the ArcGIS Python Toolbox framework and Geoprocessing service environment. These capabilities yield greater usage and exposure of ASDC data holdings and provide improved geospatial analytical tools for mission-critical understanding of the Earth's radiation budget, clouds, aerosols, and tropospheric chemistry.
OWGIS 2.0: An Open Source Java Application that Builds Web GIS Interfaces for Desktop and Mobile Devices
NASA Astrophysics Data System (ADS)
Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.
2016-12-01
OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using Compass, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. The application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS supports animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; data download; and text display in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.
Assuring the privacy and security of transmitting sensitive electronic health information.
Peng, Charlie; Kesarinath, Gautam; Brinks, Tom; Young, James; Groves, David
2009-11-14
The interchange of electronic health records between healthcare providers and public health organizations has become an increasingly desirable tool in reducing healthcare costs, improving healthcare quality, and protecting population health. Assuring privacy and security in nationwide sharing of Electronic Health Records (EHR) in an environment such as GRID has become a top challenge and concern. The Centers for Disease Control and Prevention (CDC) and the Science Applications International Corporation (SAIC) have jointly conducted a proof-of-concept study to build a common secure and reliable messaging platform (the SRM Platform) to address this challenge. The SRM Platform is built on the open standards of OASIS, World Wide Web Consortium (W3C) web-services standards, and Web Services Interoperability (WS-I) specifications to provide the secure transport of sensitive EHR or electronic medical records (EMR). Transmitted data may be in any digital form, including text, data, and binary files such as images. This paper identifies the business use cases, architecture, test results, and new connectivity options for disparate health networks among PHIN, NHIN, Grid, and others.
A Cybernetic Design Methodology for 'Intelligent' Online Learning Support
NASA Astrophysics Data System (ADS)
Quinton, Stephen R.
The World Wide Web (WWW) provides learners and knowledge workers convenient access to vast stores of information, so much so that present methods for refinement of a query or search result are inadequate - there is far too much potentially useful material. The problem often encountered is that users usually do not recognise what may be useful until they have progressed some way through the discovery, learning, and knowledge acquisition process. Additional support is needed to structure and identify potentially relevant information, and to provide constructive feedback. In short, support for learning is needed. The learning envisioned here is not simply the capacity to recall facts or to recognise objects. The focus is on learning that results in the construction of knowledge. Although most online learning platforms are efficient at delivering information, most do not provide tools that support learning as envisaged in this chapter. It is conceivable that Web-based learning environments can incorporate software systems that assist learners to form new associations between concepts and synthesise information to create new knowledge. This chapter details the rationale and theory behind a research study that aims to evolve Web-based learning environments into 'intelligent thinking' systems that respond to natural language human input. Rather than functioning simply as a means of delivering information, it is argued that online learning solutions will one day interact directly with students to support their conceptual thinking and cognitive development.
ERIC Educational Resources Information Center
García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny
2017-01-01
Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…
Intelligent Information Retrieval and Web Mining Architecture Using SOA
ERIC Educational Resources Information Center
El-Bathy, Naser Ibrahim
2010-01-01
The study of this dissertation provides a solution to a very specific problem instance in the area of data mining, data warehousing, and service-oriented architecture in publishing and newspaper industries. The research question focuses on the integration of data mining and data warehousing. The research problem focuses on the development of…
Efficacy of an ICALL Tutoring System and Process-Oriented Corrective Feedback
ERIC Educational Resources Information Center
Choi, Inn-Chull
2016-01-01
A Web-based form-focused intelligent computer-assisted language learning (ICALL) tutoring system equipped with a process-oriented corrective feedback function was developed to investigate the extent to which such a program may serve as a viable method of teaching grammar to Korean secondary and elementary students. The present study was also…
33 CFR 72.05-10 - Free distribution.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Geospatial-Intelligence Agency's Web site: (http://pollux.nss.nima.mil/pubs/USCGLL/pubs_j_uscgll_list.html). (R.S. 501, as amended, sec. 5, 38 Stat. 75; 44 U.S.C. 82, 84) [CGFR 51-15, 18 FR 13, Jan. 1, 1953, as amended by USCG-2001-10714, 69 FR 24984, May 5, 2004] ...
ERIC Educational Resources Information Center
Raitt, David I., Ed.; Jeapes, Ben, Ed.
This proceedings volume contains 68 papers. Subjects addressed include: access to information; the future of information managers/librarians; intelligent agents; changing roles of library users; disintermediation; Internet review sites; World Wide Web (WWW) search engines; Java; online searching; future of online education; integrated information…
A Pan-European Survey Leading to the Development of WITS.
ERIC Educational Resources Information Center
Mullins, Roisin; Duan, Yanqing; Hamblin, David
2001-01-01
Describes a study of the training needs of small- and medium-sized enterprises in relation to the Internet, electronic commerce, and electronic data interchange in the United Kingdom, Poland, Slovak Republic, Germany, and Portugal. Discusses the development of a Web-based intelligent training system (WITS) as a result of the study. (Author/LRW)
More Efficient Learning on Web Courseware Systems?
ERIC Educational Resources Information Center
Zufic, Janko; Kalpic, Damir
2007-01-01
The article describes research conducted on students at the University of Pula, which attempted to establish whether there is a relationship between exam success and the type of online teaching material from which a student learns. Students were subjected to psychological testing that measured factors of intelligence: verbal, non-verbal and…
How the Embrace of MOOC's Could Hurt Middle America
ERIC Educational Resources Information Center
Graham, Greg
2012-01-01
Sebastian Thrun gave up tenure at Stanford University after 160,000 students signed up for his free online version of the course "Introduction to Artificial Intelligence." The experience completely changed his perspective on education, he said, so he ditched teaching at Stanford and launched the private Web site Udacity, which offers…
The "ICP OnLine": "Jeux sans frontieres" on the CyberCampus.
ERIC Educational Resources Information Center
Hutchison, Chris
1995-01-01
Focuses on an ICP (Inter-University Cooperation Programme) OnLine in the area of Informatics/Artificial Intelligence. Notes that ICP is accessed through the World Wide Web and was launched in the Summer of 1994 to provide "virtual mobility." Discusses the program's objectives, student experiences, and the risks and opportunities afforded by…
76 FR 22625 - Reporting of Security Issues
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-22
...) Accessing the Government Printing Office's Web page at http://www.gpoaccess.gov/fr/index.html ; or (3... violations, threat information or criminal activities, vulnerabilities and intelligence was put in place...://data.bls.gov/cgi-bin/print.pl/oes/2009/may/naics2_48-49.htm and http://www.bls.gov/cpi/cpid1012.pdf...
Global Test Range: Toward Airborne Sensor Webs
NASA Technical Reports Server (NTRS)
Mace, Thomas H.; Freudinger, Larry; DelFrate, John H.
2008-01-01
This viewgraph presentation reviews the planned global sensor network that will monitor the Earth's climate and resources using airborne sensor systems. The vision is an intelligent, affordable Earth Observation System. Global Test Range is a lab developing trustworthy services for airborne instruments, acting as a specialized Internet Service Provider. Several current and planned missions are discussed.
Schulman-Green, Dena; Ercolano, Elizabeth; Lacoursiere, Sheryl; Ma, Tony; Lazenby, Mark; McCorkle, Ruth
2011-06-01
Institute of Medicine reports have identified gaps in health care professionals' knowledge of palliative and end-of-life care, recommending improved education. Our purpose was to develop and administer a Web-based survey to identify the educational needs of multidisciplinary health care professionals who provide this care in Connecticut, in order to inform educational initiatives. We developed an 80-item survey and recruited participants through the Internet and in person. Descriptive and correlational statistics were calculated on 602 surveys. Disciplines reported greater agreement on items related to their routine tasks. Reported needs included dealing with cultural and spiritual matters and having supportive resources at work. Focus groups confirmed results that are consistent with National Consensus Project guidelines for quality palliative care and indicate the End-of-Life Nursing Education Consortium modules for education.
EXP-PAC: providing comparative analysis and storage of next generation gene expression data.
Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe
2012-07-01
Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store these data is becoming a key issue. In response, we have developed EXP-PAC, a web-based software package for storage, management and analysis of gene expression and sequence data. Unique to this package are SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data, and analysis of gene expression data across experiments and species. The package has been populated with lactation data in the International Milk Genomics Consortium web portal (http://milkgenomics.org/). Source code is also available and can be hosted on a Windows, Linux or Mac Apache server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
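The cross-experiment SQL querying described above can be illustrated with a tiny in-memory database. This is a minimal sketch in the spirit of EXP-PAC's approach, not its actual schema: the table layout, gene names, and values are all hypothetical.

```python
import sqlite3

# Hypothetical expression table: one row per (gene, experiment) measurement.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE expression (
    gene TEXT, experiment TEXT, species TEXT, level REAL)""")
conn.executemany(
    "INSERT INTO expression VALUES (?, ?, ?, ?)",
    [("CSN2", "lactation_d7", "Bos taurus", 8.4),
     ("CSN2", "lactation_d30", "Bos taurus", 9.1),
     ("LALBA", "lactation_d7", "Bos taurus", 7.2)])

# Comparing one gene's expression across experiments is a plain SQL query.
rows = conn.execute(
    "SELECT experiment, level FROM expression "
    "WHERE gene = ? ORDER BY level DESC", ("CSN2",)).fetchall()
# rows → [("lactation_d30", 9.1), ("lactation_d7", 8.4)]
```

The same pattern extends to cross-species comparison by grouping on the `species` column.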
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents, such as databases, for invasive species. The second method is to create a central cache of data harvested from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility users require, and that a central cache of the data is needed to improve performance.
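The trade-off between the two options can be sketched in a few lines: a federated search contacts every remote provider at query time, while a central cache is harvested once and answered locally. The dictionaries below are toy stand-ins for the remote databases; species names are real examples of invasives, everything else is illustrative.

```python
# Toy sketch of the two integration strategies. Each dict stands in for a
# remote provider's database behind a web-service endpoint.
remote_dbs = [
    {"Tamarix ramosissima": "saltcedar"},
    {"Dreissena polymorpha": "zebra mussel"},
    {"Pueraria montana": "kudzu"},
]

def federated_search(name):
    """Query every remote provider at request time: one call per provider,
    so latency grows with the number (and slowness) of providers."""
    hits = []
    for db in remote_dbs:
        if name in db:          # stands in for a remote web-service call
            hits.append(db[name])
    return hits

# Central cache: harvest all providers once (offline), answer locally after.
cache = {}
for db in remote_dbs:
    cache.update(db)

def cached_search(name):
    """Single local lookup, independent of provider count or availability."""
    return [cache[name]] if name in cache else []
```

Both return the same results; the difference is that the cached lookup costs one dictionary access per query instead of one round trip per provider, which is the performance argument the abstract makes.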
The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience
Akil, Huda; Ascoli, Giorgio A.; Bowden, Douglas M.; Bug, William; Donohue, Duncan E.; Goldberg, David H.; Grafstein, Bernice; Grethe, Jeffrey S.; Gupta, Amarnath; Halavi, Maryam; Kennedy, David N.; Marenco, Luis; Martone, Maryann E.; Miller, Perry L.; Müller, Hans-Michael; Robert, Adrian; Shepherd, Gordon M.; Sternberg, Paul W.; Van Essen, David C.; Williams, Robert W.
2009-01-01
With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line. PMID:18946742
GMZ: A GML Compression Model for WebGIS
NASA Astrophysics Data System (ADS)
Khandelwal, A.; Rajan, K. S.
2017-09-01
Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, the simple features profile, coverages, etc. The Simple Features Profile (SFP) is a simpler subset of GML with support for point, line and polygon geometries, constructed to cover the most commonly used GML geometries. The Web Feature Service (WFS) serves query results in SFP by default, but SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work on SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.
Desirable attributes of public educational websites.
Whitbeck, Caroline
2005-07-01
Certain attributes are particularly desirable for public educational websites, and websites for ethics education in particular. Among the most important of these attributes is wide accessibility through adherence to the World Wide Web Consortium (W3C) standards for HTML code. Adherence to this standard produces webpages that can be rendered by a full range of web browsers, including Braille and speech browsers. Although almost no academic websites, including ethics websites, and even fewer commercial websites are accessible by W3C standards, as illustrated by the Online Ethics Center for Engineering and Science
Electronic doors to education: study of high school website accessibility in Iowa.
Klein, David; Myhill, William; Hansen, Linda; Asby, Gary; Michaelson, Susan; Blanck, Peter
2003-01-01
The Americans with Disabilities Act (ADA) and Sections 504 and 508 of the Rehabilitation Act prohibit discrimination against people with disabilities in all aspects of daily life, including education, work, and access to places of public accommodation. Increasingly, these antidiscrimination laws are used by persons with disabilities to ensure equal access to e-commerce and to private and public Internet websites. To help assess the impact of the anti-discrimination mandate for educational communities, this study examined 157 website home pages of Iowa public high schools (52% of high schools in Iowa) in terms of their electronic accessibility for persons with disabilities. We predicted that accessibility problems would limit students and others in obtaining information from the web pages, as well as limiting their ability to navigate to other web pages. Findings show that although many web pages examined included information in accessible formats, none of the home pages met World Wide Web Consortium (W3C) standards for accessibility. The most frequent accessibility problem was lack of alternative text (ALT tags) for graphics. Technical sophistication built into pages was found to reduce accessibility. Implications are discussed for schools and educational institutions, and for laws, policies, and procedures on website accessibility. Copyright 2003 John Wiley & Sons, Ltd.
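The study's most frequent finding, missing ALT text, is mechanically checkable. A minimal sketch of such a check using only the standard library (the sample page is invented; real accessibility audits cover many more W3C criteria than this one):

```python
from html.parser import HTMLParser

# Minimal checker: count <img> elements that lack a non-empty "alt"
# attribute, the most frequent accessibility problem the study reports.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing_alt += 1

# Hypothetical home-page fragment: one accessible image, one not.
page = ('<html><body>'
        '<img src="logo.gif">'
        '<img src="map.png" alt="Campus map">'
        '</body></html>')
checker = AltTextChecker()
checker.feed(page)
# checker.missing_alt is 1: the logo image has no alternative text
```

Note that an empty `alt=""` is deliberately flagged here too; in a real audit, empty ALT is acceptable for purely decorative images, so a production checker would need more nuance.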
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2016-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.
NASA Astrophysics Data System (ADS)
Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.
2014-06-01
PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) hyperspectral data. It is part of the EarthServer project, which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The WebGIS client, built mainly on the OpenLayers JavaScript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step and tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies are further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
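Such WCPS requests are plain query strings posted to the service endpoint. A minimal sketch of building one is shown below; the coverage name and axis labels are hypothetical assumptions, since a real deployment defines its own.

```python
# Sketch: constructing an OGC WCPS query for a spectral subset of a
# hyperspectral coverage, in the style a PlanetServer-like client might use.
# The coverage name "frt000097e2" and the axis labels x/y/band are
# illustrative assumptions, not PlanetServer's actual identifiers.

def build_band_average_query(coverage, x, y, band_lo, band_hi):
    """Return a WCPS query averaging a band range at one pixel."""
    return (
        f"for c in ({coverage}) "
        f"return avg(c[x({x}), y({y}), band({band_lo}:{band_hi})])"
    )

query = build_band_average_query("frt000097e2", 120, 245, 10, 20)
```

The resulting string would typically be sent to the WCPS endpoint as an HTTP request parameter.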
Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina
2009-01-01
Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers, demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information of those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal as a web-based tool born out of this experience amends some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.
Decision Facilitator for Launch Operations using Intelligent Agents
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar; Bardina, Jorge
2005-01-01
Launch operations require millions of micro-decisions which contribute to the macro decision of 'Go/No-Go' for a launch. Knowledge workers (such as managers and technical professionals) need information in a timely, precise manner, as it can greatly affect mission success. The intelligent agent (a web search agent) examines the words of hypertext markup language documents retrieved over the Internet; its actions determine whether its goal of finding a website containing a specified target (e.g., a keyword or phrase) has been met. A few parameters, such as 'Go' and 'No-Go', must be defined for the keyword search. Instead of visiting launch and range decision-making servers individually, the decision facilitator constantly connects to all servers, accumulating decisions so that the final decision can be made in a timely manner. The facilitator agent uses the singleton design pattern, which ensures that only a single instance of the facilitator agent exists at one time. Negotiations can proceed between many agents, resulting in a final decision. This paper describes the details of the intelligent agents and their interaction to derive a unified decision support system.
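The singleton facilitator described above can be sketched as follows; the server names and the conservative aggregation rule are illustrative assumptions, not the system's actual implementation.

```python
# Sketch of a singleton decision facilitator: it accumulates per-server
# micro-decisions and derives the macro Go/No-Go. Server names and the
# aggregation rule (any No-Go blocks launch) are invented for illustration.

class DecisionFacilitator:
    _instance = None

    def __new__(cls):
        # Singleton: only one facilitator instance ever exists.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.decisions = {}
        return cls._instance

    def report(self, server, decision):
        self.decisions[server] = decision

    def final_decision(self):
        # Conservative rule: any No-Go (or no reports at all) blocks launch.
        if self.decisions and all(d == "Go" for d in self.decisions.values()):
            return "Go"
        return "No-Go"

f1 = DecisionFacilitator()
f2 = DecisionFacilitator()      # same instance as f1, per the pattern
f1.report("range_safety", "Go")
f1.report("weather", "Go")
```

Because every caller receives the same instance, decisions reported anywhere in the system accumulate in one place.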
NASA Astrophysics Data System (ADS)
Wahyudin; Riza, L. S.; Putro, B. L.
2018-05-01
E-learning, learning conducted online with familiar tools, is favoured by students. The use of computer media in learning provides a benefit not offered by other learning media: the ability of the computer to interact individually with each student. A weakness of many learning media, however, is the assumption that all students have uniform ability, which in reality is not the case. The concept of an Intelligent Tutorial System (ITS) combined with a cyberblog application can overcome this neglect of diversity. An ITS-based cyberblog application is a web-based interactive program that implements artificial intelligence and can serve as a learning and evaluation medium in the learning process. The use of an ITS-based cyberblog in learning is an attractive alternative learning medium that helps students measure their understanding of the material. This research is concerned with improving students' logical thinking ability, especially in algorithm subjects.
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be performed simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
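As an illustration only (not the consortium's actual algorithm), statistics-driven exposure modulation of this general kind might look like the sketch below; the variance thresholds and scaling are invented assumptions.

```python
# Illustrative sketch of statistics-driven exposure modulation: reduce dose
# over low-variance (featureless) tissue regions and keep full dose where
# local variance suggests structure. The thresholds `low` and `high` and
# the 0.5x minimum dose are assumed values, not the I-ImaS parameters.

def modulated_exposure(pixel_values, base_exposure=1.0, low=50.0, high=400.0):
    """Scale exposure by the variance of a previously imaged region."""
    n = len(pixel_values)
    mean = sum(pixel_values) / n
    var = sum((v - mean) ** 2 for v in pixel_values) / n
    if var < low:            # flat region: cut the dose
        return 0.5 * base_exposure
    if var > high:           # structured region: full dose
        return base_exposure
    # linear ramp between the two thresholds
    frac = (var - low) / (high - low)
    return (0.5 + 0.5 * frac) * base_exposure

flat = modulated_exposure([100.0] * 16)            # zero variance region
busy = modulated_exposure([0.0, 100.0] * 8)        # high variance region
```

In a real-time system this function would sit in the feedback loop between the leading sensor rows and the beam control.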
Green, Cynthia L; Kligfield, Paul; George, Samuel; Gussak, Ihor; Vajdic, Branislav; Sager, Philip; Krucoff, Mitchell W
2012-03-01
The Cardiac Safety Research Consortium (CSRC) provides both "learning" and blinded "testing" digital electrocardiographic (ECG) data sets from thorough QT (TQT) studies annotated for submission to the US Food and Drug Administration (FDA) to developers of ECG analysis technologies. This article reports the first results from a blinded testing data set that examines developer reanalysis of original sponsor-reported core laboratory data. A total of 11,925 anonymized ECGs including both moxifloxacin and placebo arms of a parallel-group TQT in 181 subjects were blindly analyzed using a novel ECG analysis algorithm applying intelligent automation. Developer-measured ECG intervals were submitted to CSRC for unblinding, temporal reconstruction of the TQT exposures, and statistical comparison to core laboratory findings previously submitted to FDA by the pharmaceutical sponsor. Primary comparisons included baseline-adjusted interval measurements, baseline- and placebo-adjusted moxifloxacin QTcF changes (ddQTcF), and associated variability measures. Developer and sponsor-reported baseline-adjusted data were similar with average differences <1 ms for all intervals. Both developer- and sponsor-reported data demonstrated assay sensitivity with similar ddQTcF changes. Average within-subject SD for triplicate QTcF measurements was significantly lower for developer- than sponsor-reported data (5.4 and 7.2 ms, respectively; P < .001). The virtually automated ECG algorithm used for this analysis produced similar yet less variable TQT results compared with the sponsor-reported study, without the use of a manual core laboratory. These findings indicate that CSRC ECG data sets can be useful for evaluating novel methods and algorithms for determining drug-induced QT/QTc prolongation. 
Although the results should not constitute endorsement of specific algorithms by either CSRC or FDA, the value of a public domain digital ECG warehouse to provide prospective, blinded comparisons of ECG technologies applied for QT/QTc measurement is illustrated. Copyright © 2012 Mosby, Inc. All rights reserved.
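For reference, the two quantities compared above, Fridericia-corrected QT and the within-subject standard deviation of a triplicate, can be sketched as follows; the interval values are illustrative, not study data.

```python
# Sketch of the QTcF metrics discussed above. QTcF applies the Fridericia
# heart-rate correction, QTcF = QT / RR^(1/3) with RR in seconds; the
# within-subject SD is taken over one triplicate of QTcF values.
# The numbers below are invented, not data from the CSRC study.
from statistics import stdev

def qtcf(qt_ms, rr_s):
    """Fridericia-corrected QT interval in milliseconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

triplicate = [qtcf(400.0, 1.0),      # RR = 1.0 s  -> QTcF = 400.0 ms
              qtcf(350.0, 0.512),    # RR^(1/3) = 0.8 -> QTcF = 437.5 ms
              qtcf(405.0, 1.0)]
spread = stdev(triplicate)           # within-subject SD of the triplicate
```

Lower values of this spread, aggregated across subjects, are what distinguished the developer-reported from the sponsor-reported measurements.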
NASA Astrophysics Data System (ADS)
Lyapin, Sergey; Kukovyakin, Alexey
Within the framework of the research program "Textaurus", an operational prototype of the multifunctional library T-Libra v.4.1 has been created, enabling flexible, parametrizable search within a full-text database. The information system is realized in a Web-browser / Web-server / SQL-server architecture. This achieves an optimal combination of universality and efficiency of text processing on the one hand, and convenience and minimal expense for the end user (thanks to a standard Web browser serving as the client application) on the other. The following principles underlie the information system: a) multifunctionality; b) intelligence; c) multilingual primary texts and full-text search; d) development of the digital library (DL) by a user ("administrative client"); e) multi-platform operation. A "library of concepts", i.e., a block of functional models of semantic (concept-oriented) search, together with a closely connected subsystem of parametrizable queries to the full-text database, serves as the conceptual basis of the multifunctionality and "intelligence" of the DL T-Libra v.4.1. The author's paragraph is the unit of full-text search in the suggested technology. The "logic" of an educational or scientific topic or problem can be built into the multilevel, flexible structure of a query and into the "library of concepts", which developers and experts can extend. About 10 queries of various levels of complexity and conceptuality are realized in this version of the information system: from simple terminological search (taking into account the lexical and grammatical paradigms of Russian) to several kinds of explication of terminological fields and adjustable two-parameter thematic search (the parameters being a set of terms and a maximum distance between terms within an author's paragraph).
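The two-parameter thematic search can be sketched as below, with the term set and the maximum distance (in words) as the two parameters. The tokenizer is deliberately naive, and the matching heuristic (first occurrence of each term) is an illustrative assumption, not T-Libra's algorithm.

```python
# Sketch of a two-parameter thematic search over an author's paragraph:
# every term in the set must occur, and the span between the first and
# last matched term must not exceed `max_distance` words. The first-
# occurrence heuristic is a simplification for illustration.

def paragraph_matches(paragraph, terms, max_distance):
    words = paragraph.lower().split()
    wanted = {t.lower() for t in terms}
    positions = {}
    for i, w in enumerate(words):
        if w in wanted and w not in positions:
            positions[w] = i              # first occurrence of each term
    if set(positions) != wanted:
        return False                      # some term is missing entirely
    span = max(positions.values()) - min(positions.values())
    return span <= max_distance

p = "the semantic search subsystem builds a parametrizable query tree"
```

A production system would additionally expand each term through its lexical and grammatical paradigms before matching.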
NASA Astrophysics Data System (ADS)
Braaten, D. A.; Holvoet, J. F.; Gogineni, S.
2003-12-01
The Radar Systems and Remote Sensing Laboratory at the University of Kansas (KU) has implemented extensive outreach activities focusing on Polar Regions as part of the Polar Radar for Ice Sheet Measurements (PRISM) project. The PRISM project is developing advanced intelligent remote sensing technology that involves radar systems, an autonomous rover, and communications systems to measure detailed ice sheet characteristics, and to determine bed conditions (frozen or wet) below active ice sheets in both Greenland and Antarctica. These measurements will provide a better understanding of the response of polar ice sheets to global climate change and the resulting impact the ice sheets will have on sea level rise. Many of the research and technological development aspects of the PRISM project, such as robotics, radar systems, climate change and exploration of harsh environments, can kindle excitement and interest in science and technology among students. These topics form the core of our K-12 education and training outreach initiatives, which are designed to capture the imagination of young students and prompt them to consider an educational path that will lead them to scientific or engineering careers. The K-12 PRISM outreach initiatives are being developed and implemented in collaboration with the Advanced Learning Technology Program (ALTec) of the High Plains Regional Technology in Education Consortium (HPR*TEC). ALTec is associated with the KU School of Education, and is a well-established educational research center that develops and hosts web tools to enable teachers nationwide to network, collaborate, and share resources with other teachers. An example of an innovative and successful web interface developed by ALTec is called TrackStar. Teachers can use TrackStar over the Web to develop interactive, resource-based lessons (called tracks) on-line for their students.
Once developed, tracks are added to the TrackStar database and can be accessed and modified (if necessary) by teachers everywhere. The PRISM project has added a search engine for polar related tracks, and has developed numerous new tracks on robotics, polar exploration, and climate change under the guidance of a K-12 teacher advisory group. The PRISM project is also developing and hosting several other web-based lesson design tools and resources for K-12 educators and students on the PRISM project web page (http://www.ku-prism.org). These tools and resources include: i) "Polar Scientists and Explorers, Past and Present" covering the travels and/or unknown fate of polar explorers and scientists; ii) "Polar News" providing links to current news articles related to polar regions; iii) "Letter of Global Concern", which is a tool to help students draft a letter to a politician, government official, or business leader; iv) "Graphic Sleuth", which is an online utility that allows teachers to make lessons for student use; v) "Bears on Ice" for students in grades K - 6 that can follow the adventures of two stuffed bears that travel with scientists into polar regions; and vi) "K-12 Polar Resources," which provides teachers with images, information, TrackStar lessons, and a search engine designed to identify polar related lessons. In our presentation, we will describe and show examples of these tools and resources, and provide an assessment of their popularity with teachers nationwide.
Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed
NASA Astrophysics Data System (ADS)
Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.
2008-12-01
The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER operates in the partially glaciated Mendenhall and Lemon Creek watersheds in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied both for long-term monitoring of change and for detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorological, dual-frequency GPS, water quality, lake level, etc.), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web, and also present challenges for its operational management. The harsh conditions on the glaciers impose additional operating constraints. Tight integration of the sensor web with virtual globe technology enhances the project in multiple ways. We utilize virtual globe infrastructures to enhance both sensor web management and data access; SEAMONSTER uses virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PostgreSQL database with GIS extensions coupled to GeoServer, a reference implementation of Open Geospatial Consortium (OGC) standards, we generate near-real-time, auto-updating geobrowser files of the data in multiple OGC standard formats (e.g., KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts toward greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.
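In SEAMONSTER the KML generation is handled by GeoServer; purely as a sketch of the idea, sensor-station rows of the kind a spatial query might return can be turned into a minimal KML document as below. The station names and coordinates are invented.

```python
# Sketch: converting sensor-station rows (name, lon, lat) into a minimal
# KML document for a geobrowser. In the project this is GeoServer's job;
# the rows and field layout here are invented for illustration.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def rows_to_kml(rows):
    kml = ET.Element("kml", xmlns=KML_NS)
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat in rows:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinate order is lon,lat
        ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

kml_text = rows_to_kml([("Lemon Creek lake level", -134.4, 58.4),
                        ("Mendenhall met station", -134.55, 58.42)])
```

Regenerating such a file on a timer from the live database is what makes the geobrowser layers auto-updating.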
Ontology Reuse in Geoscience Semantic Applications
NASA Astrophysics Data System (ADS)
Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.
2015-12-01
The tension between local ontology development and wider ontology connections is fundamental to the Semantic Web. It is often unclear, however, what the key decision points should be for new Semantic Web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of Semantic Web ontologies and applications, new applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use Semantic Web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic Web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of the key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs; new projects need to stitch ontologies together in ways that fit into the broader Semantic Web ecosystem.
QMachine: commodity supercomputing in web browsers
2014-01-01
Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605
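The messaging pattern QM exposes over HTTP, posting tasks, volunteering to execute them, and posting results, can be sketched in-process as follows. The class and method names are invented; QM itself routes these interactions through its web service API.

```python
# In-memory sketch of a task board with QM-style messaging semantics:
# submitters post tasks, volunteers claim and execute them, and results
# are filed under the task id. All names here are illustrative, not QM's
# actual API; a real deployment would carry each call over HTTP.
import itertools

class TaskBoard:
    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = {}     # task_id -> task payload
        self.results = {}     # task_id -> result

    def post_task(self, payload):
        task_id = next(self._ids)
        self.pending[task_id] = payload
        return task_id

    def claim_task(self):
        # A volunteer takes the oldest pending task, or None if idle.
        if not self.pending:
            return None
        task_id = min(self.pending)
        return task_id, self.pending.pop(task_id)

    def post_result(self, task_id, result):
        self.results[task_id] = result

board = TaskBoard()
tid = board.post_task({"op": "shared_suffixes", "genomes": 20})
claimed = board.claim_task()
board.post_result(claimed[0], "done")
```

Because the board never executes anything itself, the server stays cheap: all computation happens on whichever volunteer claimed the task.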
Developing a Web-based system by integrating VGI and SDI for real estate management and marketing
NASA Astrophysics Data System (ADS)
Salajegheh, J.; Hakimpour, F.; Esmaeily, A.
2014-10-01
The importance of property in many respects, especially its impact on various sectors of the economy and on the country's macroeconomy, is clear. Because housing is a real, multi-dimensional and heterogeneous commodity, and because of the lack of an integrated system containing comprehensive property information, the limited awareness of some actors in this field of such information, and the absence of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. This research aims to implement a crowd-sourced Web-based real estate support system. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official property data is also an objective of this study. The system uses a Web 2.0 broker and technologies such as Web services and service composition, and aims to provide comprehensive and diverse property information from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system; GeoServer serves as the map server and reference implementation of Open Geospatial Consortium (OGC) standards; and the Apache server runs the web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, each implementing its required infrastructure as interoperable Web services.
Gene Ontology Consortium: going forward
2015-01-01
The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369
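GO enrichment analysis of the kind mentioned above typically rests on a hypergeometric over-representation test: the probability of observing at least k annotated genes in a sample of n, when K of the N population genes carry the term. The sketch below uses illustrative gene counts and is not presented as the GOC's exact implementation.

```python
# Sketch of the hypergeometric upper-tail test underlying typical GO term
# enrichment: P(X >= k) for X ~ Hypergeometric(N, K, n), where N is the
# population size, K the genes annotated with the term, n the sample size,
# and k the annotated genes observed in the sample. Counts are invented.
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

p = enrichment_p(N=10, K=5, n=5, k=5)   # all 5 sampled genes annotated
```

In practice the test is run per term across the ontology, with a multiple-testing correction applied to the resulting p-values.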
A User-Centric Adaptive Learning System for E-Learning 2.0
ERIC Educational Resources Information Center
Huang, Shiu-Li; Shiu, Jung-Hung
2012-01-01
The success of Web 2.0 inspires e-learning to evolve into e-learning 2.0, which exploits collective intelligence to achieve user-centric learning. However, searching for suitable learning paths and content for achieving a learning goal is time consuming and troublesome on e-learning 2.0 platforms. Therefore, introducing formal learning in these…
An Intelligent Crawler for a Virtual World
ERIC Educational Resources Information Center
Eno, Joshua
2010-01-01
Virtual worlds, which allow users to create and interact with content in a 3D, multi-user environment, are growing and becoming more integrated with the traditional flat web. However, little is empirically known about the content users create in virtual worlds and how it can be indexed and searched effectively. In order to gain a better understanding…
A Mathematical and Sociological Analysis of Google Search Algorithm
2013-01-16
through the collective intelligence of the web to determine a page’s importance. Let v be a vector of R^N with N ≥ 8 billion. Any unit vector in R^N is...scrolled up by some artificial hits. Acknowledgment: The authors would like to thank Dr. John Lavery for his encouragement and support which enable them to
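The vector v alluded to above is the PageRank vector, conventionally computed by power iteration. A minimal sketch on an invented 4-page toy graph (with the standard 0.85 damping factor):

```python
# Minimal PageRank power iteration. `links[i]` lists the pages that page i
# links to; the 4-page graph below is invented for illustration. Every
# page here has at least one out-link, so no dangling-node handling is
# needed in this sketch.

def pagerank(links, d=0.85, iterations=50):
    n = len(links)
    v = [1.0 / n] * n                       # start from the uniform vector
    for _ in range(iterations):
        new = [(1.0 - d) / n] * n           # teleportation term
        for page, outs in enumerate(links):
            share = d * v[page] / len(outs)
            for target in outs:
                new[target] += share        # each page splits its rank
        v = new
    return v

# page 2 is linked by pages 0, 1 and 3; page 0 only by page 2
ranks = pagerank([[1, 2], [2], [0], [2]])
```

The iteration converges to the dominant eigenvector of the damped link matrix, so heavily linked-to pages (here page 2) end up with the largest entries.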
ERIC Educational Resources Information Center
Yang, Fan; Wang, Minjuan; Shen, Ruimin; Han, Peng
2007-01-01
Web-based (or online) learning provides unprecedented flexibility and convenience to both learners and instructors. However, large online classes relying on instructor-centered presentations tend to isolate many learners. The size of these classes and the wide dispersion of the learners make it challenging for instructors to interact with…
2010-06-01
Woods Hole, MA 02543, USA 3 Raytheon Intelligence and Information Systems, Aurora, CO 80011, USA 4 Scripps Institution of Oceanography, La Jolla...Amazon.com, Amazon Web Services for the Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/. [4] M. Arrott, B. Demchak, V. Ermagan, C
Human Factors in Industrial and Consumer Products and Services
2006-03-24
modeling; simulation and intelligent agents; Transportation; Space and aviation; Telecommunication and web applications; Consumer products; and Customer...the user feedback throughout the product development. Self-explaining and forgiving roads to improve safety Karel Brookhuis, Dick de Waard...in a simulated ambulance dispatcher’s task Ben Mulder, Anje Kruizinga & Dick de Waard University of Groningen, Experimental and Work Psychology
Scientific & Intelligence Exascale Visualization Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Money, James H.
SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. It can connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing, using a combination of microservices, real-time messaging, and web-service-compliant back-end systems.
Web Delivery of Adaptive and Interactive Language Tutoring: Revisited
ERIC Educational Resources Information Center
Heift, Trude
2016-01-01
This commentary reconsiders the description and assessment of the design and implementation of "German Tutor," an Intelligent Language Tutoring System (ILTS) for learners of German as a foreign language, published in 2001. Based on our experience over the past 15 years with the design and real classroom use of an ILTS, we address a…
Using a Recommender System and Hyperwave Attributes To Augment an Electronic Resource Library.
ERIC Educational Resources Information Center
Fenn, B.; Lennon, J.
There has been increasing interest over the past few years in systems that help users exchange recommendations about World Wide Web documents. Programs have ranged from those that rely totally on user pre-selection, to others that are based on artificial intelligence. This paper proposes a system that falls between these two extremes, providing…
An Intelligent E-Learning System Based on Learner Profiling and Learning Resources Adaptation
ERIC Educational Resources Information Center
Tzouveli, Paraskevi; Mylonas, Phivos; Kollias, Stefanos
2008-01-01
Taking advantage of the continuously improving, web-based learning systems plays an important role for self-learning, especially in the case of working people. Nevertheless, learning systems do not generally adapt to learners' profiles. Learners have to spend a lot of time before reaching the learning goal that is compatible with their knowledge…
Flip or Flop: Are Math Teachers Using Khan Academy as Envisioned by Sal Khan?
ERIC Educational Resources Information Center
Cargile, Lori A.; Harkness, Shelly Sheats
2014-01-01
Khan Academy (KA) is a free web-based intelligent tutor that has been featured in countless media outlets for its potential to change mathematics instruction. The founder and executive director, Salman Khan, recommends that KA be used to personalize instruction, freeing up class time for engaging, high-yield activities such as student discourse and…
ERIC Educational Resources Information Center
Wu, Ji-Wei; Tseng, Judy C. R.; Hwang, Gwo-Jen
2015-01-01
Inquiry-Based Learning (IBL) is an effective approach for promoting active learning. When inquiry-based learning is incorporated into instruction, teachers provide guiding questions for students to actively explore the required knowledge in order to solve the problems. Although the World Wide Web (WWW) is a rich knowledge resource for students to…
ERIC Educational Resources Information Center
Cotos, Elena
2010-01-01
This dissertation presents an innovative approach to the development and empirical evaluation of Automated Writing Evaluation (AWE) technology used for teaching and learning. It introduces IADE (Intelligent Academic Discourse Evaluator), a new web-based AWE program that analyzes research article Introduction sections and generates immediate,…
A Web Based Intelligent Training System for SMEs
ERIC Educational Resources Information Center
Mullins, Roisin; Duan, Yanqing; Hamblin, David; Burrell, Phillip; Jin, Huan; Jerzy, Goluchowski; Ewa, Ziemba; Aleksander, Billewicz
2007-01-01
It is widely accepted that employees in small businesses suffer from a lack of knowledge and skills. This lack of skills means that small companies miss out on new business opportunities. This is even more evident with respect to the adoption of Internet marketing in Small and Medium Enterprises (SMEs). This paper reports a pilot research…
Application of Learning Analytics Using Clustering Data Mining for Students' Disposition Analysis
ERIC Educational Resources Information Center
Bharara, Sanyam; Sabitha, Sai; Bansal, Abhay
2018-01-01
Learning Analytics (LA) is an emerging field in which sophisticated analytic tools are used to improve learning and education. It draws from, and is closely tied to, a series of other fields of study like business intelligence, web analytics, academic analytics, educational data mining, and action analytics. The main objective of this research…
ERIC Educational Resources Information Center
Cleveland, Simon; Jackson, Barcus C.; Dawson, Maurice
2016-01-01
With the rise of Web 2.0, microblogging has become a widely accepted phenomenon for sharing information. Moreover, the Twitter platform has become the tool of choice for universities looking to increase their digital footprint. However, scant research addresses the viability of microblogging as a tool to facilitate knowledge creation practices…
The Future of the Web, Intelligent Devices, and Education
ERIC Educational Resources Information Center
Strauss, Howard
2007-01-01
In this article, the author looks to the past for trends in hardware, software, networking, and education and attempts to extrapolate where they are going and what their broad implications might be. However, there are many different ways that trends can be interpreted, and it is easy to pick trends that support one's thesis and ignore ones that…
CircularLogo: A lightweight web application to visualize intra-motif dependencies.
Ye, Zhenqing; Ma, Tao; Kalmbach, Michael T; Dasari, Surendra; Kocher, Jean-Pierre A; Wang, Liguo
2017-05-22
The sequence logo has been widely used to represent DNA or RNA motifs for more than three decades. Despite its intelligibility and intuitiveness, the traditional sequence logo is unable to display the intra-motif dependencies and therefore is insufficient to fully characterize nucleotide motifs. Many methods have been developed to quantify the intra-motif dependencies, but fewer tools are available for visualization. We developed CircularLogo, a web-based interactive application, which is able to not only visualize the position-specific nucleotide consensus and diversity but also display the intra-motif dependencies. Applying CircularLogo to HNF6 binding sites and tRNA sequences demonstrated its ability to show intra-motif dependencies and intuitively reveal biomolecular structure. CircularLogo is implemented in JavaScript and Python based on the Django web framework. The program's source code and user's manual are freely available at http://circularlogo.sourceforge.net . CircularLogo web server can be accessed from http://bioinformaticstools.mayo.edu/circularlogo/index.html . CircularLogo is an innovative web application that is specifically designed to visualize and interactively explore intra-motif dependencies.
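The intra-motif dependencies that CircularLogo visualizes are commonly quantified with pairwise mutual information between motif positions. The sketch below illustrates that general measure on toy sequences; it is an illustration of the concept, not CircularLogo's actual implementation, and the example sequences are hypothetical.

```python
from collections import Counter
from math import log2

def mutual_information(seqs, i, j):
    """Mutual information (in bits) between motif positions i and j
    across a set of aligned, equal-length sequences."""
    n = len(seqs)
    pi = Counter(s[i] for s in seqs)           # marginal counts at position i
    pj = Counter(s[j] for s in seqs)           # marginal counts at position j
    pij = Counter((s[i], s[j]) for s in seqs)  # joint counts for the pair
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Positions 0 and 1 are perfectly correlated; position 2 is independent.
seqs = ["ACA", "ACG", "GTA", "GTG"]
print(round(mutual_information(seqs, 0, 1), 3))  # 1.0 (fully dependent pair)
print(round(mutual_information(seqs, 0, 2), 3))  # 0.0 (independent pair)
```

A position-independence model (as in a traditional sequence logo) would assign zero to every pair; non-zero values are exactly the dependencies a traditional logo cannot show.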
Sousa, F S; Hummel, A D; Maciel, R F; Cohrs, F M; Falcão, A E J; Teixeira, F; Baptista, R; Mancini, F; da Costa, T M; Alves, D; Pisa, I T
2011-05-01
The replacement of defective organs with healthy ones is an old problem, but only in recent years has it been put into practice. Improvements in the whole transplantation process have become increasingly important in clinical practice. In this context are clinical decision support systems (CDSSs), which reflect a significant body of work applying mathematical and intelligent techniques. The aim of this article was to review the intelligent techniques used in recent years (2009 and 2010) to analyze organ transplant databases. To this end, we searched the PubMed and Institute for Scientific Information (ISI) Web of Knowledge databases for articles published in 2009 and 2010 on intelligent techniques applied to transplantation databases. Among the 69 retrieved articles, we selected those meeting inclusion and exclusion criteria. The main techniques were: Artificial Neural Networks (ANN), Logistic Regression (LR), Decision Trees (DT), Markov Models (MM), and Bayesian Networks (BN). Most articles used ANN. Some publications described comparisons between techniques or the use of several techniques together. The use of intelligent techniques to extract knowledge from healthcare databases is increasingly common. Although authors preferred ANN, statistical techniques were equally effective for this enterprise. Copyright © 2011 Elsevier Inc. All rights reserved.
Advancing translational research with the Semantic Web.
Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi
2007-05-09
A fundamental goal of the U.S. National Institute of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. 
Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of practitioners and installed base, and growing pains as the technology is scaled up. Still, the potential of interoperable knowledge sources for biomedicine, at the scale of the World Wide Web, merits continued work.
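The Semantic Web's core model (data as subject-predicate-object triples queried uniformly across sources) can be sketched with a minimal in-memory triple store. The data, predicate names, and query below are hypothetical illustrations of the triple/pattern idea, not HCLSIG resources or a real RDF toolkit.

```python
# Minimal in-memory triple store illustrating RDF-style pattern matching.
# All identifiers here are made-up examples, not real ontology terms.
triples = {
    ("gene:APP", "associatedWith", "disease:Alzheimers"),
    ("gene:APP", "expressedIn", "tissue:hippocampus"),
    ("gene:MAPT", "associatedWith", "disease:Alzheimers"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    analogous to a variable in a SPARQL basic graph pattern."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# "Which genes are associated with Alzheimer's disease?"
genes = [s for s, _, _ in match(p="associatedWith", o="disease:Alzheimers")]
print(genes)  # ['gene:APP', 'gene:MAPT']
```

Because every fact has the same triple shape, data aggregated from independent sources can be queried with one mechanism; this uniformity is what the abstract means by "common formats that support aggregation and integration".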
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that do not require transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
NASA Astrophysics Data System (ADS)
Haener, Rainer; Waechter, Joachim; Grellet, Sylvain; Robida, Francois
2017-04-01
Interoperability is the key factor in establishing scientific research environments and infrastructures, as well as in bringing together heterogeneous, geographically distributed risk management, monitoring, and early warning systems. Based on developments within the European Plate Observing System (EPOS), a reference architecture has been devised that comprises architectural blueprints and interoperability models regarding the specification of business processes and logic as well as the encoding of data, metadata, and semantics. The architectural blueprint is developed on the basis of the so-called service-oriented architecture (SOA) 2.0 paradigm, which combines the intelligence and proactiveness of event-driven architectures with service-oriented architectures. SOA 2.0 supports analysing (Data Mining) both static and real-time data in order to find correlations of disparate information that do not at first appear intuitively obvious: analysed data (e.g., seismological monitoring) can be enhanced with relationships discovered by associating them (Data Fusion) with other data (e.g., creepmeter monitoring), with digital models of geological structures, or with the simulation of geological processes. The interoperability model describes the information, communication (conversations), and interactions (choreographies) of all participants involved, as well as the processes for registering, providing, and retrieving information. It is based on the principles of functional integration, implemented via dedicated services communicating over service-oriented and message-driven infrastructures. The services provide their functionality via standardised interfaces: instead of requesting data directly, users share data via services that are built upon specific adapters. This approach replaces tight coupling at the data level with a flexible dependency on loosely coupled services.
The main component of the interoperability model is the comprehensive semantic description of the information, business logic, and processes on the basis of a minimal set of well-known, established standards. It implements the representation of knowledge by applying domain-controlled vocabularies to statements about resources, information, facts, and complex matters (ontologies). Seismic experts, for example, would be interested in geological models or borehole measurements at a certain depth, based on which it is possible to correlate and verify seismic profiles. The entire model is built upon standards from the Open Geospatial Consortium (Dictionaries, Service Layer), the International Organisation for Standardisation (Registries, Metadata), and the World Wide Web Consortium (Resource Description Framework, Spatial Data on the Web Best Practices). It has to be emphasised that this approach is scalable to the greatest possible extent: all information necessary in the context of cross-domain infrastructures is referenced via vocabularies and knowledge bases containing statements that provide either the information itself or the resources (service endpoints) from which the information can be retrieved. The entire infrastructure communication is subject to a broker-based business logic integration platform where the information exchanged between the involved participants is managed on the basis of standardised dictionaries, repositories, and registries. This approach also enables the development of Systems-of-Systems (SoS), which allow the collaboration of autonomous, large-scale, concurrent, and distributed systems cooperatively interacting as a collective in a common environment.
Intelligent cloud computing security using genetic algorithm as a computational tools
NASA Astrophysics Data System (ADS)
Razuky AL-Shaikhly, Mazin H.
2018-05-01
An essential change has occurred in the field of Information Technology, represented by cloud computing: the cloud delivers virtual assets by means of the web, yet poses great difficulties for information security and privacy assurance. Currently, the main problem with cloud computing is how to improve privacy and security, since security is critical to the cloud. This paper attempts to address cloud security by using an intelligent system with a genetic algorithm as a wall to keep cloud data secure; all services provided by the cloud must detect who receives them and register this, in order to build a list of trusted or untrusted users depending on their behavior. The execution of the present proposal has shown great outcomes.
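The abstract does not detail the genetic algorithm, so the following is only a generic sketch of GA mechanics (selection, one-point crossover, mutation) on a hypothetical bitstring of "trusted behaviour" rule flags; it is not the paper's actual method or fitness function.

```python
import random

random.seed(42)  # deterministic run for the illustration

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical trusted-behaviour rule mask

def fitness(chrom):
    # Number of rule bits matching the target profile.
    return sum(c == t for c, t in zip(chrom, TARGET))

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # selection: fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                 # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically converges to a full match of 8
```

Keeping the fittest half each generation (elitism) guarantees the best score never decreases, which is why such a toy run reliably converges.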
Uncovering text mining: A survey of current work on web-based epidemic intelligence
Collier, Nigel
2012-01-01
Real world pandemics such as SARS 2002 as well as popular fiction like the movie Contagion graphically depict the health threat of a global pandemic and the key role of epidemic intelligence (EI). While EI relies heavily on established indicator sources a new class of methods based on event alerting from unstructured digital Internet media is rapidly becoming acknowledged within the public health community. At the heart of automated information gathering systems is a technology called text mining. My contribution here is to provide an overview of the role that text mining technology plays in detecting epidemics and to synthesise my existing research on the BioCaster project. PMID:22783909
A simple method for serving Web hypermaps with dynamic database drill-down
Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R
2002-01-01
Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes, and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run, and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map-serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real-world health problems. PMID:12437788
Exploring the SCOAP3 Research Contributions of the United States
NASA Astrophysics Data System (ADS)
Marsteller, Matthew
2016-03-01
The Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP3) is a successful global partnership of libraries, funding agencies and research centers. This presentation will inform the audience about SCOAP3 and also delve into descriptive statistics of the United States' intellectual contribution to particle physics via these open access journals. Exploration of the SCOAP3 particle physics literature using a variety of metrics tools such as Web of Science™, InCites™, Scopus® and SciVal will be shared. ORA or Sci2 will be used to visualize author collaboration networks.
KML Super Overlay to WMS Translator
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This translator is a server-based application that automatically generates KML super overlay configuration files required by Google Earth for map data access via the Open Geospatial Consortium WMS (Web Map Service) standard. The translator uses a set of URL parameters that mirror the WMS parameters as much as possible, and it also can generate a super overlay subdivision of any given area that is only loaded when needed, enabling very large areas of coverage at very high resolutions. It can make almost any dataset available as a WMS service visible and usable in any KML application, without the need to reformat the data.
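The two ideas in the translator (mirroring WMS GetMap parameters in a URL, and subdividing an area into quadrants so tiles load only when viewed) can be sketched as follows. The endpoint, layer name, and defaults are hypothetical; the translator's exact parameter mapping is not specified in the abstract.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=256, height=256,
                   srs="EPSG:4326", fmt="image/jpeg"):
    """Build a WMS 1.1.1 GetMap request URL (illustrative defaults)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

def subdivide(bbox):
    """Split a bounding box into four child quadrants, as a KML super
    overlay does so that finer tiles load only for the viewed region."""
    minx, miny, maxx, maxy = bbox
    mx, my = (minx + maxx) / 2, (miny + maxy) / 2
    return [(minx, miny, mx, my), (mx, miny, maxx, my),
            (minx, my, mx, maxy), (mx, my, maxx, maxy)]

url = wms_getmap_url("http://example.org/wms", "global_mosaic",
                     (-180, -90, 180, 90))
print(url)
print(subdivide((-180, -90, 180, 90)))
```

Recursing `subdivide` to a depth of n yields 4^n tiles, which is how a super overlay covers very large areas at high resolution without loading everything at once.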
Semantically-enabled sensor plug & play for the sensor web.
Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian
2011-01-01
Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.
Strudwick, Gillian; Forchuk, Cheryl; Morse, Adam; Lachance, Jessica; Baskaran, Arani; Allison, Lauren
2017-01-01
Background Intelligent assistive technologies that complement and extend human abilities have proliferated in recent years. Service robots, home automation equipment, and other digital assistant devices possessing artificial intelligence are forms of assistive technologies that have become popular in society. Older adults (>55 years of age) have been identified by industry, government, and researchers as a demographic who can benefit significantly from the use of intelligent assistive technology to support various activities of daily living. Objective The purpose of this scoping review is to summarize the literature on the importance of the concept of “trust” in the adoption of intelligent assistive technologies to assist aging in place by older adults. Methods Using a scoping review methodology, our search strategy will examine the following databases: ACM Digital Library, Allied and Complementary Medicine Database (AMED), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, PsycINFO, Scopus, and Web of Science. Two reviewers will independently screen the initial titles obtained from the search, and these results will be further inspected by other members of the research team for inclusion in the review. Results This review will provide insights into how the concept of trust is actualized in the adoption of intelligent assistive technology by older adults. Preliminary sensitization to the literature suggests that the concept of trust is fluid, unstable, and intimately tied to the type of intelligent assistive technology being examined. Furthermore, a wide range of theoretical lenses that include elements of trust have been used to examine this concept. Conclusions This review will describe the concept of trust in the adoption of intelligent assistive technology by older adults, and will provide insights for practitioners, policy makers, and technology vendors for future practice. PMID:29097354
Safdari, Reza; Maserat, Elham; Asadzadeh Aghdaei, Hamid; Javan Amoli, Amir Hossein; Mohaghegh Shalmani, Hamid
2017-01-01
To survey person-centered survival rates in a population-based screening program using an intelligent clinical decision support system. Colorectal cancer is the most common malignancy and a major cause of morbidity and mortality throughout the world; it is the sixth leading cause of cancer death in Iran. In this survey, we used cosine similarity as a data mining technique, together with an intelligent system, to estimate the survival of at-risk groups in the screening plan. In the first step, we determined a minimum data set (MDS), which was approved by experts and a review of the literature. In the second step, the MDS was coded in the Python language and matched with the cosine similarity formula. Finally, the survival rate, as a percentage, was displayed in the user interface of the national intelligent system, which was designed in the PyCharm environment. The main data elements of the intelligent system consist of demographic information, age, referral type, risk group, recommendation, and survival rate. The minimum data set related to survival comprises clinical status, past medical history, and socio-demographic information. Information on the covered population, stored as a comprehensive database, was connected to the intelligent system, and the survival rate was estimated for each patient. The mean survival rates of HNPCC and FAP patients were 77.7% and 75.1%, respectively. The mean survival rate and other calculations are updated in real time as new patients enter the CRC registry. The national intelligent system monitors the entire risk group, reports survival rates via electronic guidelines and data mining techniques, and operates according to the clinical process. This web-based software has a critical role in estimating survival rates for health care planning.
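The abstract names cosine similarity as the matching technique between a patient's minimum-data-set vector and registry records. Below is the standard cosine similarity formula in a minimal sketch; the feature encoding and example vectors are hypothetical, not the system's actual MDS coding.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical encoded MDS vectors: a new patient compared with a
# registry patient whose survival outcome is already known.
new_patient = [1, 0, 1, 62, 1]   # e.g. risk-group flag, history flags, age, ...
registry    = [1, 0, 1, 65, 1]
print(round(cosine_similarity(new_patient, registry), 4))  # 1.0 (near-identical profiles)
```

One practical caveat of this encoding: an unscaled numeric feature such as age dominates the binary flags, so real systems typically normalize features before computing similarity.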
Falk, Marni J; Shen, Lishuang; Gonzalez, Michael; Leipzig, Jeremy; Lott, Marie T; Stassen, Alphons P M; Diroma, Maria Angela; Navarro-Gomez, Daniel; Yeske, Philip; Bai, Renkui; Boles, Richard G; Brilhante, Virginia; Ralph, David; DaRe, Jeana T; Shelton, Robert; Terry, Sharon F; Zhang, Zhe; Copeland, William C; van Oven, Mannis; Prokisch, Holger; Wallace, Douglas C; Attimonelli, Marcella; Krotoski, Danuta; Zuchner, Stephan; Gai, Xiaowu
2015-03-01
Success rates for genomic analyses of highly heterogeneous disorders can be greatly improved if a large cohort of patient data is assembled to enhance collective capabilities for accurate sequence variant annotation, analysis, and interpretation. Indeed, molecular diagnostics requires the establishment of robust data resources to enable data sharing that informs accurate understanding of genes, variants, and phenotypes. The "Mitochondrial Disease Sequence Data Resource (MSeqDR) Consortium" is a grass-roots effort facilitated by the United Mitochondrial Disease Foundation to identify and prioritize specific genomic data analysis needs of the global mitochondrial disease clinical and research community. A central Web portal (https://mseqdr.org) facilitates the coherent compilation, organization, annotation, and analysis of sequence data from both nuclear and mitochondrial genomes of individuals and families with suspected mitochondrial disease. This Web portal provides users with a flexible and expandable suite of resources to enable variant-, gene-, and exome-level sequence analysis in a secure, Web-based, and user-friendly fashion. Users can also elect to share data with other MSeqDR Consortium members, or even the general public, either by custom annotation tracks or through the use of a convenient distributed annotation system (DAS) mechanism. A range of data visualization and analysis tools are provided to facilitate user interrogation and understanding of genomic, and ultimately phenotypic, data of relevance to mitochondrial biology and disease. Currently available tools for nuclear and mitochondrial gene analyses include an MSeqDR GBrowse instance that hosts optimized mitochondrial disease and mitochondrial DNA (mtDNA) specific annotation tracks, as well as an MSeqDR locus-specific database (LSDB) that curates variant data on more than 1300 genes that have been implicated in mitochondrial disease and/or encode mitochondria-localized proteins. 
MSeqDR is integrated with a diverse array of mtDNA data analysis tools that are both freestanding and incorporated into an online exome-level dataset curation and analysis resource (GEM.app) that is being optimized to support the needs of the MSeqDR community. In addition, MSeqDR supports mitochondrial disease phenotyping and ontology tools, and provides variant pathogenicity assessment features that enable community review, feedback, and integration with the public ClinVar variant annotation resource. A centralized Web-based informed consent process is being developed, with implementation of a Global Unique Identifier (GUID) system to integrate data deposited on a given individual from different sources. Community-based data deposition into MSeqDR has already begun. Future efforts will enhance capabilities to incorporate phenotypic data that enhance genomic data analyses. MSeqDR will fill the existing void in bioinformatics tools and centralized knowledge that are necessary to enable efficient nuclear and mtDNA genomic data interpretation by a range of stakeholders across both clinical diagnostic and research settings. Ultimately, MSeqDR is focused on empowering the global mitochondrial disease community to better define and explore mitochondrial diseases. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Konstantinidis, A.; Anaxagoras, T.; Esposito, M.; Allinson, N.; Speller, R.
2012-03-01
X-ray diffraction studies are used to identify specific materials. Several laboratory-based x-ray diffraction studies have been carried out for breast cancer diagnosis. Ideally, a large-area, low-noise, linear, wide-dynamic-range digital x-ray detector is required to perform x-ray diffraction measurements. Recently, digital detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been used in x-ray diffraction studies. Two APS detectors, namely Vanilla and the Large Area Sensor (LAS), were developed by the Multidimensional Integrated Intelligent Imaging (MI-3) consortium to cover a range of scientific applications including x-ray diffraction. The MI-3 Plus consortium developed a novel large-area APS, named Dynamically Adjustable Medical Imaging Technology (DynAMITe), to combine the key characteristics of Vanilla and LAS with a number of extra features. The active area (12.8 × 13.1 cm2) of DynAMITe makes angle-dispersive x-ray diffraction (ADXRD) possible. The current study demonstrates the feasibility of using DynAMITe for breast cancer diagnosis by identifying six breast-equivalent plastics. Further work will be done to optimize the system in order to perform ADXRD for identification of suspicious areas of breast tissue following a conventional mammogram taken with the same sensor.
Object classification and outliers analysis in the forthcoming Gaia mission
NASA Astrophysics Data System (ADS)
Ordóñez-Blanco, D.; Arcay, B.; Dafonte, C.; Manteiga, M.; Ulla, A.
2010-12-01
Astrophysics is evolving towards the rational optimization of costly observational material through the intelligent exploitation of large astronomical databases from both ground-based telescopes and space mission archives. However, there has been relatively little progress in developing the highly scalable data exploitation and analysis tools needed to generate scientific returns from these large and expensively obtained datasets. Among the upcoming projects of astronomical instrumentation, Gaia is the next cornerstone ESA mission. The Gaia survey foresees the creation of a data archive and its future exploitation with automated or semi-automated analysis tools. This work reviews some of the work being developed by the Gaia Data Processing and Analysis Consortium for object classification and the analysis of outliers in the forthcoming mission.
Improving Logistics Processes in Industry Using Web Technologies
NASA Astrophysics Data System (ADS)
Jánošík, Ján; Tanuška, Pavol; Václavová, Andrea
2016-12-01
The aim of this paper is to propose the concept of a system that takes advantage of web technologies and integrates them into the process of managing internal stocks, which may connect to external applications, creating the conditions to introduce Computerized Control of Warehouse Stock (CCWS) in the company. The importance of implementing CCWS lies in eliminating errors caused by the human factor, as well as in allowing information to be processed for analytical purposes and subsequently used to improve internal processes. Using CCWS in the company would also facilitate better use of the potential of Business Intelligence and Data Mining tools.
From Newton to Einstein; Ask the physicist about mechanics and relativity
NASA Astrophysics Data System (ADS)
Baker, F. Todd
2014-12-01
Since 2006 the author has run a web site, WWW.AskThePhysicist.com, where he answers questions about physics. The site is not intended for answering highly technical questions; rather, the purpose is to answer, with as little mathematics and formalism as possible, questions from intelligent and curious laypersons. This book is about classical mechanics. Usually `classical' calls to mind Newtonian mechanics, and that is indeed where modern physics started. The bulk of the book is devoted to sections containing categorized groups of Q&As from the web site, a sort of Best of Ask the Physicist.
A Story of a Crashed Plane in US-Mexican border
NASA Astrophysics Data System (ADS)
Bermudez, Luis; Hobona, Gobe; Vretanos, Peter; Peterson, Perry
2013-04-01
A plane has crashed on the US-Mexican border. The search and rescue command center planner needs to find information about the crash site, a mountain, nearby mountains for the establishment of a communications tower, as well as ranches for setting up a local incident center. Events like this one occur all over the world, and exchanging information seamlessly is key to saving lives and preventing further disasters. This abstract describes an interoperability testbed that applied this scenario using technologies based on Open Geospatial Consortium (OGC) standards. The OGC, which has about 500 members, serves as a global forum for the collaboration of developers and users of spatial data products and services, and advances the development of international standards for geospatial interoperability. The OGC Interoperability Program conducts international interoperability testbeds, such as OGC Web Services Phase 9 (OWS-9), that encourage rapid development, testing, validation, demonstration and adoption of open, consensus-based standards and best practices. The Cross-Community Interoperability (CCI) thread in OWS-9 advanced the Web Feature Service for Gazetteers (WFS-G) by providing a Single Point of Entry Global Gazetteer (SPEGG), where a user can submit a single query and access global geographic names data across multiple Federal names databases. Currently, users must make two queries with differing input parameters against two separate databases to obtain authoritative cross-border geographic names data. The gazetteers in this scenario included GNIS and GNS. GNIS, the Geographic Names Information System, is managed by the USGS; it was first developed in 1964 and contains information about domestic and Antarctic names. GNS, the GeoNET Names Server, provides the Geographic Names Data Base (GNDB) and is managed by the National Geospatial-Intelligence Agency (NGA).
GNS has been in service since 1994, and serves names for areas outside the United States and its dependent areas, as well as names for undersea features. The testbed advanced the following capabilities: cascaded WFS-G servers (allowing a "parent" WFS to query multiple WFSs), name-query filters (e.g., fuzzy and text search), handling of multilingualism and diacritics, advanced spatial constraints (e.g., radial and nearest-neighbor search), and semantically mediated feature types (e.g., mountain vs. hill). To enable semantic mediation, a series of semantic mappings was defined between the NGA GNS, the USGS GNIS and the Alexandria Digital Library (ADL) Gazetteer. The mappings were encoded in the Web Ontology Language (OWL) to enable them to be used by semantic web technologies. The semantic mappings were then published for ingestion into a semantic mediator that used them to associate location types from one gazetteer with location types in another. The semantic mediator was then able to transform requests on the fly, providing a single point of entry WFS-G to multiple gazetteers. The presentation will include a live demonstration of the work performed, highlight the main developments, and discuss future development.
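The semantic mediation step can be illustrated with a minimal sketch: a shared concept is fanned out into per-gazetteer feature-type filters. The type codes and mappings below are hypothetical stand-ins for the OWL mappings actually used in OWS-9.

```python
# Hypothetical feature-type mappings to a shared concept
# (the real mappings were encoded in OWL).
GNIS_TO_CONCEPT = {"Summit": "mountain", "Ridge": "mountain",
                   "Populated Place": "populated place"}
GNS_TO_CONCEPT = {"MT": "mountain", "HLL": "mountain",
                  "PPL": "populated place"}

def expand_query_type(concept):
    """Translate one shared concept into per-gazetteer type filters,
    so a single query can be dispatched to both GNIS and GNS."""
    return {
        "GNIS": sorted(t for t, c in GNIS_TO_CONCEPT.items() if c == concept),
        "GNS": sorted(t for t, c in GNS_TO_CONCEPT.items() if c == concept),
    }
```

A mediator built this way can rewrite a "mountain" request on the fly into the type vocabulary each gazetteer understands, which is the essence of the single-point-of-entry design.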
Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko
2014-07-01
TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
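The cross-database annotation chaining TogoTable performs (e.g., reaching PDB annotations from a GeneID via UniProt links) can be sketched with in-memory tables standing in for the SPARQL endpoints. All table contents below are illustrative stand-ins, not live LOD data.

```python
# Tiny in-memory stand-ins for linked RDF databases (illustrative entries).
geneid_to_uniprot = {"7157": "P04637"}  # GeneID -> UniProt accession
uniprot = {"P04637": {"name": "Cellular tumor antigen p53", "pdb": ["1TUP"]}}
pdb = {"1TUP": {"method": "X-ray diffraction"}}

def annotate(gene_id):
    """Follow links across databases, as TogoTable does over the LOD network:
    GeneID -> UniProt annotation -> linked PDB entries."""
    acc = geneid_to_uniprot.get(gene_id)
    if acc is None or acc not in uniprot:
        return {}
    record = dict(uniprot[acc])
    record["pdb_details"] = [pdb[i] for i in record.get("pdb", []) if i in pdb]
    return record
```

In the real tool, each hop is a SPARQL query against an endpoint rather than a dictionary lookup, but the ID-keyed join structure is the same.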
Buzzi, Marina; Leporini, Barbara
2009-07-01
This study aims to improve Wikipedia usability for the blind and promote the application of standards relating to Web accessibility and usability. First, accessibility and usability of Wikipedia home, search result and edit pages are analysed using the JAWS screen reader; next, suggestions for improving interaction are proposed and a new Wikipedia editing interface built. Most of the improvements were obtained using the Accessible Rich Internet Applications (WAI-ARIA) suite, developed by the World Wide Web Consortium (W3C) within the framework of the Web Accessibility Initiative (WAI). Last, a scenario of use compares interaction of blind people with the original and the modified interfaces. Our study highlights that although all contents are accessible via screen reader, usability issues exist due to the user's difficulties when interacting with the interface. The scenario of use shows how building an editing interface with the W3C WAI-ARIA suite eliminates many obstacles that can prevent blind users from actively contributing to Wikipedia. The modified Wikipedia editing page is simpler to use via a screen reader than the original one because ARIA ensures a page overview, rapid navigation, and total control of what is happening in the interface.
Bridging Social and Semantic Computing - Design and Evaluation of User Interfaces for Hybrid Systems
ERIC Educational Resources Information Center
Bostandjiev, Svetlin Alex I.
2012-01-01
The evolution of the Web brought new interesting problems to computer scientists that we loosely classify in the fields of social and semantic computing. Social computing is related to two major paradigms: computations carried out by a large number of people in a collective intelligence fashion (e.g., wikis), and performing computations on social…
ERIC Educational Resources Information Center
Wendel, Holly Marie
2016-01-01
The purpose of this study was to determine the relationship each of the mathematics web-based programs, MyMathLab and Assessments and Learning in Knowledge Spaces (ALEKS), has with students' mathematics achievement. In addition, the study examined the relationship between students' affective domain and the type of program as well as student…
ERIC Educational Resources Information Center
McCarthy, Matthew T.
2017-01-01
Artificial intelligence (AI) that is based upon semantic search has become one of the dominant means for accessing information in recent years. This is particularly the case in mobile contexts, as search-based AI are embedded in each of the major mobile operating systems. The implications are such that information is becoming less a matter of…
How Do You Act Intelligently When You Don't Know What You Are Doing?
ERIC Educational Resources Information Center
Levinson, Eliot; Grohe, Barbara
2001-01-01
Presents basic questions for schools to consider in deciding between late and early adoption of new curriculum systems. Outlines rules of thumb for setting up contracts for new Web-based technologies. Suggests that good planning in the initial stages will ameliorate most of the problems that can occur. Concludes with two additional guidelines:…
Ship to Shore Data Communication and Prioritization
2011-12-01
Acronyms defined in the report include: … First Out; FTP: File Transfer Protocol; GCCS-M: Global Command and Control System Maritime; HAIPE: High Assurance Internet Protocol Encryptor; HTTP: Hypertext Transfer Protocol (world wide web protocol); IBS: Integrated Bar Code System; IDEF0: Integration Definition; IER: Information Exchange Requirements; INTEL: Intelligence; IP: Internet Protocol; IPT: Integrated Product Team; ISEA: In-Service Engineering Agent; ISNS: Integrated Shipboard Network System; IT…
An ontological knowledge framework for adaptive medical workflow.
Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir
2008-10-01
As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model; they provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses: medical and administrative tasks, hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, context-aware medical workflows and execute them on the fly.
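A minimal sketch of how an ontology like this could back a context-aware workflow check follows. The triples, terms, and the contraindication rule are invented for illustration and are far simpler than the paper's ontology.

```python
# Minimal triple store (hypothetical terms; a real ontology would use OWL/RDF).
triples = {
    ("Amoxicillin", "isA", "Drug"),
    ("Amoxicillin", "contraindicatedWith", "PenicillinAllergy"),
    ("PatientRecord42", "hasCondition", "PenicillinAllergy"),
}

def query(subject=None, predicate=None, obj=None):
    """Pattern-match triples; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def prescription_allowed(patient, drug):
    """A context-aware check a workflow engine might run before
    adding a drug-prescription task for this patient."""
    conditions = {o for _, _, o in query(subject=patient, predicate="hasCondition")}
    contra = {o for _, _, o in query(subject=drug, predicate="contraindicatedWith")}
    return not (conditions & contra)
```

The point of the sketch is that once domain knowledge is captured declaratively, the same workflow step can adapt to each patient's record without hard-coding per-case logic.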
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2012-05-01
This paper is the fifth part (out of five) of the research survey of WILGA Symposium work, May 2012 Edition, concerned with Biomedical, Artificial Intelligence and DNA Computing technologies. It presents a digest of chosen technical work results shown by young researchers from different technical universities from this country during the Jubilee XXXth SPIE-IEEE Wilga 2012, May Edition, symposium on Photonics and Web Engineering. Topical tracks of the symposium embraced, among others, nanomaterials and nanotechnologies for photonics, sensory and nonlinear optical fibers, object-oriented design of hardware, photonic metrology, optoelectronics and photonics applications, photonics-electronics co-design, optoelectronic and electronic systems for astronomy and high energy physics experiments, and developments in the JET tokamak and Pi-of-the-Sky experiments. The symposium is an annual summary of the development of numerous Ph.D. theses carried out in this country in the area of advanced electronic and photonic systems. It is also a great occasion for SPIE, IEEE, OSA and PSP students to meet together in a large group spanning the whole country, with guests from this part of Europe. A digest of Wilga references is presented [1-270].
Proactive Supply Chain Performance Management with Predictive Analytics
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents a supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605
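As one concrete, deliberately minimal example of a KPI projection, a least-squares linear trend can project the next period's value from a KPI history; the paper's actual data mining models are much richer than this sketch, and the series below is invented.

```python
def fit_linear_trend(series):
    """Ordinary least-squares fit of y = slope * t + intercept
    over time indices t = 0..n-1."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict_next(series):
    """Project the KPI value for the next (unseen) period."""
    slope, intercept = fit_linear_trend(series)
    return slope * len(series) + intercept
```

A portal could flag a KPI whose projection crosses a threshold, turning the dashboard from a reactive report into a proactive alert.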
Ad-Hoc Queries over Document Collections - A Case Study
NASA Astrophysics Data System (ADS)
Löser, Alexander; Lutter, Steffen; Düssel, Patrick; Markl, Volker
We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on hundreds of thousands of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. "Google Squared" and our system GOOLAP.info are examples of such systems. They execute information extraction methods over one or several document collections at query time and integrate extracted records into a common view or tabular structure. Frequent extraction and object-resolution failures cause incomplete records that cannot be joined into a record answering the query. Our focus is the identification of join-reordering heuristics that maximize the number of complete records answering a structured query. With respect to given costs for document extraction, we propose two novel join operations: the multi-way CJ operator joins records from multiple relationships extracted from a single document, and the two-way join operator DJ ensures data density by removing incomplete records from results. In a preliminary case study we observe that our join-reordering heuristics positively impact result size and record density while lowering execution costs.
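The density-preserving behavior attributed to the DJ operator can be sketched as follows. The record layout and the exact semantics are inferred from the description above, not taken from the paper's implementation.

```python
def dj(left, right, key):
    """Two-way density join (DJ sketch): equi-join on `key`, keeping only
    complete records, i.e. records with no missing (None) fields."""
    def complete(rec):
        return key in rec and all(v is not None for v in rec.values())

    index = {}
    for r in right:
        if complete(r):
            index.setdefault(r[key], []).append(r)
    return [{**l, **r}
            for l in left if complete(l)
            for r in index.get(l[key], [])]
```

Dropping incomplete extracted records before the join is what keeps the final tabular answer dense; a join-reordering heuristic would then choose the order of such operators to maximize complete output rows under the extraction cost budget.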
Gianni, Daniele; McKeever, Steve; Yu, Tommy; Britten, Randall; Delingette, Hervé; Frangi, Alejandro; Hunter, Peter; Smith, Nicolas
2010-06-28
Sharing and reusing anatomical models over the Web offers a significant opportunity to progress the investigation of cardiovascular diseases. However, the current sharing methodology suffers from the limitations of static model delivery (i.e. embedding static links to the models within Web pages) and of a disaggregated view of the model metadata produced by publications and cardiac simulations in isolation. In the context of euHeart--a research project targeting the description and representation of cardiovascular models for disease diagnosis and treatment purposes--we aim to overcome the above limitations with the introduction of euHeartDB, a Web-enabled database for anatomical models of the heart. The database implements a dynamic sharing methodology by managing data access and by tracing all applications. In addition to this, euHeartDB establishes a knowledge link with the physiome model repository by linking geometries to CellML models embedded in the simulation of cardiac behaviour. Furthermore, euHeartDB uses the exFormat--a preliminary version of the interoperable FieldML data format--to effectively promote reuse of anatomical models, and currently incorporates Continuum Mechanics, Image Analysis, Signal Processing and System Identification Graphical User Interface (CMGUI), a rendering engine, to provide three-dimensional graphical views of the models populating the database. Currently, euHeartDB stores 11 cardiac geometries developed within the euHeart project consortium.
NASA Astrophysics Data System (ADS)
Jiang, Guodong; Fan, Ming; Li, Lihua
2016-03-01
Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. Applying a computer-aided detection (CAD) system to assist a single radiologist is important for further improving mammographic sensitivity for breast cancer detection. In this study, the design and realization of a prototype remote diagnosis system for mammography based on a cloud platform are proposed. To build this system, technologies including medical image information construction, cloud infrastructure and a human-machine diagnosis model were utilized. Specifically, on the one hand, the web platform for remote diagnosis was established with J2EE web technology, and the back end was realized through the open-source Hadoop framework. On the other hand, the storage system was built on the Hadoop Distributed File System (HDFS), which enables users to easily develop and run applications on massive data and exploits the advantages of cloud computing: high efficiency, scalability and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module in this system implements algorithms that fuse machine and human intelligence. Specifically, we combined diagnoses drawn from doctors' experience and traditional CAD using a man-machine intelligent fusion model based on Alpha-Integration and a multi-agent algorithm. Finally, applications of this system at different levels of the platform are also discussed. This diagnosis system will be of great importance for balancing health resources, lowering medical expenses and improving diagnostic accuracy in basic medical institutions.
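The human-machine score fusion can be illustrated with Amari-style alpha-integration of positive scores, which generalizes weighted arithmetic and geometric means through a single alpha parameter. The weights, alpha value, and the interface to actual doctor/CAD outputs below are assumptions for illustration, not the paper's exact model.

```python
import math

def alpha_integrate(probs, weights, alpha):
    """Alpha-integration of positive scores: alpha = -1 reduces to the
    weighted arithmetic mean, alpha = 1 to the weighted geometric mean."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    if alpha == 1:
        return math.exp(sum(w * math.log(p) for w, p in zip(weights, probs)))
    power = (1 - alpha) / 2
    return sum(w * p ** power for w, p in zip(weights, probs)) ** (1 / power)
```

Fusing, say, a radiologist's malignancy score with a CAD score this way lets one tune alpha to make the combined score more or less conservative than a plain average.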
WE-E-BRB-11: RIVIEW: A Web-Based Viewer for Radiotherapy.
Apte, A; Wang, Y; Deasy, J
2012-06-01
Collaborations involving radiotherapy data collection, such as the recently proposed international radiogenomics consortium, require robust, web-based tools to facilitate reviewing treatment planning information. We present the architecture and prototype characteristics of a web-based radiotherapy viewer. The web-based environment developed in this work consists of the following components: 1) Import of DICOM/RTOG data: CERR was leveraged to import DICOM/RTOG data and convert it to database-friendly RT objects. 2) Extraction and storage of RT objects: the scan and dose distributions were stored as .png files per slice and view plane, with file locations written to the MySQL database; structure contours and DVH curves were written to the database as numeric data. 3) Web interfaces to query, retrieve and visualize the RT objects: the Web application was developed using HTML 5 and Ruby on Rails (RoR) technology following the MVC philosophy, and the open-source ImageMagick library was utilized to overlay scan, dose and structures. The application allows users to (i) QA the treatment plans associated with a study, (ii) query and retrieve patients matching an anonymized ID and study, (iii) review up to 4 plans simultaneously in 4 window panes, and (iv) plot DVH curves for the selected structures and dose distributions. A subset of data for lung cancer patients was used to prototype the system. Five user accounts were created with access to this study. The scans, doses, structures and DVHs for 10 patients were made available via the web application. A web-based system to facilitate QA and support query, retrieve and visualization of RT data was prototyped. The RIVIEW system was developed using open-source and free technology such as MySQL and RoR. We plan to extend the RIVIEW system further to be useful in clinical trial data collection, outcomes research, cohort plan review and evaluation. © 2012 American Association of Physicists in Medicine.
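The DVH plotting described above relies on a cumulative dose-volume histogram, which can be sketched from a flat list of per-voxel doses. The uniform binning convention here is an assumption; clinical systems typically bin in fixed dose increments much like this.

```python
def cumulative_dvh(dose_values, bin_width=1.0):
    """Cumulative DVH: for each dose level, the fraction of voxels in the
    structure receiving at least that dose."""
    n = len(dose_values)
    max_dose = max(dose_values)
    levels, fractions = [], []
    d = 0.0
    while d <= max_dose:
        levels.append(d)
        fractions.append(sum(1 for v in dose_values if v >= d) / n)
        d += bin_width
    return levels, fractions
```

The resulting (level, fraction) pairs are exactly the numeric curve data a viewer like this stores in the database and renders per selected structure.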
Automated expert modeling for automated student evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Robert G.
The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines, ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.
National Geothermal Data System: State Geological Survey Contributions to Date
NASA Astrophysics Data System (ADS)
Patten, K.; Allison, M. L.; Richard, S. M.; Clark, R.; Love, D.; Coleman, C.; Caudill, C.; Matti, J.; Musil, L.; Day, J.; Chen, G.
2012-12-01
In collaboration with the Association of American State Geologists (AASG), the Arizona Geological Survey is leading the effort to bring legacy geothermal data into the U.S. Department of Energy's National Geothermal Data System (NGDS). NGDS is a national, sustainable, distributed, interoperable network of data and service (application) providers entering its final stages of development. Once completed, the geothermal industry, the public, and policy makers will have access to consistent and reliable data, which, in turn, reduces the staff time devoted to finding, retrieving, integrating, and verifying information. With easier access to information, the high cost and risk of geothermal power projects (especially exploration drilling) are reduced. This presentation focuses on the scientific and data integration methodology as well as State Geological Survey contributions to date. The NGDS is built using the U.S. Geoscience Information Network (USGIN) data integration framework to promote interoperability across the Earth sciences community and with other emerging data integration and networking efforts. Core to the USGIN concept is data provenance: data providers maintain and house their own data. After concluding the second year of the project, we have nearly 800 datasets representing over 2 million data points from the state geological surveys. A new AASG-specific search catalog, based on popular internet search formats, enables end users to more easily find and identify geothermal resources in a specific region. Sixteen states, including a consortium of Great Basin states, have initiated new field data collection for submission to the NGDS. The new field data include data from at least 21 newly drilled thermal gradient holes in previously unexplored areas.
Most of the datasets provided to the NGDS are served as Open Geospatial Consortium (OGC) Web Map Services (WMS) and Web Feature Services (WFS), meaning the data are compatible with a wide variety of visualization software. Web services suit the NGDS for several reasons: they preserve data ownership because they are read-only, and new services can be deployed to meet new requirements without modifying existing applications.
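The read-only access pattern described above can be illustrated with a sketch of how a client would form a WFS GetFeature request; the endpoint and layer name below are hypothetical, not actual NGDS services.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, max_features=100):
    """Build an OGC WFS 2.0 KVP GetFeature request URL (read-only data access)."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "count": max_features,
        "outputFormat": "application/json",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = wfs_getfeature_url("http://example.org/geoserver/wfs",
                         "ngds:ThermalGradientHole")
```

Because the request is a plain GET against a published interface, the provider's copy of the data remains the single authoritative source.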
Emergency Response Virtual Environment for Safe Schools
NASA Technical Reports Server (NTRS)
Wasfy, Ayman; Walker, Teresa
2008-01-01
An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors, in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations, such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated, fully interactive, intelligent, and visually compelling solution.
NASA Astrophysics Data System (ADS)
Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.
2017-08-01
Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model is composed of intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture, digital surveying, and processing, (b) developing a library of architectural components, and (c) mapping these architectural components onto the laser scan or digital survey to produce an intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software, and skills. Testing open BIM approaches, in particular IFCs, and the use of game engine platforms is a fundamental component for achieving much wider dissemination. The semantically enriched model can be transferred into a Web-based game engine platform.
Leveraging Open Standard Interfaces in Accessing and Processing NASA Data Model Outputs
NASA Astrophysics Data System (ADS)
Falke, S. R.; Alameh, N. S.; Hoijarvi, K.; de La Beaujardiere, J.; Bambacus, M. J.
2006-12-01
An objective of NASA's Earth Science Division is to develop advanced information technologies for processing, archiving, accessing, visualizing, and communicating Earth Science data. To this end, NASA and other federal agencies have collaborated with the Open Geospatial Consortium (OGC) to research, develop, and test interoperability specifications within projects and testbeds benefiting the government, industry, and the public. This paper summarizes the results of a recent effort under the auspices of the OGC Web Services testbed phase 4 (OWS-4) to explore standardization approaches for accessing and processing the outputs of NASA models of physical phenomena. Within the OWS-4 context, experiments were designed to leverage the emerging OGC Web Processing Service (WPS) and Web Coverage Service (WCS) specifications to access, filter, and manipulate the outputs of the NASA Goddard Earth Observing System (GEOS) and Goddard Chemistry Aerosol Radiation and Transport (GOCART) forecast models. In OWS-4, the intent is to give users more control over the subsets of data that they can extract from the model results, as well as over the final portrayal of that data. To meet that goal, experiments were designed to test the suitability of OGC's Web Processing Service (WPS) and Web Coverage Service (WCS) for filtering, processing, and portraying the model results (including slices by height or by time), and to identify any enhancements to the specifications needed to meet the desired objectives. This paper summarizes the findings of the experiments, highlighting the value of the Web Processing Service in providing standard interfaces for accessing and manipulating model data within spatial and temporal frameworks. The paper also points out the key shortcomings of the WPS, especially in comparison with a SOAP/WSDL approach to solving the same problem.
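The height and time slicing described above can be sketched as a coverage request with dimension subsets. The endpoint and coverage identifier below are invented, and the KVP syntax shown is that of the later WCS 2.0 standard rather than the draft specifications exercised in OWS-4.

```python
from urllib.parse import urlencode

def wcs_getcoverage_url(base_url, coverage_id, height=None, time=None):
    """Build a WCS 2.0 KVP GetCoverage request with optional dimension subsets."""
    params = [
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("request", "GetCoverage"),
        ("coverageId", coverage_id),
    ]
    if height is not None:
        params.append(("subset", f"height({height})"))  # slice by height level
    if time is not None:
        params.append(("subset", f'time("{time}")'))    # slice by time step
    return base_url + "?" + urlencode(params)

# Hypothetical server and coverage name, for illustration only.
url = wcs_getcoverage_url("http://example.org/wcs", "gocart_aerosol",
                          height=500, time="2006-10-01T00:00:00Z")
```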
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Unger, Stephen; Ames, Troy; Frye, Stuart; Chien, Steve; Cappelaere, Pat; Tran, Danny; Derezinski, Linda; Paules, Granville
2007-01-01
This paper will describe the progress of a 3 year research award from the NASA Earth Science Technology Office (ESTO) that began October 1, 2006, in response to a NASA Announcement of Research Opportunity on the topic of sensor webs. The key goal of this research is to prototype an interoperable sensor architecture that will enable interoperability between a heterogeneous set of space-based, Unmanned Aerial System (UAS)-based, and ground-based sensors. Among the key capabilities being pursued is the ability to automatically discover and task the sensors via the Internet and to automatically discover and assemble the necessary science processing algorithms into workflows in order to transform the sensor data into valuable science products. Our first set of sensor web demonstrations will prototype science products useful in managing wildfires and will use such assets as the Earth Observing 1 spacecraft, managed out of NASA/GSFC, a UAS-based instrument, managed out of Ames, and some automated ground weather stations, managed by the Forest Service. Also, we are collaborating with some of the other ESTO awardees to expand this demonstration and create synergy between our research efforts. Finally, we are making use of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite of standards and some Web 2.0 capabilities to leverage emerging technologies and standards. This research will demonstrate and validate a path for rapid, low cost sensor integration which is not tied to a particular system, and can thus absorb new assets in an easily evolvable, coordinated manner. This in turn will help to facilitate the United States contribution to the Global Earth Observation System of Systems (GEOSS), as agreed by the U.S. and 60 other countries at the third Earth Observation Summit held in February of 2005.
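The automatic assembly of processing algorithms into workflows can be sketched as backward chaining over declared inputs and outputs: starting from the desired science product, pick the algorithm that produces it, then recurse on that algorithm's inputs until only raw sensor data remains. The algorithm and product names below are illustrative, not those of the actual wildfire demonstrations.

```python
def assemble_workflow(algorithms, target_product):
    """Order algorithms so each runs after its inputs are available.
    `algorithms` maps name -> (list of input products, output product)."""
    producers = {out: (name, ins) for name, (ins, out) in algorithms.items()}
    ordered = []

    def need(product):
        if product not in producers:      # raw sensor input, nothing to run
            return
        name, ins = producers[product]
        for p in ins:                     # satisfy prerequisites first
            need(p)
        if name not in ordered:
            ordered.append(name)

    need(target_product)
    return ordered

# Hypothetical wildfire-product chain, for illustration only.
algos = {
    "radiometric_cal": (["raw_scene"], "calibrated_scene"),
    "thermal_anomaly": (["calibrated_scene"], "hotspot_map"),
    "burn_extent":     (["hotspot_map", "calibrated_scene"], "fire_product"),
}
plan = assemble_workflow(algos, "fire_product")
```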
Cognetti, G; Cecere, L
2003-12-01
In 2002 the Italian Ministry of Health promoted the institution of a network and a web portal, E-oncology (2), for the seven NHS research institutions specialising in oncology (Istituti di Ricovero e Cura a Carattere Scientifico, IRCCS). One of its aims was to gather and provide information on tumoral pathologies to health professionals and the public. For the optimal organisation of a health web site it is necessary to comply with internationally used standards. The World Wide Web Consortium (W3C) has developed guidelines for the accessibility and usability of web sites, implemented in Italy through government directives. Many international organisations adopt rules and codes of conduct to validate biomedical information and have organised quality portals, such as NLM, OMNI, MEDCIRCLE, and HON. Terminological standards, such as the MeSH thesaurus and UMLS, have been produced by libraries for correct management and effective information retrieval, and are currently used by the most important biomedical web sites. The Dublin Core, a metadata standard for integrating information from heterogeneous archives, has also been developed by the library community. The ease of access to information conceals the complex architecture needed to build a web site. The contribution of different professionals is necessary to guarantee the production of quality medical and health web sites; among them, librarians, who have always been involved with the management of knowledge, bring extremely valuable skills. Furthermore, the libraries' network is essential to guarantee universal access to health information, much of which is still available only against payment, and to contribute to overcoming the 'digital divide' and 'second-level digital divide'.
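As a sketch of how the Dublin Core elements mentioned above are applied in practice, the following renders a metadata record as DC meta tags for a health web page; the element names follow the Dublin Core element set, while the sample values are invented.

```python
import html

def dublin_core_meta(record):
    """Render a metadata record as Dublin Core <meta> tags."""
    tags = []
    for element, value in record.items():
        # html.escape guards against quotes or markup in the values.
        tags.append(f'<meta name="DC.{element}" content="{html.escape(value)}">')
    return "\n".join(tags)

# Invented sample record, for illustration only.
head_tags = dublin_core_meta({
    "title": "Oncology patient information",
    "creator": "E-oncology portal",
    "language": "it",
    "subject": "tumoral pathologies",
})
```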
Web-based expert system for foundry pollution prevention
NASA Astrophysics Data System (ADS)
Moynihan, Gary P.
2004-02-01
Pollution prevention is a complex task. Many small foundries lack the in-house expertise to perform these tasks. Expert systems are a type of computer information system that incorporates artificial intelligence. As noted in the literature, they provide a means of automating specialized expertise. This approach may be further leveraged by implementing the expert system on the internet (or world-wide web). This will allow distribution of the expertise to a variety of geographically-dispersed foundries. The purpose of this research is to develop a prototype web-based expert system to support pollution prevention for the foundry industry. The prototype system identifies potential emissions for a specified process, and also provides recommendations for the prevention of these contaminants. The system is viewed as an initial step toward assisting the foundry industry in better meeting government pollution regulations, as well as improving operating efficiencies within these companies.
NASA Astrophysics Data System (ADS)
Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés
Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies) and to standardized ways of retrieving information on the Web (e.g., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also apply to the user interfaces of WIS, yet these are still built on traditional development paradigms. In this paper we present an approach to closing this globalization/standardization gap in the generation of WIS user interfaces by using a real-time, "bottom-up" composition perspective with COTS-interface components (interface widgets) and trading services.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services, based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
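The "sensors as services" idea in item i) can be sketched as a much-reduced, SensorML-flavoured XML description of a sensor and the service endpoint that exposes it; real SensorML documents follow the full OGC schema, and the identifiers below are hypothetical.

```python
import xml.etree.ElementTree as ET

def describe_sensor(sensor_id, observed_property, service_url):
    """Emit a minimal, SensorML-flavoured description of a sensor as a service."""
    root = ET.Element("PhysicalSystem", id=sensor_id)
    out = ET.SubElement(root, "output", name=observed_property)
    ET.SubElement(out, "serviceEndpoint").text = service_url
    return ET.tostring(root, encoding="unicode")

# Hypothetical sensor and endpoint, for illustration only.
doc = describe_sensor("solar_imager_01", "euv_irradiance",
                      "http://example.org/sos")
```

A registry of such descriptions is what lets a workflow composer discover which service produces which observable.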
NASA Astrophysics Data System (ADS)
Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.
2016-05-01
This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty, and in particular with managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain the situational awareness that informs tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.
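A minimal sketch of the sensing-under-uncertainty mechanic: a player spends a sensing action to reveal a small window of an otherwise hidden board. The board encoding and window size are simplifying assumptions for illustration, not the published RBMC rule set.

```python
def scan(board, center):
    """Reveal the true contents of a 3x3 window around `center` on an
    8x8 board whose full state is hidden from the scanning player."""
    r0, c0 = center
    window = {}
    for r in range(max(0, r0 - 1), min(8, r0 + 2)):
        for c in range(max(0, c0 - 1), min(8, c0 + 2)):
            window[(r, c)] = board.get((r, c), ".")  # "." = empty square
    return window

# Hypothetical position: white knight at (4, 4), black rook at (4, 5).
board = {(4, 4): "N", (4, 5): "r"}
view = scan(board, (4, 4))
```

The sensor-management problem is then choosing where to scan each turn so the accumulated views best constrain the opponent's possible positions.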
Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T
2015-01-01
The Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool that uses a text-mining algorithm to extract specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and to generate various patent landscape graphs and charts. It is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously analyzes technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization acts as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.
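A toy stand-in for this kind of patent text mining: a regex pulls US publication numbers from free text and writes them to CSV (in place of an Excel sheet). The pattern is an assumption covering only two common number formats, not IPAT's actual algorithm.

```python
import csv
import io
import re

# Matches "US 7,654,321"-style grant numbers and
# "US20150123456"-style 11-digit publication numbers.
PATENT_RE = re.compile(r"US\s?\d{1,2}(?:,\d{3}){2}|US\d{11}")

def extract_patent_numbers(text):
    """Return all patent-number-shaped strings found in `text`."""
    return PATENT_RE.findall(text)

def to_csv(numbers):
    """Write the hits to a CSV string, standing in for the Excel output."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["patent_number"])
    for n in numbers:
        writer.writerow([n])
    return buf.getvalue()

hits = extract_patent_numbers(
    "Methods of US 7,654,321 extend the device of US20150123456.")
```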
Ontological engineering versus metaphysics
NASA Astrophysics Data System (ADS)
Tataj, Emanuel; Tomanek, Roman; Mulawka, Jan
2011-10-01
It has been recognized that ontologies form the semantic backbone of the World Wide Web and can be found in knowledge-based systems. A recent survey of this field also suggests that practical artificial intelligence systems may be motivated by this research; strong artificial intelligence, as well as the concept of the homo computer, can also benefit from their use. The main objective of this contribution is to present and review already created ontologies and to identify the main advantages such an approach brings to knowledge management systems. We present what ontological engineering borrows from metaphysics and what feedback it can provide to natural language processing, simulation, and modelling. Potential topics for further development from a philosophical point of view are also outlined.
Bernard, Renaldo; Sabariego, Carla; Cieza, Alarcos
2016-01-01
Background: Mental disorders (MDs) affect almost 1 in 4 adults at some point during their lifetime, and coupled with substance use disorders are the fifth leading cause of disability adjusted life years worldwide. People with these disorders often use the Web as an informational resource, platform for convenient self-directed treatment, and a means for many other kinds of support. However, some features of the Web can potentially erect barriers for this group that limit their access to these benefits, and there is a lack of research looking into this eventuality. Therefore, it is important to identify gaps in knowledge about "what" barriers exist and "how" they could be addressed so that this knowledge can inform Web professionals who aim to ensure the Web is inclusive to this population. Objective: The objective of this study was to provide an overview of existing evidence regarding the barriers people with mental disorders experience when using the Web and the facilitation measures used to address such barriers. Methods: This study involved a systematic review of studies that have considered the difficulties people with mental disorders experience when using digital technologies. Digital technologies were included because knowledge about any barriers here would likely be also applicable to the Web. A synthesis was performed by categorizing data according to the 4 foundational principles of Web accessibility as proposed by the World Wide Web Consortium, which forms the necessary basis for anyone to gain adequate access to the Web. Facilitation measures recommended by studies were later summarized into a set of minimal recommendations. Results: A total of 16 publications were included in this review, comprising 13 studies and 3 international guidelines. Findings suggest that people with mental disorders experience barriers that limit how they perceive, understand, and operate websites. Identified facilitation measures target these barriers in addition to ensuring that Web content can be reliably interpreted by a wide range of user applications. Conclusions: People with mental disorders encounter barriers on the Web, and attempts have been made to remove or reduce these barriers. As forewarned by experts in the area, only a few studies investigating this issue were found. More rigorous research is needed to be exhaustive and to have a larger impact on improving the Web for people with mental disorders. PMID:27282115
Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.
2008-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command and control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools.
The space and in-situ control components of the system will be integrated such that each element is capable of autonomously tasking the other. Sensor-web data acquisition and dissemination will be accomplished through the use of the Open Geospatial Consortium Sensor Web Enablement protocols. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform. © 2008 IEEE.
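The smart bandwidth allocation idea in item 3) can be sketched as a priority queue drained against a byte budget: each node sends its highest-priority packets first until the current link budget is spent. The priorities, sizes, and packet names below are illustrative assumptions, not OASIS internals.

```python
import heapq

def allocate_bandwidth(packets, budget_bytes):
    """Select packets to send, highest priority first, within a byte budget.
    `packets` is a list of (name, priority, size_bytes) tuples."""
    # heapq is a min-heap, so negate priority to pop the largest first.
    heap = [(-priority, size, name) for name, priority, size in packets]
    heapq.heapify(heap)
    sent, used = [], 0
    while heap:
        _neg_p, size, name = heapq.heappop(heap)
        if used + size <= budget_bytes:   # skip packets that don't fit
            sent.append(name)
            used += size
    return sent

# Hypothetical queue on one node: (name, priority, size in bytes).
queue = [("seismic_event", 9, 400), ("health_beacon", 3, 50),
         ("gps_drift", 6, 300)]
sent = allocate_bandwidth(queue, 500)
```

In the real system each node would derive the priority scores from mission needs and re-run the allocation as its local bandwidth estimate changes.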
Rassinoux, A-M
2011-01-01
To summarize excellent current research in the field of knowledge representation and management (KRM). A synopsis of the articles selected for the IMIA Yearbook 2011 is provided and the current trends in the field are highlighted. Over the last decade, with the extension of the text-based web towards a semantically structured web, NLP techniques have experienced renewed interest for knowledge extraction. This trend is corroborated by the five papers selected for the KRM section of the Yearbook 2011. They all depict outstanding studies that exploit NLP technologies to accurately extract meaningful information from various biomedical textual sources. Bringing semantic structure to the meaningful content of textual web pages affords the user cooperative sharing and intelligent finding of electronic data. As exemplified by the best paper selection, more and more advanced biomedical applications aim at exploiting the meaningful richness of free-text documents in order to generate semantic metadata and, recently, to learn and populate domain ontologies. The latter are becoming a key piece, as they portray the semantics of the Semantic Web content. Maintaining their consistency with the documents and semantic annotations that refer to them is a crucial challenge of the Semantic Web for the coming years.
Intelligent Systems Technologies to Assist in Utilization of Earth Observation Data
NASA Technical Reports Server (NTRS)
Ramapriyan, Hampapuram K.; McConaughy, Gail; Lynnes, Christopher; McDonald, Kenneth; Kempler, Steven
2003-01-01
With the launch of several Earth observing satellites over the last decade, we are now in a data rich environment. From NASA's Earth Observing System (EOS) satellites alone, we are accumulating more than 3 TB per day of raw data and derived geophysical parameters. The data products are being distributed to a large user community comprising scientific researchers, educators and operational government agencies. Notable progress has been made in the last decade in facilitating access to data. However, to realize the full potential of the growing archives of valuable scientific data, further progress is necessary in the transformation of data into information, and information into knowledge that can be used in particular applications. Sponsored by NASA's Intelligent Systems Project within the Computing, Information and Communication Technology (CICT) Program, a conceptual architecture study has been conducted to examine ideas to improve data utilization through the addition of intelligence into the archives in the context of an overall knowledge building system. Potential Intelligent Archive concepts include: 1) Mining archived data holdings using Intelligent Data Understanding algorithms to improve metadata to facilitate data access and usability; 2) Building intelligence about transformations on data, information, knowledge, and accompanying services involved in a scientific enterprise; 3) Recognizing the value of results, indexing and formatting them for easy access, and delivering them to concerned individuals; 4) Interacting as a cooperative node in a web of distributed systems to perform knowledge building (i.e., the transformations from data to information to knowledge) instead of just data pipelining; and 5) Being aware of other nodes in the knowledge building system, participating in open systems interfaces and protocols for virtualization, and collaborative interoperability.
This paper presents some of these concepts and identifies issues to be addressed by research in future intelligent systems technology.
Chu, Larry F; Young, Chelsea; Zamora, Abby; Kurup, Viji; Macario, Alex
2010-04-01
Informatics is a broad field encompassing artificial intelligence, cognitive science, computer science, information science, and social science. The goal of this review is to illustrate how Web 2.0 information technologies could be used to improve anesthesia education. Educators in all specialties of medicine are increasingly studying Web 2.0 technologies to maximize postgraduate medical education of housestaff. These technologies include microblogging, blogs, really simple syndication (RSS) feeds, podcasts, wikis, and social bookmarking and networking. 'Anesthesia 2.0' reflects our expectation that these technologies will foster innovation and interactivity in anesthesia-related web resources, embracing the principles of openness, sharing, and interconnectedness that define the Web 2.0 movement. Although several recent studies have shown benefits of implementing these systems in medical education, much more investigation is needed. Although direct practice and observation in the operating room are essential, Web 2.0 technologies hold great promise to innovate anesthesia education and clinical practice, such that the resident learner need not be in a classroom for a didactic talk, or even in the operating room to see how an arterial line is properly placed. Thoughtful research to maximize implementation of these technologies should be a priority for development by academic anesthesiology departments. Web 2.0 and advanced informatics resources will be part of physician lifelong learning and clinical practice.
Bringing Web 2.0 to bioinformatics.
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
2009-01-01
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we propose a pipeline of web services for computer-to-computer data exchange in which users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of the Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the semantic Web, Web services, agents, and grid computing, have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally provides an implementation of the framework.
A Web-based system for the intelligent management of diabetic patients.
Riva, A; Bellazzi, R; Stefanelli, M
1997-01-01
We describe the design and implementation of a distributed computer-based system for the management of insulin-dependent diabetes mellitus. The goal of the system is to support the normal activities of the physicians and patients involved in the care of diabetes by providing them with a set of automated services ranging from data collection and transmission to data analysis and decision support. The system is highly integrated with current practices in the management of diabetes, and it uses Internet technology to achieve high availability and ease of use. In particular, the user interaction takes place through dynamically generated World Wide Web pages, so that all the system's functions share an intuitive graphical user interface.
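The dynamically generated pages can be sketched as a templating function that turns collected readings into HTML on each request; the data model and field names below are minimal assumptions for illustration, not the system's actual interface.

```python
import html

def render_patient_page(name, readings):
    """Generate a patient summary page from (timestamp, glucose mmol/L) pairs."""
    rows = "\n".join(
        f"<tr><td>{html.escape(ts)}</td><td>{value:.1f}</td></tr>"
        for ts, value in readings)
    return (f"<html><body><h1>Glucose log: {html.escape(name)}</h1>"
            f"<table>{rows}</table></body></html>")

# Invented sample data, for illustration only.
page = render_patient_page("Patient A", [("08:00", 6.2), ("12:30", 7.8)])
```

Because the page is rebuilt from the database on every visit, physician and patient always see the latest transmitted data without installing any client software.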
Linked open drug data for pharmaceutical research and development
2011-01-01
There is an abundance of information about drugs available on the Web. Data sources range from medicinal chemistry results, over the impact of drugs on gene expression, to the outcomes of drugs in clinical trials. These data are typically not connected together, which reduces the ease with which insights can be gained. Linking Open Drug Data (LODD) is a task force within the World Wide Web Consortium's (W3C) Health Care and Life Sciences Interest Group (HCLS IG). LODD has surveyed publicly available data about drugs, created Linked Data representations of the data sets, and identified interesting scientific and business questions that can be answered once the data sets are connected. The task force provides recommendations for the best practices of exposing data in a Linked Data representation. In this paper, we present past and ongoing work of LODD and discuss the growing importance of Linked Data as a foundation for pharmaceutical R&D data sharing. PMID:21575203
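Linked Data rests on the RDF triple model, which the following sketches with plain tuples and a pattern-matching query; the drug, gene, and trial facts are invented placeholders, not LODD data sets.

```python
def query(triples, subject=None, predicate=None, obj=None):
    """Return all (s, p, o) triples matching the given pattern;
    a None field acts as a wildcard, as in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Invented facts linking chemistry, expression, and trial data.
triples = [
    ("ex:drugX", "ex:targets", "ex:geneY"),
    ("ex:drugX", "ex:trialOutcome", "ex:phase2_completed"),
    ("ex:geneY", "ex:expressedIn", "ex:liver"),
]
hits = query(triples, subject="ex:drugX")
```

Once independently published data sets share identifiers like `ex:drugX`, a single query can traverse facts that originated in different sources, which is the insight-gaining step the paragraph above describes.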
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface support several new features, including the implementation of a four-level (meta)genome project classification system and a simplified, intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19,200 studies, 56,000 Biosamples, 56,000 sequencing projects and 39,400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate data of varying quality into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
Weather forecasting with open source software
NASA Astrophysics Data System (ADS)
Rautenhaus, Marc; Dörnbrack, Andreas
2013-04-01
To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
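The Web Map Service interface mentioned above boils down to HTTP requests with standardized parameters. A minimal sketch of constructing a WMS 1.1.1 GetMap URL with the Python standard library follows; the host and layer name are placeholders, not the Mission Support System's actual endpoints.

```python
# Sketch: building an OGC WMS 1.1.1 GetMap request URL, the kind of call a
# Web Map Service based forecasting tool issues for each map layer.
# The base URL and layer name are invented placeholders.
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=800, height=600):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",                # lon/lat order in WMS 1.1.1
        "BBOX": ",".join(map(str, bbox)),  # minx,miny,maxx,maxy
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "temperature_850hPa",
                     (-30.0, 30.0, 40.0, 80.0))
print(url)
```

A flight-planning client can then fetch the returned URL with any HTTP library and overlay the PNG on its base map.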
LungMAP: The Molecular Atlas of Lung Development Program
Ardini-Poleske, Maryanne E.; Ansong, Charles; Carson, James P.; Corley, Richard A.; Deutsch, Gail H.; Hagood, James S.; Kaminski, Naftali; Mariani, Thomas J.; Potter, Steven S.; Pryhuber, Gloria S.; Warburton, David; Whitsett, Jeffrey A.; Palmer, Scott M.; Ambalavanan, Namasivayam
2017-01-01
The National Heart, Lung, and Blood Institute is funding an effort to create a molecular atlas of the developing lung (LungMAP) to serve as a research resource and public education tool. The lung is a complex organ with a lengthy development time, driven by interactive gene networks and dynamic cross talk among multiple cell types to control and coordinate lineage specification, cell proliferation, differentiation, migration, morphogenesis, and injury repair. A better understanding of the processes that regulate lung development, particularly alveologenesis, will have a significant impact on survival rates for premature infants born with incomplete lung development and will facilitate lung injury repair and regeneration in adults. A consortium of four research centers, a data coordinating center, and a human tissue repository provides high-quality molecular data of developing human and mouse lungs. LungMAP includes mouse and human data for cross correlation of developmental processes across species. LungMAP is generating foundational data and analysis, creating a web portal for presentation of results and public sharing of data sets, establishing a repository of young human lung tissues obtained through organ donor organizations, and developing a comprehensive lung ontology that incorporates the latest findings of the consortium. The LungMAP website (www.lungmap.net) currently contains more than 6,000 high-resolution lung images and transcriptomic, proteomic, and lipidomic human and mouse data and provides scientific information to stimulate interest in research careers for young audiences. This paper presents a brief description of the research conducted by the consortium, of database and portal development, and of upcoming features that will enhance the LungMAP experience for a community of users. PMID:28798251
Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction
Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon
2016-01-01
Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
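The "most likely state sequence" step described above is, in its standard form, a Viterbi decode over the HMM. The toy sketch below illustrates only that step with a two-state model; the states, observations, probabilities, and the paper's SVI-based training are not reproduced here and all numbers are invented.

```python
# Toy Viterbi decoder for a two-state HMM ("stable" vs "emergency"),
# illustrating only the most-likely-state-sequence computation; all
# probabilities below are invented for the example.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best probability of any path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                          for p in states), key=lambda t: t[0])
                  for s in states})
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for col in reversed(V[1:]):
        path.append(col[path[-1]][1])
    return list(reversed(path))

states = ("stable", "emergency")
obs = ("calm", "calm", "agitated")
start = {"stable": 0.9, "emergency": 0.1}
trans = {"stable": {"stable": 0.8, "emergency": 0.2},
         "emergency": {"stable": 0.3, "emergency": 0.7}}
emit = {"stable": {"calm": 0.9, "agitated": 0.1},
        "emergency": {"calm": 0.2, "agitated": 0.8}}
print(viterbi(obs, states, start, trans, emit))  # ['stable', 'stable', 'emergency']
```

Here the final "agitated" observation flips the decoded state to "emergency", which is the kind of transition an emergency-prediction layer would act on.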
Network-Capable Application Process and Wireless Intelligent Sensors for ISHM
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray
2011-01-01
Intelligent sensor technology and systems are increasingly attractive as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities to wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS), in order to simplify sensor setup, use, and maintenance; to automatically obtain calibration data; and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface with, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. The concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where embedded memory is not available. An HTML-based user interface provides a visual tool to interface with the distributed sensors that a TEDS is associated with, automating the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can support intelligent processes in Integrated Systems Health Management (ISHM): monitoring, detection of anomalies, diagnosis of their causes, prediction of future anomalies, mitigation to maintain operability, and integrated operator awareness of system health. It can also support local data collection and storage.
This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
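The role a TEDS plays in the passage above is that of a small, machine-readable record of identification and calibration data that acquisition software reads to convert a raw analog reading into engineering units. The sketch below shows that idea only; the field names are simplified illustrations, not the actual IEEE 1451 binary TEDS layout.

```python
# Illustrative TEDS-like record. A real IEEE 1451 TEDS is a compact binary
# structure with standardized templates; these fields are a simplification
# chosen only to show how calibration data travels with the sensor.
from dataclasses import dataclass

@dataclass
class Teds:
    manufacturer: str
    model: str
    serial: str
    units: str
    cal_slope: float   # engineering units per volt
    cal_offset: float  # engineering units at 0 V

    def convert(self, volts: float) -> float:
        """Apply the sheet's linear calibration to a raw analog reading."""
        return self.cal_slope * volts + self.cal_offset

# A "virtual TEDS" would be this same record loaded from a downloaded file
# rather than from the transducer's embedded flash memory.
teds = Teds("Acme", "PT-100", "SN-0042", "kPa", cal_slope=250.0, cal_offset=-12.5)
print(teds.convert(2.0), teds.units)  # 487.5 kPa
```

Because the calibration lives in the data sheet rather than in hand-entered configuration, swapping a transducer means loading its TEDS, not re-keying constants, which is the plug-and-play benefit the standard targets.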
Grid Enabled Geospatial Catalogue Web Service
NASA Technical Reports Server (NTRS)
Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush
2004-01-01
Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas, over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web information model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing, and querying of geospatial data and services, and transparent access to replica data and related services in a Grid environment. This information model integrates the information models of the Grid Replica Location Service (RLS) and Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service for the Web (CSW), and draws on the geospatial data metadata standards ISO 19115, FGDC, and NASA EOS Core System, as well as the service metadata standard ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services that provide functions such as subsetting, reformatting, and reprojection. This work facilitates the sharing and interoperation of geospatial resources in a Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science rather than on issues of computing capability, data location, processing, and management. GCWS is also a key component for workflow-based virtual geospatial data production.
Lee, Joo Yong; Kang, Dong Hyuk; Moon, Hong Sang; Kim, Yong Tae; Yoo, Tag Keun; Choi, Hong Yong; Lee, Tchun Yong
2011-01-01
Purpose We performed an analysis of the smartphone legibility of the websites of the Korean Urological Association (KUA) and other urological societies. Materials and Methods This study was conducted on the websites of the KUA and nine other urological societies. Each website was accessed via iPhone Safari and Android Chrome, respectively, to evaluate the establishment and readability of the mobile web pages. The provision of Really Simple Syndication (RSS) feeds by the websites and whether the websites had Twitter and Facebook accounts were evaluated. In addition, a validation test on the web standards was performed by using the World Wide Web Consortium (W3C®) Markup Validation Service, and subsequently the numbers of errors and warnings that occurred were analyzed. Results When accessed via Safari, two websites were legible, four were somewhat legible, and four were somewhat illegible. When accessed via Chrome, two websites were legible, six were somewhat legible, and two were somewhat illegible. One website provided an RSS feed and two websites managed members via separate Twitter accounts. No website supported mobile web pages. The result of the W3C® Markup Validation test on 10 websites showed a mean error rate of 221.6 (range, 13-1,477) and a mean warning rate of 127.13 (range, 0-655). Conclusions The smartphone legibility level of the websites of urological societies was relatively low. Improved smartphone legibility and web standard compliance of the websites of urological societies are required to keep up with the popularity of smartphones. PMID:21379433
Onto-Agents-Enabling Intelligent Agents on the Web
2005-05-01
AIR FORCE RESEARCH LABORATORY, INFORMATION DIRECTORATE, ROME RESEARCH SITE, ROME, NEW YORK. STINFO FINAL REPORT. This report has been reviewed by the Air Force Research Laboratory, Information Directorate, Public Affairs Office (IFOIPA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations. AFRL-IF-RS-TR-2005-178 has been reviewed
ERIC Educational Resources Information Center
Folkestad, James E.; Anderson, Sharon K.
2009-01-01
Is the world "flat" or is the world "spiky"? Although leading authors and thinkers [Florida, 2005] struggle to find the perfect metaphor for describing our 21st century global ecosystem, there is agreement that the landscape is shifting. There is overwhelming agreement that our current education system was designed and…
Air Force and Diversity: The Awkward Embrace
2013-02-14
Streeter is a U.S. Air Force intelligence officer assigned to the Air War College, Air University, Maxwell AFB, AL. She graduated from the United States...future leaders.” Princeton University Office of Human Resources Web site; Wilson et al., Grooming Top Leaders, 4. 28 45. Dr. Fil J. Arenas...Air Force Diversity Strategic Roadmap (2012), 14. 86. Dr. Fil J. Arenas (Associate Professor, Organizational Leadership Studies, Squadron
NOAA's Big Data Partnership and Applications to Ocean Sciences
NASA Astrophysics Data System (ADS)
Kearns, E. J.
2016-02-01
New opportunities for the distribution of NOAA's oceanographic and other environmental data are being explored through NOAA's Big Data Partnership (BDP) with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Corp. and the Open Cloud Consortium. This partnership was established in April 2015 through Cooperative Research and Development Agreements, and is seeking new, financially self-sustaining collaborations between the Partners and the federal government centered upon NOAA's data and their potential value in the information marketplace. We will discuss emerging opportunities for collaboration among businesses and NOAA, progress in making NOAA's ocean data more widely accessible through the Partnerships, and applications based upon this access to NOAA's data.
Design & implementation of distributed spatial computing node based on WPS
NASA Astrophysics Data System (ADS)
Liu, Liping; Li, Guoqing; Xie, Jibo
2014-03-01
Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in a grid environment, while the importance of spatial computing resources is overlooked. To implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically investigates the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification of the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
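A WPS-based computing node is driven by XML Execute requests that name a process and its inputs. The sketch below assembles a minimal WPS 1.0.0 Execute body with the standard library; the process identifier and input name are placeholders, not taken from the paper's prototype.

```python
# Sketch: assembling an OGC WPS 1.0.0 Execute request body. The process
# identifier "demo:Buffer" and the input name "distance" are hypothetical
# examples, not the paper's actual processes.
import xml.etree.ElementTree as ET

WPS = "http://www.opengis.net/wps/1.0.0"
OWS = "http://www.opengis.net/ows/1.1"

def wps_execute(process_id, inputs):
    ET.register_namespace("wps", WPS)
    ET.register_namespace("ows", OWS)
    root = ET.Element(f"{{{WPS}}}Execute", {"service": "WPS", "version": "1.0.0"})
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    data_inputs = ET.SubElement(root, f"{{{WPS}}}DataInputs")
    for name, value in inputs.items():
        inp = ET.SubElement(data_inputs, f"{{{WPS}}}Input")
        ET.SubElement(inp, f"{{{OWS}}}Identifier").text = name
        data = ET.SubElement(inp, f"{{{WPS}}}Data")
        ET.SubElement(data, f"{{{WPS}}}LiteralData").text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_body = wps_execute("demo:Buffer", {"distance": 10})
print(xml_body)
```

A client would POST this body to the node's WPS endpoint; the node parses it, dispatches the named process to its computing resources, and returns a WPS ExecuteResponse.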
NASA Technical Reports Server (NTRS)
Trottier, C. Michael
1996-01-01
Recently, scientists and engineers have investigated the advantages of smart materials and structures by including actuators in material systems for controlling and altering the response of structural environments. Applications of these materials systems include vibration suppression/isolation, precision positioning, damage detection and tunable devices. Some of the embedded materials being investigated for accomplishing these tasks include piezoelectric ceramics, shape memory alloys, and fiber optics. These materials have some benefits and some shortcomings; each is being studied for use in active material design in the SPICES (Synthesis and Processing of Intelligent Cost Effective Structures) Consortium. The focus of this paper concerns the manufacturing aspects of smart structures by incorporating piezoelectric ceramics, shape memory alloys and fiber optics in a reinforced thermoset matrix via resin transfer molding (RTM).
NASA Astrophysics Data System (ADS)
Johnson, L. P.; Austin, S. A.; Howard, A. M.; Boxe, C.; Jiang, M.; Tulsee, T.; Chow, Y. W.; Zavala-Gutierrez, R.; Barley, R.; Filin, B.; Brathwaite, K.
2015-12-01
This presentation describes projects at Medgar Evers College of the City University of New York that contribute to the preparation of a diverse workforce in the areas of ocean modeling, planetary atmospheres, space weather and space technology. Specific projects incorporating both undergraduate and high school students include Assessing Parameterizations of Energy Input to Internal Ocean Mixing, Reaction Rate Uncertainty on Mars Atmospheric Ozone, Remote Sensing of Solar Active Regions and Intelligent Software for Nano-satellites. These projects are accompanied by a newly developed Computational Earth and Space Science course to provide additional background on methodologies and tools for scientific data analysis. This program is supported by NSF award AGS-1359293 REU Site: CUNY/GISS Center for Global Climate Research and the NASA New York State Space Grant Consortium.
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and the issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-five-star service accreditation scheme utilising version 1.3 of the ISO/OGC Web Map Service standard, core ISO 19115 metadata additions, and version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML v3.2 geological web data exchange language standard (http://www.geosciml.org/) with its 30+ associated IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI Simple Lithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation queries over their GeoSciML 3.2 based Web Feature Services and their GeoSciML-Portrayal v2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve geological data, ideally at 1:1,000,000 scale (in practice any scale is now warmly welcomed), as an OGC (Open Geospatial Consortium) standard WMS (Web Map Service) from an available WWW server. The service may be hosted within the geological survey itself, or by a neighbouring, regional, or other institution that offers to serve the data on the survey's behalf, i.e. acts as a 'buddy' by providing the web-serving IT infrastructure. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf).
The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services, and has new functionality to view and use such services, including multiple-projection support. KEYWORDS: OneGeology; GeoSciML v3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
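Service discovery of the kind the portal performs starts by reading each survey's WMS GetCapabilities document and extracting its layer names. A minimal sketch follows; the XML is a small invented snippet in the WMS 1.3.0 namespace, not a real survey's response, and the layer names are placeholders.

```python
# Sketch: extracting layer names from a WMS 1.3.0 GetCapabilities document,
# as a portal does when registering a survey's service. The XML below is a
# minimal invented snippet, not a real OneGeology response.
import xml.etree.ElementTree as ET

CAPS = """<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Capability><Layer>
    <Layer><Name>Survey1_1M_Lithology</Name><Title>Bedrock lithology</Title></Layer>
    <Layer><Name>Survey1_1M_Age</Name><Title>Bedrock age</Title></Layer>
  </Layer></Capability>
</WMS_Capabilities>"""

ns = {"wms": "http://www.opengis.net/wms"}
root = ET.fromstring(CAPS)
layers = [el.text for el in root.findall(".//wms:Layer/wms:Name", ns)]
print(layers)  # ['Survey1_1M_Lithology', 'Survey1_1M_Age']
```

In production the document would be fetched from the service's GetCapabilities URL; parsing it this way gives the portal the layer list, titles, and bounding boxes it needs to offer the service to users.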
Gene Ontology Consortium: going forward.
2015-01-01
The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
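The GO enrichment analysis mentioned above is commonly formulated as a hypergeometric over-representation test: given a study set of genes, is a GO term annotated to more of them than chance would predict? The sketch below shows that standard test; it is not necessarily the exact method behind AmiGO 2, and the gene counts are invented.

```python
# Sketch of the hypergeometric over-representation test commonly used for
# GO term enrichment. AmiGO 2's exact statistics may differ; the numbers
# below are invented for illustration.
from math import comb

def hypergeom_pval(N, K, n, k):
    """P(X >= k) where X ~ Hypergeometric: N genes in the population,
    K annotated to the GO term, n genes in the study set, k of those
    annotated to the term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Example: 20 of 10,000 genes carry the term; a 50-gene study list
# contains 5 of them (expected by chance: 0.1).
p = hypergeom_pval(10_000, 20, 50, 5)
print(f"p = {p:.3e}")
```

A small p-value indicates the term is over-represented in the study set; real tools additionally correct for testing many GO terms at once (e.g. Bonferroni or FDR).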
Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol
Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.
2009-01-01
One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the MultiResolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.
Patel, Ashokkumar A.; Gilbertson, John R.; Showe, Louise C.; London, Jack W.; Ross, Eric; Ochs, Michael F.; Carver, Joseph; Lazarus, Andrea; Parwani, Anil V.; Dhir, Rajiv; Beck, J. Robert; Liebman, Michael; Garcia, Fernando U.; Prichard, Jeff; Wilkerson, Myra; Herberman, Ronald B.; Becich, Michael J.
2007-01-01
Background: The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance, funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public-access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium’s intention was to establish a virtual repository of the many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. Methods: The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the state. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners, all working together to accomplish a common mission.
The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual property/tech transfer agreements, and material transfer agreements that have been approved by each of the member institutions. This was the foundational work that led to the development of a centralized data warehouse that meets each institution’s IRB/HIPAA standards. Results: Currently, this “virtual biorepository” has over 58,000 annotated samples from 11,467 cancer patients available for research purposes. The clinical annotation of tissue samples is done either manually over the internet or in semi-automated batch modes through mapping of local data elements to PCABC common data elements. The database currently holds information on 7188 cases (associated with 9278 specimens and 46,666 annotated blocks and blood samples) of prostate cancer, 2736 cases (associated with 3796 specimens and 9336 annotated blocks and blood samples) of breast cancer and 1543 cases (including 1334 specimens and 2671 annotated blocks and blood samples) of melanoma. These numbers continue to grow, and plans to integrate new tumor sites are in progress. Furthermore, the group has developed a central web-based tool that allows investigators to share their translational (genomics/proteomics) experiment data on research evaluating potential biomarkers via a central location on the Consortium’s web site. Conclusions: The technological achievements and the statewide informatics infrastructure established by the Consortium will enable robust and efficient studies of biomarkers and their relevance to the clinical course of cancer.
Studies resulting from the creation of the Consortium may allow for better classification of cancer types, more accurate assessment of disease prognosis, a better ability to identify the most appropriate individuals for clinical trial participation, and better surrogate markers of disease progression and/or response to therapy. PMID:19455246
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
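The language-driven idea sketched above can be illustrated in a drastically simplified form: given a base language's grammar, the nonterminals that appear on right-hand sides become the variation points where a component, itself a fragment of the same language, could be composed in. The grammar below is an invented toy, not one of the paper's actual language specifications.

```python
# Toy illustration of deriving a component model from a language grammar:
# every nonterminal used on some right-hand side is a potential variation
# point where a component may be plugged in. This is a drastic
# simplification of the paper's language-driven approach.
grammar = {  # invented query-language grammar: nonterminal -> productions
    "Query":   ["SELECT Vars WHERE Pattern"],
    "Vars":    ["?x", "?x ?y"],
    "Pattern": ["Triple", "Triple . Pattern"],
    "Triple":  ["?s ?p ?o"],
}

def variation_points(grammar):
    """Nonterminals appearing on some right-hand side: the places where a
    component written in the same language could be composed in."""
    nts = set(grammar)
    used = {tok for rhss in grammar.values() for rhs in rhss
            for tok in rhs.split() if tok in nts}
    return sorted(used)

print(variation_points(grammar))  # ['Pattern', 'Triple', 'Vars']
```

The appeal of generating this from the grammar is exactly the paper's point: the same derivation works for any base language specification, so each Semantic Web language need not hand-craft its own component model.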
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop modeling to crop modelers. We aim to make running crop models convenient for a wide range of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety, and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. To expand these services further, we are designing a web service framework consisting of layers of web services to support the composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, so users do not need to spend time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA).
This agriculture web service platform demonstrates interoperability of weather data using SOS interface, convenient connections between weather data sources and weather generator, and connecting various services for running crop models for decision support.
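The service-cascade design described above can be sketched as a pipeline in which each stage is a callable service. The stubs below stand in for the real SOS weather source, the format converter, and a DSSAT run; every function, field, and value is hypothetical, chosen only to show the composition mechanism.

```python
# Sketch of the service-cascade idea: each stage is a callable "service"
# and a pipeline composes them in order. The stubs stand in for the real
# SOS weather source, format conversion module, and DSSAT execution; all
# names and values are invented.
def fetch_weather(location):          # stands in for an SOS GetObservation call
    return [{"date": "2015-06-01", "tmax": 31.0, "rain_mm": 4.2}]

def to_model_format(records):         # stands in for data-format conversion
    return [(r["date"], r["tmax"], r["rain_mm"]) for r in records]

def run_crop_model(rows):             # stands in for a DSSAT run
    return {"yield_kg_ha": 4200 + 10 * len(rows)}  # toy formula, not DSSAT

def pipeline(data, *stages):
    """Cascade each service: the output of one stage feeds the next."""
    for stage in stages:
        data = stage(data)
    return data

result = pipeline("16.47N,102.52E", fetch_weather, to_model_format, run_crop_model)
print(result)  # {'yield_kg_ha': 4210}
```

Because each stage shares the same call-and-return shape, a third-party application can swap in a different weather source or model without touching the other stages, which is the interoperability the SOS interface and SCA linking aim for.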
NASA Astrophysics Data System (ADS)
Chen, R. S.; MacManus, K.; Vinay, S.; Yetman, G.
2016-12-01
The Socioeconomic Data and Applications Center (SEDAC), one of 12 Distributed Active Archive Centers (DAACs) in the NASA Earth Observing System Data and Information System (EOSDIS), has developed a variety of operational spatial data services aimed at providing online access, visualization, and analytic functions for geospatial socioeconomic and environmental data. These services include: open web services that implement Open Geospatial Consortium (OGC) specifications such as Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS); spatial query services that support the Web Processing Service (WPS) and Representational State Transfer (REST) interfaces; and web map clients and a mobile app that utilize SEDAC and other open web services. These services may be accessed from a variety of external map clients and visualization tools such as NASA's WorldView, NOAA's Climate Explorer, and ArcGIS Online. More than 200 data layers related to population, settlements, infrastructure, agriculture, environmental pollution, land use, health, hazards, climate change and other aspects of sustainable development are available through WMS, WFS, and/or WCS. Version 2 of the SEDAC Population Estimation Service (PES) supports spatial queries through WPS and REST in the form of a user-defined polygon or circle. The PES returns an estimate of the population residing in the defined area for a specific year (2000, 2005, 2010, 2015, or 2020) based on SEDAC's Gridded Population of the World version 4 (GPWv4) dataset, together with measures of accuracy. The SEDAC Hazards Mapper and the recently released HazPop iOS mobile app enable users to easily submit spatial queries to the PES and see the results. SEDAC has developed an operational virtualized backend infrastructure to manage these services and support their continual improvement as standards change, new data and services become available, and user needs evolve.
An ongoing challenge is to improve the reliability and performance of the infrastructure, in conjunction with external services, to meet both research and operational needs.
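The polygon-based query described above can be sketched from the client side. The following is a minimal, illustrative sketch only: the endpoint URL, the JSON body layout, and the function name are assumptions for demonstration, not the actual SEDAC PES API; only the five valid years and the polygon/circle query concept come from the abstract.

```python
import json

# Hypothetical endpoint (placeholder, NOT the real SEDAC PES URL).
PES_ENDPOINT = "https://sedac.example.org/population/v2/estimate"

# The five GPWv4 estimate years named in the abstract.
VALID_YEARS = {2000, 2005, 2010, 2015, 2020}

def build_polygon_query(coords, year):
    """Build a JSON body for a user-defined-polygon population query.

    coords: list of (lon, lat) vertices; the ring is closed automatically.
    year:   one of the five GPWv4 estimate years.
    """
    if year not in VALID_YEARS:
        raise ValueError(f"year must be one of {sorted(VALID_YEARS)}")
    ring = list(coords)
    if ring[0] != ring[-1]:
        ring.append(ring[0])  # GeoJSON polygon rings must be closed
    return {
        "year": year,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[list(p) for p in ring]],
        },
    }

# Example: a rough box around New York City for the 2015 estimate.
body = build_polygon_query(
    [(-74.1, 40.6), (-73.9, 40.6), (-73.9, 40.9), (-74.1, 40.9)], 2015
)
# An actual submission would then be something like:
#   requests.post(PES_ENDPOINT, json=body)
print(json.dumps(body, indent=2))
```

The service would respond with the population estimate for the enclosed area plus the accuracy measures mentioned above; the exact response schema is documented by SEDAC and is not reproduced here.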
McMurray, Josephine; Strudwick, Gillian; Forchuk, Cheryl; Morse, Adam; Lachance, Jessica; Baskaran, Arani; Allison, Lauren; Booth, Richard
2017-11-02
Intelligent assistive technologies that complement and extend human abilities have proliferated in recent years. Service robots, home automation equipment, and other digital assistant devices possessing artificial intelligence are forms of assistive technologies that have become popular in society. Older adults (>55 years of age) have been identified by industry, government, and researchers as a demographic who can benefit significantly from the use of intelligent assistive technology to support various activities of daily living. The purpose of this scoping review is to summarize the literature on the importance of the concept of "trust" in the adoption of intelligent assistive technologies to assist aging in place by older adults. Using a scoping review methodology, our search strategy will examine the following databases: ACM Digital Library, Allied and Complementary Medicine Database (AMED), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, PsycINFO, Scopus, and Web of Science. Two reviewers will independently screen the initial titles obtained from the search, and these results will be further inspected by other members of the research team for inclusion in the review. This review will provide insights into how the concept of trust is actualized in the adoption of intelligent assistive technology by older adults. Preliminary sensitization to the literature suggests that the concept of trust is fluid, unstable, and intimately tied to the type of intelligent assistive technology being examined. Furthermore, a wide range of theoretical lenses that include elements of trust have been used to examine this concept. This review will describe the concept of trust in the adoption of intelligent assistive technology by older adults, and will provide insights for practitioners, policy makers, and technology vendors for future practice. 
©Josephine McMurray, Gillian Strudwick, Cheryl Forchuk, Adam Morse, Jessica Lachance, Arani Baskaran, Lauren Allison, Richard Booth. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 02.11.2017.
An Industrial-Based Consortium to Develop Premium Carbon Products from Coal Final Report - Part 5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Bruce; Shea, Winton
2010-12-31
Since 1998, The Pennsylvania State University successfully managed the Consortium for Premium Carbon Products from Coal (CPCPC), which was a vehicle for industry-driven research on the promotion, development, and transfer of innovative technologies on premium carbon products from coal to U.S. industry. The CPCPC was an initiative led by Penn State, its co-charter member West Virginia University (WVU), and the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL), which also provided the base funding for the program, with Penn State responsible for consortium management. CPCPC began in 1998 under DOE Cooperative Agreement No. DE-FC26-98FT40350. This agreement ended November 2004, but the CPCPC activity continued under Cooperative Agreement No. DE-FC26-03NT41874, which started October 1, 2003 and ended December 31, 2010. The objective of the second agreement was to continue the successful operation of the CPCPC. The CPCPC enjoyed tremendous success with its organizational structure, which included Penn State and WVU as charter members, numerous industrial affiliate members, and strategic university affiliate members together with NETL, forming a vibrant and creative team for innovative research in the area of transforming coal to carbon products. The key aspect of CPCPC was its industry-led council that selected proposals submitted by CPCPC members to ensure CPCPC target areas had strong industrial support. CPCPC had 58 member companies and universities engaged over the 7-year period of this contract. Members were from 17 states and five countries outside of the U.S. During this period, the CPCPC Executive Council selected 46 projects for funding. DOE/CPCPC provided $3.9 million in funding, or an average of $564,000 per year. The total project costs were $5.45 million, with $1.5 million, or ~28% of the total, provided by the members as cost share. The average project size was $118,000, with $85,900 provided by DOE/CPCPC.
In addition to the research, technology transfer and outreach were a large component of CPCPC's activities. Efficient technology transfer was critical for the deployment of new technologies into the field. CPCPC organized and hosted technology transfer meetings, tours, and tutorials; attended outreach conferences and workshops to represent CPCPC and attract new members; prepared and distributed reports and publications; and developed and maintained a Web site. The second contract ended December 31, 2010, and it is apparent that CPCPC positively impacted the carbon industry and coal research. Statistics and information were compiled to provide a comprehensive account of the consortium's impact and the beneficial outcomes of many of the individual projects. Project fact sheets, success stories, and other project information were prepared. Two topical reports, a Synthesis report and a Web report, were prepared detailing this information.
An Industrial-Based Consortium to Develop Premium Carbon Products from Coal Final Report - Part 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Bruce; Shea, Winton
2010-12-31
An Industrial-Based Consortium to Develop Premium Carbon Products from Coal Final Report - Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Bruce; Shea, Winton
2010-12-31
An Industrial-Based Consortium to Develop Premium Carbon Products from Coal Final Report - Part 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Bruce; Shea, Winton
2010-12-31
An Industrial-Based Consortium to Develop Premium Carbon Products from Coal Final Report - Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Bruce; Shea, Winton
2010-12-31
Restful Implementation of Catalogue Service for Geospatial Data Provenance
NASA Astrophysics Data System (ADS)
Jiang, L. C.; Yue, P.; Lu, X. C.
2013-10-01
Provenance, also known as lineage, is important in understanding the derivation history of data products. Geospatial data provenance helps data consumers to evaluate the quality and reliability of geospatial data. In a service-oriented environment, where data are often consumed or produced by distributed services, provenance can be managed by following the same service-oriented paradigm. The Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) is used for the registration and query of geospatial data provenance by extending the ebXML Registry Information Model (ebRIM). Recent advances in the REpresentational State Transfer (REST) paradigm have shown great promise for the easy integration of distributed resources. RESTful Web services aim to provide a standard way for Web clients to communicate with servers based on REST principles. The existing approach to a provenance catalogue service can be improved by adopting a RESTful design. This paper presents the design and implementation of a catalogue service for geospatial data provenance following the RESTful architectural style. A middleware named the REST Converter is added on top of the legacy catalogue service to support a RESTful-style interface. The REST Converter is composed of a resource request dispatcher and six resource handlers. A prototype service is developed to demonstrate the applicability of the approach.
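The dispatcher-plus-handlers design described in the abstract can be sketched in miniature. The following is an illustrative assumption, not the paper's actual code: the resource path, handler name, and response shape are hypothetical, and a real handler would translate the REST call into a legacy CSW request (e.g. GetRecordById against the ebRIM-extended catalogue) rather than return a stub.

```python
import re

class RestDispatcher:
    """Toy resource request dispatcher: maps (method, path) to a handler."""

    def __init__(self):
        self._routes = []  # list of (HTTP method, compiled path regex, handler)

    def route(self, method, pattern):
        """Register a handler for a /path/{param}-style pattern."""
        # Convert "{name}" segments into named regex groups.
        regex = re.compile(
            "^" + re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern) + "$"
        )

        def register(handler):
            self._routes.append((method.upper(), regex, handler))
            return handler

        return register

    def dispatch(self, method, path):
        """Find the first matching route and call its handler."""
        for m, regex, handler in self._routes:
            match = regex.match(path)
            if m == method.upper() and match:
                return handler(**match.groupdict())
        return {"status": 404, "body": "no such resource"}

dispatcher = RestDispatcher()

@dispatcher.route("GET", "/provenance/{record_id}")
def get_provenance(record_id):
    # A real REST Converter handler would issue a CSW request against the
    # legacy catalogue here and reshape the ebRIM response for the client.
    return {"status": 200, "body": f"lineage for record {record_id}"}

print(dispatcher.dispatch("GET", "/provenance/landsat-123"))
```

A full REST Converter would register one such handler per resource type (the paper describes six), each mediating between REST semantics and the legacy CSW interface.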