Common Data Format (CDF) and Coordinated Data Analysis Web (CDAWeb)
NASA Technical Reports Server (NTRS)
Candey, Robert M.
2010-01-01
The Coordinated Data Analysis Web (CDAWeb)
NASA Astrophysics Data System (ADS)
McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; Papitashvili, N. E.; Roberts, D. A.
2014-12-01
A wide range of current, public, science-quality particle and field data from the Van Allen Probes and related missions is ingested, archived and served to the international science community by SPDF. As an active heliophysics final archive, SPDF now serves 100+ Level-2 and Level-3 data products that fully span the range of measurements from particles and plasmas (RBSPICE, ECT) through magnetic-electric fields and waves (EMFISIS, EFW). This collection of mission data (in standard CDF format with standard ISTP/SPDF metadata) is available through SPDF's CDAWeb user interface, through CDAWeb's web services and associated APIs for IDL and Matlab users, and through direct FTP/HTTP download access. These data are supplemented with orbit displays through our SSCWeb and 4D Orbit Viewer services and HDP/VSPO direct links to investigator sites/resources. This range of data in CDAWeb makes comparison of data among instruments and spacecraft much easier, as well as comparisons and analysis of these data with current data from other missions including THEMIS, TWINS, Cluster, ACE, Wind and now >120 ground magnetometer stations. In addition, SPDF supports data from the BARREL Antarctic balloon program and new data from instruments on the NOAA GOES and POES spacecraft. SPDF will add public data from the MMS mission to this collection when it launches in 2015.
New SPDF Directions and Evolving Services Supporting Heliophysics Research
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Candey, Robert M.; Bilitza, D.; Chimiak, Reine A.; Cooper, John F.; Fung, Shing F.; Han, David B.; Harris, Bernie; Johnson, R.; Klipsch, C.;
2006-01-01
The next advances in Heliophysics science and its paradigm of a Great Observatory require an increasingly integrated and transparent data environment, where data can be easily accessed and used across the boundaries of both missions and traditional disciplines. The Space Physics Data Facility (SPDF) project includes uniquely important multi-mission data services with current data from most operating space physics missions. This paper reviews the capabilities of key services now available and the directions in which they are expected to evolve to enable future multi-mission correlative research. The Coordinated Data Analysis Web (CDAWeb) and Satellite Situation Center Web (SSCWeb), critically supported by the Common Data Format (CDF) effort and supplemented by more focused science services such as OMNIWeb and technical services such as data format translations are important operational capabilities serving the international community today (and cited last year by 20% of the papers published in JGR Space Physics). These services continue to add data from most current missions as SPDF works with new missions such as THEMIS to help enable their unique science goals and the meaningful sharing of their data in a multi-mission correlative context. Recent enhancements to CDF, our 3D Java interactive orbit viewer (TIPSOD), the CDAWeb Plus system, increasing automation of data service population, the new folding of the VSPO effort into SPDF and our continuing thrust towards fully-functional web services APIs to allow ready invocation from distributed external middleware and clients will be shown.
NASA Astrophysics Data System (ADS)
McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R. A.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; King, J. H.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Roberts, D.
2013-12-01
A wide range of current, public, science-quality particle and field data from the Van Allen Probes and related missions is being ingested, archived and served to the international science community by SPDF. As an active heliophysics archive, SPDF now serves some eighty Level-2 (and increasingly Level-3) data products that fully span the range of measurements from particles-plasma (RBSPICE, ECT) through magnetic-electric fields and waves (EMFISIS, EFW). This coherent collection of mission data, in a standard format (CDF) with standard metadata, is available through SPDF's CDAWeb user interface, CDAWeb's web services and associated APIs for IDL and Matlab users, as well as through direct FTP/HTTP download access supplemented with orbit displays through our SSCWeb and 4D Orbit Viewer services and HDP/VSPO direct links to investigator sites/resources. With the dedicated work of the project and instrument teams, these data products are of increasingly high quality and typically current within 2 months or less. Having this range of data in CDAWeb makes comparison of data among instruments and spacecraft much easier, as well as comparisons and analysis of these data with current data from other missions including THEMIS, TWINS, Cluster, ACE, Wind and now >120 ground magnetometer stations. In addition, SPDF supports data from the BARREL Antarctic balloon program and recent data from instruments on the NOAA GOES spacecraft. SPDF will also support public data from the MMS mission when it launches in late 2014.
Services, Perspective and Directions of the Space Physics Data Facility
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Bilitza, Dieter; Candey, Robert M.; Chimiak, Reine A.; Cooper, John F.; Fung, Shing F.; Harris, Bernard T.; Johnson, Rita C.; King, Joseph H.; Kovalick, Tamara;
2008-01-01
The multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project offer unique capabilities supporting science of the Heliophysics Great Observatory and are highly complementary to other services now evolving in the international heliophysics data environment. The VSPO (Virtual Space Physics Observatory) service is an active portal to a wide range of distributed data sources. CDAWeb (Coordinated Data Analysis Web) offers plots, listings and file downloads for current data from many missions across the boundaries of missions and instrument types. CDAWeb now includes extensive new data from STEREO and THEMIS, plus new ROCSAT IPEI data, the latest data from all four TIMED instruments and high-resolution data from all DE-2 experiments. SSCWeb, Helioweb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and identification of spacecraft and ground conjunctions. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock. SPDF maintains NASA's CDF (Common Data Format) standard and a range of associated tools including format translation services. These capabilities are all now available through web services based APIs, one element in SPDF's ongoing work to enable heliophysics community development of Virtual discipline Observatories (e.g. VITMO). We will demonstrate our latest data and capabilities, review the lessons we continue to learn in what science users need and value in this class of services, and discuss our current thinking on the future role and appropriate focus of the SPDF effort in the evolving and increasingly distributed heliophysics data environment.
Science Enabling Roles and Services of SPDF
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Bilitza, Dieter; Candey, Robert M.; Chimiak, Reine A.; Cooper, John F.; Garcia, Leonard N.; Harris, Bernard T.; Johnson, Rita C.; King, Joseph H.; Kovalick, Tamara J.;
2011-01-01
The current Heliophysics Science Data Management Policy defines the roles of the Space Physics Data Facility (SPDF) project as a heliophysics active Final Archive, a focus for critical data infrastructure services and a center of excellence for data and ancillary information services. This presentation will highlight some of our current activities and our understanding of why and how our services are useful to researchers, as well as SPDF's programmatic emphasis in the coming year. We will discuss how, in cooperation with the Heliophysics Virtual discipline Observatories (VxOs), we are working closely with the RBSP and MMS mission teams to support their decisions to use CDF as a primary format for their public data products, to leverage the ongoing data flows and capabilities of CDAWeb (directly and through external clients such as Autoplot) to serve their data in a multi-mission context and to use SSCWeb to assist community science planning and analysis. Among other current activities, we will also discuss and demonstrate our continuing effort to make the Virtual Space Physics Observatory (VSPO) service comprehensive in all significant and NASA-relevant heliophysics data. The OMNI and OMNI High Resolution datasets remain current and heavily cited in publications. We are expanding our FTP file services to include online archived non-CDF data from all active missions, a re-hosting of this function from NSSDC's FTP site. We have extended the definitions of time in CDF to unambiguously and consistently handle leap seconds. We are improving SSCWeb for much faster performance, more capabilities and a web services interface to query functionality. We will also review how CDAWeb data can be easily accessed within IDL, and new features in CDAWeb.
Access and Use of MMS Data through SPDF Services
NASA Astrophysics Data System (ADS)
McGuire, R. E.; Bilitza, D.; Boardsen, S. A.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Rao, U. R.; Roberts, D. A.; Yurow, R. E.
2016-12-01
In its role as a Heliophysics Active Final Archive and in close cooperation with the MMS project and its Science Data Center, the Space Physics Data Facility (SPDF) now serves a full set of public MMS data and QuickLook plots. All SPDF services for these and all other data are available via links from the SPDF home page (http://spdf.gsfc.nasa.gov). SPDF's CDAWeb features MMS Level-2 survey and burst mode data with graphics, listing and data superset/subset functions. These capabilities are available (1) through our html user interface, (2) through calls to our CDAS web services API, and (3) through other interfaces and libraries using the CDAS web services or that otherwise access our holdings, including SPDF's Heliophysics Data Portal and several external systems. As context for use of the MMS data, CDAWeb also serves current data from many other current missions, including the Van Allen Probes 1/2 and the five THEMIS/ARTEMIS spacecraft, as well as ACE, Cluster 1/2/3/4, DMSP 16/17/18, Geotail, GOES 13/14/15, NOAA/POES 15/16/18/19, MetOp POES 1/2, STEREO A/B, TWINS 1/2, Wind and >120 ground-based investigations. This full set of public MMS Level-2 science data and QuickLook plots, and all other public data held by SPDF, are also available for direct file download by HTTP or FTP links from the SPDF home page above. As a reminder, MMS Level-2 data are publicly available about 30 days after the data are taken, and QuickLook survey plots are available about a day after the data are taken. MMS orbits (current and predictive) are served through SPDF's SSCWeb service and our Java-based interactive 4D Orbit Viewer, together with the orbits of many other current missions. Our presentation will discuss recent enhancements to CDAWeb and other services and our plans to support new MMS data products and upcoming heliophysics missions including ICON, GOLD and Solar Probe Plus.
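As an illustration of access route (2) above, the following is a minimal Python sketch of calling the CDAS web services over plain HTTP. The endpoint layout, the idPattern parameter, and the MMS dataset-identifier pattern are assumptions based on the publicly documented CDAS REST interface, not details given in this abstract.
```python
# Minimal sketch of calling the CDAS (CDAWeb) web services over plain HTTP.
# The endpoint layout, the "idPattern" parameter, and the MMS identifier
# pattern are assumptions based on the public CDAS REST documentation.
import requests

BASE = "https://cdaweb.gsfc.nasa.gov/WS/cdasr/1/dataviews/sp_phys"

resp = requests.get(f"{BASE}/datasets",
                    params={"idPattern": "MMS1_FGM.*"},   # assumed filter syntax
                    headers={"Accept": "application/json"},
                    timeout=60)
resp.raise_for_status()
for ds in resp.json().get("DatasetDescription", []):
    print(ds["Id"], "-", ds.get("Label", ""))
```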
New SECAA/NSSDC Capabilities for Accessing ITM Data
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; McGuire, R.
NASA's National Space Science Data Center (NSSDC) archives a large volume of data and models that are of relevance to the International Living with a Star (ILWS) project. Working with NSSDC, its sister organization, the Sun Earth Connection Active Archive (SECAA), has developed a number of data access and browse tools to facilitate user access to this important data source. For the most widely used empirical models (IRI, IGRF, MSIS/CIRA, AE/AP-8), Java-based web interfaces let users compute, list, plot, and download model parameters. We will report on recent enhancements and extensions of these data and model services in the area of ionospheric-thermospheric-mesospheric (ITM) physics. The ATMOWeb system (http://nssdc.gsfc.nasa.gov/atmoweb/) includes data from many of the ITM satellite missions of the sixties, seventies, and eighties (BE-B, DME-A, Alouette 2, AE-B, OGO-6, ISIS-1, ISIS-2, AEROS-A, AE-C, AE-D, AE-E, DE-2, and Hinotori). In addition to time series plots and data retrievals, ATMOWeb now lets users generate scatter plots and linear regression fits for any pair of parameters. Optional upper and lower boundaries let users filter out specific segments of the data and/or certain ranges of orbit parameters (altitude, longitude, local time, etc.). Data from TIMED are being added to the CDAWeb system, including new web service capabilities, to be available jointly with the broad scope of space physics data already served by CDAWeb. We will also present the newest version of the NSSDC/SECAA models web pages. The layout and sequence of these entry pages to the models catalog, archive, and web interfaces has been greatly simplified and brought up-to-date.
Auroras observations of the MAIN in Apatity during 2014/15 winter season
NASA Astrophysics Data System (ADS)
Guineva, V.; Despirak, I.; Kozelov, B.
2017-08-01
In this work we review substorms that originated during the 2014/2015 winter season. Observations of the Multiscale Aurora Imaging Network (MAIN) in Apatity have been used. Solar wind and interplanetary magnetic field parameters were estimated from the 1-min sampled OMNI database from CDAWeb (http://cdaweb.gsfc.nasa.gov/cdaweb/istp_public/). Auroral disturbances were verified by the 10-s sampled data of IMAGE magnetometers and by data of the all-sky camera at Apatity. The subject of the review was the peculiarities in the development of substorms that occurred under different geomagnetic conditions. The behavior of substorms that developed during non-storm times and during different phases of geomagnetic storms is discussed.
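As an illustration of the OMNI retrieval step described above, here is a hedged Python sketch using the cdasws client for CDAWeb; the dataset identifier (OMNI_HRO_1MIN) and the variable names are assumptions about CDAWeb's holdings rather than values given in the abstract.
```python
# Hedged sketch: pulling 1-min OMNI solar-wind/IMF parameters from CDAWeb with
# the cdasws client. The dataset identifier (OMNI_HRO_1MIN) and variable names
# (flow_speed, BZ_GSM) are assumptions about CDAWeb's holdings.
from cdasws import CdasWs

cdas = CdasWs()
status, data = cdas.get_data(
    "OMNI_HRO_1MIN",                  # assumed 1-min OMNI dataset identifier
    ["flow_speed", "BZ_GSM"],         # assumed variable names
    "2014-12-15T00:00:00Z", "2014-12-16T00:00:00Z")
if status["http"]["status_code"] == 200:
    print(data["BZ_GSM"])             # IMF Bz (GSM), nT, for the requested day
```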
Putting Space Physics Data Facility (SPDF) Services to Good Use
NASA Astrophysics Data System (ADS)
Candey, R. M.; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B.; Johnson, R. C.; King, J. H.; Kovalick, T.; Leckner, H.; Liu, M.; McGuire, R. E.; Papitashvili, N. E.; Roberts, A.
2009-12-01
The Space Physics Data Facility (SPDF) project provides heliophysics science-enabling information services and is the most widely used single access point to heliophysics science data and orbits from NASA's solar-heliospheric satellite missions. Our emphasis has been on active service of the best digital data products and key ancillary information with graphics, listings and production of subsetted or merged files (mass downloads or parameter-specific selections). Our services today include: (1) the Heliophysics Resource Gateway (HRG) data finding service (also known as the Virtual Space Physics Observatory or VSPO); (2) data services including the Coordinated Data Analysis Web (CDAWeb), the OMNIWeb compilation of interplanetary parameters (mapped to the Earth's bow shock) and related indices, and their large underlying collection of datasets; (3) orbit information and display services including the Satellite Situation Center (SSCWeb) and the 4D Orbit Viewer interactive Java client; and (4) the Common Data Format (CDF) software library and file format and science file format translation suite. (5) Upcoming is the Heliospheric Event List Manager (HELM) to coordinate lists of interesting events and provide a mechanism for tying together the above services and others. We describe several research projects that heavily used SPDF's services and resulted in publications. Although not actually all used at once, the following research scenario shows how SPDF and VxO services can be combined for studying solar events that produce energetic particles and effects at Earth: use the HRG/VSPO to locate data of interest, perhaps query OMNIWeb for times when energetic particle solar activity is high, and query the SSCWeb orbit location service for when Cluster, Geotail, and Polar/IMAGE are in position to measure the cusp, magnetotail and the Earth's aurora, respectively. Also query SSCWeb for times when Polar and magnetometer ground stations are on the same field lines. Using these times, use CDAWeb to browse data from these spacecraft, and add Wind and ACE field and plasma data to identify interplanetary shocks arriving at Earth. Use HRG to find and retrieve SOHO LASCO CME data at SDAC. Use the SSCWeb 4D Orbit Viewer to display the relative spacecraft positions and geophysical boundaries and to follow the magnetic footpoints of the satellites. Confirm auroral substorm activity by a quick browse of IMAGE FUV and TIMED GUVI data as movies showing the expanding and intensifying auroral oval. Finally, pull these data directly into your own analysis tool (such as ViSBARD or some model in IDL) via our web services or simple FTP transfer to complete the analysis.
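As a small illustration of the orbit-query step in the scenario above, the following Python sketch uses the sscws client for SSCWeb; the observatory identifiers and the shape of the returned structure are assumptions based on the public SSC web services documentation.
```python
# Illustrative sketch of an SSCWeb orbit-location query with the sscws client.
# The observatory identifiers ("cluster1", "polar") and the returned structure
# are assumptions based on the public SSC web services documentation.
from sscws.sscws import SscWs

ssc = SscWs()
result = ssc.get_locations(["cluster1", "polar"],
                           ["2003-10-29T00:00:00Z", "2003-10-30T00:00:00Z"])
for sat in result["Data"]:
    xyz = sat["Coordinates"][0]       # first returned coordinate system
    print(sat["Id"], xyz["X"][:3], xyz["Y"][:3], xyz["Z"][:3])
```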
Heliophysics Legacy Data Restoration
NASA Astrophysics Data System (ADS)
Candey, R. M.; Bell, E. V., II; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Grayzeck, E. J.; Harris, B. T.; Hills, H. K.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; McCaslin, P. W.; McGuire, R. E.; Papitashvili, N. E.; Rhodes, S. A.; Roberts, D. A.; Yurow, R. E.
2016-12-01
The Space Physics Data Facility (SPDF)
An Integrated Nonlinear Analysis library (INA) for solar system plasma turbulence
NASA Astrophysics Data System (ADS)
Munteanu, Costel; Kovacs, Peter; Echim, Marius; Koppan, Andras
2014-05-01
We present an integrated software library dedicated to the analysis of time series recorded in space and adapted to investigate turbulence, intermittency and multifractals. The library is written in MATLAB and provides a graphical user interface (GUI) customized for the analysis of space physics data available online from sources such as the Coordinated Data Analysis Web (CDAWeb), the Automated Multi Dataset Analysis system (AMDA), the Planetary Science Archive (PSA), the World Data Center Kyoto (WDC), the Ulysses Final Archive (UFA) and the Cluster Active Archive (CAA). Three main modules are already implemented in INA: the Power Spectral Density (PSD) Analysis, the Wavelet and Intermittency Analysis, and the Probability Density Functions (PDF) Analysis. The layered structure of the software allows the user to easily switch between different modules/methods while retaining the same time interval for the analysis. The wavelet analysis module includes algorithms to compute and analyse the PSD, the Scalogram, the Local Intermittency Measure (LIM) and the Flatness parameter. The PDF analysis module includes algorithms for computing the PDFs for a range of scales and parameters fully customizable by the user; it also computes the Flatness parameter and enables fast comparison with standard PDF profiles such as, for instance, the Gaussian PDF. The library has already been tested on Cluster and Venus Express data and we will show relevant examples. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS UEFISCDI, project number PN-II-ID PCE-2012-4-0418.
Zen and the Art of Virtual Observatory Maintenance
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2014-12-01
The NASA Science Mission Directorate Science Plan stresses that the primary goals of Heliophysics research focus on the understanding of the Sun's influence on the Earth and other bodies in the solar system. The NASA Heliophysics Division has adopted the Virtual Observatory, or VxO, concept in order to enable scientists to easily discover and access all data products relevant to these goals via web portals that act as clearinghouses. Furthermore, Heliophysics discipline scientists have defined the Space Physics Archive Search and Extract (SPASE) metadata schema in order to describe the contents of such applicable data products with detail extending all the way down to the parameter level. One SPASE metadata description file must be written to describe each data product at the global level. The collection of such data product metadata description files, stored in repositories, provides the searchable content that the VxO web sites require in order to match the list of products to the unique needs of each researcher. The VxO metadata repository content also allows one to provide links to each unique data file contained in the full complement of files on a per data product basis. These links are contained within SPASE "Granule" description files and permit uniform access, worldwide, regardless of data server location, thus enabling the VxO clearinghouse capability. The VxO concept is sound in theory but difficult in practice given that the Heliophysics data environment is diverse, ever expanding, and volatile. Thus, it is imperative to update the VxO metadata repositories in order to provide a complete, accurate, and current portrayal of the data environment. Such attention to detail is not a VxO desire but a necessity in order to support Heliophysics researchers and foster VxO user loyalty. We present an application of these basic tenets to the construction of a VxO repository dedicated to providing access to the CDF-formatted data collection hosted on the NASA Goddard CDAWeb data server. Note that the CDF format is self-describing and thus it provides a source of information for initiating SPASE metadata description at the data product level. Also, the CDAWeb data server provides high-quality data product tracking down to the individual data file level, permitting easy updating of SPASE Granule metadata.
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Candey, Robert M.; Bilitza, D.
2006-01-01
The Sun-Earth Connection Active Archive (SECAA) project of NASA's Space Physics Data Facility operates a range of unique and heavily used multi-mission data services in support of the large-scale science objectives of the Great Observatory, including services such as CDAWeb, the CDAWeb Plus client, SSCWeb, OMNIweb and the CDF data format. In developing and operating these services, we have encountered and continue to struggle with a wide range of issues such as balancing scope and functionality with simplicity and ease of use, understanding the effectiveness of our choices and identifying areas most important for further improvement. In this paper, we will review our key services and then discuss some of our observations and new approaches to understanding and meeting user data service requirements. Some observations are obvious but may still have substantial implications; e.g. functionality without information content is of little user interest, which has led to our recent emphasis on development of web services interfaces, so the content and functionality we already serve is readily and fully available as a building block for new services. Some observations require careful design and tradeoffs; e.g. users will complain when they are offered interfaces with limited options but users are also easily intimidated and become lost when offered extensive options for customization. Some observations remain highly challenging; e.g. a comprehensive multi-mission, multi-source view of all data and services available easily produces a daunting list, but a more selective view can easily lead users to overlook available and relevant data. It is often difficult to obtain and meaningfully interpret measures of true productive usage and overall user satisfaction, even with a variety of techniques including statistics, citations, case studies, user feedback and advisory committees. Most of these issues will apply to and may even be more acute for distributed implementation architectures.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory (VxO) effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products stored in the Common Data Format (CDF) and served out by the CDAWeb and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, and catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWeb and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWeb and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes, such as the occurrence of new, modified, or deleted files, or the addition of new or the deletion of old data products. Next, ADAPT routines analyzed the query results and issued updates to the metadata stored in the UCLA CDAWeb and SPDF metadata registries. In this way, the SPASE metadata registries generated by ADAPT can be relied on to provide up-to-date and complete access to Heliophysics CDF data resources on a daily basis.
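For concreteness, here is a minimal Python sketch of assembling the kind of SPASE Granule record described above; the element names follow the public SPASE schema, while the identifiers and URL are hypothetical examples rather than values from the ADAPT registries.
```python
# Minimal sketch of a SPASE "Granule" record for one CDF file. Element names
# follow the public SPASE schema; the identifiers and the URL below are
# hypothetical examples, not values taken from the ADAPT registries.
import xml.etree.ElementTree as ET

spase = ET.Element("Spase")
granule = ET.SubElement(spase, "Granule")
ET.SubElement(granule, "ResourceID").text = \
    "spase://EXAMPLE/Granule/AC_H0_MFI/ac_h0_mfi_20060101_v04.cdf"
ET.SubElement(granule, "ReleaseDate").text = "2015-12-01T00:00:00Z"
ET.SubElement(granule, "ParentID").text = "spase://EXAMPLE/NumericalData/AC_H0_MFI"
source = ET.SubElement(granule, "Source")
ET.SubElement(source, "SourceType").text = "Data"
ET.SubElement(source, "URL").text = \
    "https://cdaweb.gsfc.nasa.gov/pub/data/example/ac_h0_mfi_20060101_v04.cdf"
ET.dump(spase)  # prints the assembled XML description to stdout
```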
New DMSP Database of Precipitating Auroral Electrons and Ions.
Redmon, Robert J; Denig, William F; Kilcommons, Liam M; Knipp, Delores J
2017-08-01
Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so too have the measurement capabilities, such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) through F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information (NCEI) and the Coordinated Data Analysis Web (CDAWeb). We describe how the new database is being applied to high-latitude studies of: the co-location of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors, from single-observatory studies to coordinated system science investigations.
Preparing WIND for the STEREO Mission
NASA Astrophysics Data System (ADS)
Schroeder, P.; Ogilvie, K.; Szabo, A.; Lin, R.; Luhmann, J.
2006-05-01
The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
NASA Technical Reports Server (NTRS)
Fung, Shing F.; Bilitza, D.; Candey, R.; Chimiak, R.; Cooper, John; Harris, B.; Johnson, R.; King, J.; Kovalick, T.;
2008-01-01
From a user's perspective, the multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project offer a unique range of important data and services highly complementary to other services presently available or now evolving in the international heliophysics data environment. The VSPO (Virtual Space Physics Observatory) service is an active portal to a wide range of distributed data sources. CDAWeb (Coordinated Data Analysis Web) enables plots, listings and file downloads for current data across the boundaries of missions and instrument types (and now including data from THEMIS and STEREO). SSCWeb, Helioweb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and query logic for most missions currently important to heliophysics science. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock as a unique value-added data product. SPDF also maintains NASA's CDF (Common Data Format) standard and a range of associated tools including translation services. These capabilities are all now available through webservices-based APIs as well as through our direct user interfaces. In this paper, we will demonstrate the latest data and capabilities now supported in these multi-mission services, review the lessons we continue to learn in what science users need and value in this class of services, and discuss our current thinking on the future role and appropriate focus of the SPDF effort in the evolving and increasingly distributed heliophysics data environment.
A large-scale view of Space Technology 5 magnetometer response to solar wind drivers.
Knipp, D J; Kilcommons, L M; Gjerloev, J; Redmon, R J; Slavin, J; Le, G
2015-04-01
In this data report we discuss reprocessing of the Space Technology 5 (ST5) magnetometer database for inclusion in NASA's Coordinated Data Analysis Web (CDAWeb) virtual observatory. The mission consisted of three spacecraft flying in elliptical orbits, from 27 March to 27 June 2006. Reprocessing includes (1) transforming the data into the Modified Apex Coordinate System for projection to a common reference altitude of 110 km, (2) correcting gain jumps, and (3) validating the results. We display the averaged magnetic perturbations as a keogram, which allows direct comparison of the full-mission data with the solar wind values and geomagnetic indices. With the data referenced to a common altitude, we find the following: (1) Magnetic perturbations that track the passage of corotating interaction regions and high-speed solar wind; (2) unexpectedly strong dayside perturbations during a solstice magnetospheric sawtooth oscillation interval characterized by a radial interplanetary magnetic field (IMF) component that may have enhanced the accompanying modest southward IMF; and (3) intervals of reduced magnetic perturbations or "calms," associated with periods of slow solar wind, interspersed among variable-length episodic enhancements. These calms are most evident when the IMF is northward or projects with a northward component onto the geomagnetic dipole. The reprocessed ST5 data are in very good agreement with magnetic perturbations from the Defense Meteorological Satellite Program (DMSP) spacecraft, which we also map to 110 km. We briefly discuss the methods used to remap the ST5 data and the means of validating the results against DMSP. Our methods form the basis for future intermission comparisons of space-based magnetometer data.
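The keogram construction described above amounts to averaging the magnetic perturbations in (time, magnetic latitude) bins and displaying the result as an image. Below is a self-contained NumPy sketch with synthetic data; it illustrates the technique, not the actual ST5 processing code.
```python
# Sketch of a keogram: mean |dB| binned in time and magnetic latitude, shown
# as a 2-D image (time on x, latitude on y). Pure NumPy; data are synthetic.
import numpy as np

def keogram(t, mlat, db, t_edges, lat_edges):
    """Mean |dB| in each (time, magnetic-latitude) bin; NaN where empty."""
    sums, _, _ = np.histogram2d(t, mlat, bins=[t_edges, lat_edges], weights=db)
    counts, _, _ = np.histogram2d(t, mlat, bins=[t_edges, lat_edges])
    with np.errstate(invalid="ignore"):
        return (sums / counts).T   # transpose: rows = latitude, cols = time

rng = np.random.default_rng(0)                 # toy data: one day of samples
t = rng.uniform(0, 86400, 10000)               # seconds of day
mlat = rng.uniform(50, 90, 10000)              # magnetic latitude, degrees
db = rng.rayleigh(30, 10000)                   # |dB| perturbation, nT
img = keogram(t, mlat, db, np.linspace(0, 86400, 145), np.linspace(50, 90, 41))
print(img.shape)                               # (40, 144)
```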
SPDF Data and Orbit Services Supporting Open Access, Use and Archiving of MMS Data
NASA Astrophysics Data System (ADS)
McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Roberts, D. A.; Yurow, R. E.
2015-12-01
NASA's Space Physics Data Facility (SPDF) project is now serving MMS definitive and predictive interactive orbit plots, listings and conjunction calculations through our SSCWeb and 4D Orbit Viewer services. In March 2016 and in parallel with the MMS Science Data Center (SDC) at LASP, SPDF will begin publicly serving a complete set of MMS Level-2 and higher, survey and burst-mode science data products from all four spacecraft and all instruments. The initial Level-2 data available will be from September 2015 to early February 2016, with Level-2 products subsequently validated and publicly available with an approximate one month lag. All MMS Level-2 and higher data products are produced in standard CDF format with standard ISTP/SPDF metadata and will be served by SPDF through our CDAWeb data service, including our web services and associated APIs for IDL and Matlab users, and through direct FTP/HTTP directory browse and file downloads. SPDF's ingest, archival preservation and active serving of current MMS science data is part of our role as an active heliophysics final archive. SPDF's ingest of complete and current science data products from other active heliophysics missions, together with SPDF services, will help enable coordinated and correlative MMS science analysis by the open international science community using current data from THEMIS, the Van Allen Probes and other missions including TWINS, Cluster, ACE, Wind, >120 ground magnetometer stations, and instruments on the NOAA GOES and POES spacecraft. Please see the related Candey et al. paper on "SPDF Ancillary Services and Technologies Supporting Open Access, Use and Archiving of MMS Data" for other aspects of what SPDF is doing. All SPDF data and services are available from the SPDF home page at http://spdf.gsfc.nasa.gov.
CDPP activities: Promoting research and education in space physics
NASA Astrophysics Data System (ADS)
Genot, V. N.; Andre, N.; Cecconi, B.; Gangloff, M.; Bouchemit, M.; Dufourg, N.; Pitout, F.; Budnik, E.; Lavraud, B.; Rouillard, A. P.; Heulet, D.; Bellucci, A.; Durand, J.; Delmas, D.; Alexandrova, O.; Briand, C.; Biegun, A.
2015-12-01
The French Plasma Physics Data Centre (CDPP, http://cdpp.eu/) has addressed, for more than 15 years, all issues pertaining to natural plasma data distribution and valorization. Initially established by CNES and CNRS on the foundation of a solid data archive, CDPP activities diversified with the advent of broader networks and interoperability standards, and through fruitful collaborations (e.g. with NASA/PDS): providing access to remote data and designing and building science-driven analysis tools then came to the forefront of CDPP developments. For instance, today AMDA helps scientists all over the world access and analyze data from older to very recent missions (from Voyager, Galileo, Geotail, ... to Maven, Rosetta, MMS, ...) as well as results from models and numerical simulations. Other tools like the Propagation Tool or 3DView allow users to put their data in context and interconnect with other databases (CDAWeb, MEDOC) and tools (Topcat). This presentation will briefly review this evolution, show technical and science use cases, and finally put CDPP activities in the perspective of ongoing collaborative projects (Europlanet H2020, HELCATS, ...) and future missions (BepiColombo, Solar Orbiter, ...).
NASA Technical Reports Server (NTRS)
Merka, J.; Szabo, A.; Narock, T. W.; King, J. H.; Paularena, K. I.; Richardson, J. D.
2003-01-01
The MIT portion of this project was to use the plasma data from IMP 8 to identify bow shock crossings for construction of a bow shock database. In collaboration with Goddard, we determined which shock parameters would be included in the catalog and developed a set of flags for characterizing the data. IMP 8 data from 1973-2001 were surveyed for bow shock crossings; the crossings apparent in the plasma data were compared to a list of crossings chosen in the magnetometer data by Goddard. Differences were reconciled to produce a single list. The data were then provided to the NSSDC for archiving. All the work ascribed to MIT in the proposal was completed.
Database of ion temperature maps during geomagnetic storms
NASA Astrophysics Data System (ADS)
Keesee, Amy M.; Scime, Earl E.
2015-02-01
Ion temperatures as a function of the x and y axes in the geocentric solar magnetospheric (GSM) coordinate system and time are available for 76 geomagnetic storms that occurred during the period July 2008 to December 2013 on CDAWeb. The method for mapping energetic neutral atom data from the Two Wide-angle Imaging Spectrometers (TWINS) mission to the GSM equatorial plane and subsequent ion temperature calculation are described here. The ion temperatures are a measure of the average thermal energy of the bulk ion population in the 1-40 keV energy range. These temperatures are useful for studies of ion dynamics, for placing in situ measurements in a global context, and for establishing boundary conditions for models of the inner magnetosphere and the plasma sheet.
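As a hedged illustration of the temperature quantity described above: for a Maxwellian population, the differential flux goes as j(E) ∝ E exp(-E/kT), so a straight-line fit of ln(j/E) against E yields kT from the slope. The Python sketch below demonstrates this on synthetic data; it is not the TWINS team's actual pipeline.
```python
# One standard way to extract an ion temperature from a 1-40 keV differential
# flux spectrum: for a Maxwellian, j(E) ~ E * exp(-E/kT), so ln(j/E) is linear
# in E with slope -1/kT. Synthetic data; not the TWINS processing chain.
import numpy as np

def maxwellian_temperature(E_keV, j):
    """Fit ln(j/E) = a - E/kT and return kT in keV."""
    slope, _ = np.polyfit(E_keV, np.log(j / E_keV), 1)
    return -1.0 / slope

E = np.linspace(1.0, 40.0, 20)            # energy bin centers, keV
kT_true = 8.0
j = E * np.exp(-E / kT_true)              # synthetic Maxwellian flux
print(round(maxwellian_temperature(E, j), 2))   # recovers ~8.0 keV
```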
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Advanced post-processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools suited for space physics post-processing. Building on the work from the Center for Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The toolkit plugin currently provides tools for reading LFM and Enlil data sets, and provides automated tools for data comparison with NASA's CDAWeb database. As work progresses, many additional tools will be added and, through open-source collaboration, we hope to add readers for additional model types, as well as any additional tools deemed necessary by the scientific community. The ultimate goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
New satellite mission with old data: Rescuing a unique data set
NASA Astrophysics Data System (ADS)
Benson, Robert F.; Bilitza, Dieter
2009-02-01
We review efforts to save a unique data set and scientific results based on the rescued data. The goal of the project was to produce Alouette 2, ISIS 1, and ISIS 2 digital topside ionograms from selected original seven-track analog telemetry tapes. This project was initiated to preserve a significant portion of 60 satellite years of analog data, collected between 1962 and 1990, in digital form before the tapes were discarded. More than 1/2 million digital topside ionograms are now available for downloading at http://nssdcftp.gsfc.nasa.gov and for browsing and plotting at http://cdaweb.gsfc.nasa.gov. We illustrate data products, discuss analysis programs, review scientific results based on the digital data, and recognize those who made the project possible. The scientific results include evidence of extremely low altitude ionospheric peak densities at high latitudes, improved and new ionospheric models including one connecting the F2 topside ionosphere and the plasmasphere, transionospheric HF propagation investigations, and new interpretations of sounder-stimulated plasma emissions that have challenged theorists for decades. The homepage for the ISIS project is at http://nssdc.gsfc.nasa.gov/space/isis/isis-status.html.
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Candey, Robert M.
2007-01-01
SPDF now supports a broad range of data, user services and other activities. These include: CDAWeb current multi-mission data graphics, listings, file subsetting and supersetting by time and parameters; SSCWeb and 3-D Java client orbit graphics, listings and conjunction queries; OMNIWeb 1/5/60 minute interplanetary parameters at Earth; product-level SPASE descriptions of data including holdings of nssdcftp; VSPO SPASE-based heliophysics-wide product and site finding and data use; standard Data format Translation Webservices (DTWS); metrics software and others. These data and services are available through standard user and application webservices interfaces, so middleware services such as the Heliophysics VxOs, and externally-developed clients or services, can readily leverage our data and capabilities. Beyond a short summary of the above, we will conduct the talk as a conversation on evolving VxO needs and our planned approach to leverage such existing and ongoing services.
Simplifying the Analysis of Data from Multiple Heliophysics Instruments and Missions
NASA Astrophysics Data System (ADS)
Bazell, D.; Vandegriff, J. D.
2014-12-01
Understanding the intertwined plasma, particles and fields connecting the Sun and the Earth requires combining data from many diverse sources, but there are still many technological barriers that complicate the merging of data from different instruments and missions. We present an emerging data serving capability that provides a uniform way to access heterogeneous and distributed data. The goal of our data server is to provide a standardized data access mechanism that is identical for data of any format and layout (CDF, custom binary, FITS, netCDF, CSV and other flavors of ASCII, etc). Data remain in their original format and location (i.e., at instrument team sites or existing data centers), and our data server delivers a dynamically reformatted view of the data. Scientists can then use tools (clients that talk to the server) that offer a single interface for browsing, analyzing or downloading many different contemporary and legacy heliophysics data sets. Our current server accesses many CDF data resources at CDAWeb, as well as multiple other instrument team sites. Our webservice will be deployed on the Amazon Cloud at http://datashop.elasticbeanstalk.com/. Two basic clients will also be demonstrated: one in Java and one in IDL. Python, Perl, and Matlab clients are also planned. Complex missions such as Solar Orbiter and Solar Probe Plus will benefit greatly from tools that enable multi-instrument and multi-mission data comparison.
Joint Ne/O and Fe/O Analysis to Diagnose Large Solar Energetic Particle Events during Solar Cycle 23
NASA Astrophysics Data System (ADS)
Malandraki, Olga; Tan, Lun C.; Shao, Xi
2017-04-01
In this work we have examined 29 large SEP events with peak proton intensity Jpp(>60 MeV) > 1 pfu during solar cycle 23. The emphasis of our examination is on a joint analysis of the Ne/O and Fe/O data in the 3-40 MeV/nucleon energy range as covered by the Wind/LEMT and ACE/SIS sensors, in order to differentiate between the Fe-poor and Fe-rich events that emerge from the CME-driven shock acceleration process. Some of our main findings are: (1) an improved ion ratio calculation can be carried out by re-binning ion intensity data into the form of equal bin widths in the logarithmic energy scale; (2) through the analysis we find that the variability of Ne/O and Fe/O ratios can be used to investigate the accelerating shock properties; (3) in particular, we observe a good correlation of the high-energy Ne/O ratio with the source plasma temperature T recently reported by Reames (2016). Therefore, the (Ne/O)n value at high energies should be a proxy of the injection energy in the shock acceleration process, and hence the shock θBn according to the models of Tylka & Lee (2006) as well as Schwadron et al. (2015). Acknowledgements: We gratefully acknowledge the source plasma temperature data provided by D. Reames, Wind/EPACT/LEMT data provided by the NASA/Space Physics Data Facility (SPDF)/CDAWeb, and the ACE/SIS data provided by the ACE Science Center. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637324.
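Finding (1), re-binning ion intensities onto equal bin widths in log energy so that Ne/O ratios are formed over matching intervals, can be sketched in a few lines of NumPy; the input arrays below are illustrative, not LEMT/SIS data.
```python
# Sketch of re-binning ion intensities onto bins of equal width in log(energy)
# so that abundance ratios compare matching energy intervals. Illustrative
# arrays only; not Wind/LEMT or ACE/SIS data.
import numpy as np

def rebin_log(E, j, n_bins=12, lo=3.0, hi=40.0):
    """Interpolate intensity j(E) onto centers equally spaced in log10(E)."""
    centers = np.logspace(np.log10(lo), np.log10(hi), n_bins)
    return centers, np.interp(centers, E, j)

E_ne = np.array([3.2, 5.0, 8.1, 13.0, 21.0, 33.0])   # Ne channel energies, MeV/n
j_ne = np.array([40.0, 22, 9.5, 3.8, 1.3, 0.4])      # Ne intensities
E_o  = np.array([3.5, 5.6, 9.0, 14.5, 23.0, 37.0])   # O channel energies, MeV/n
j_o  = np.array([150.0, 80, 33, 12, 4.1, 1.2])       # O intensities

E, ne = rebin_log(E_ne, j_ne)
_, o = rebin_log(E_o, j_o)
print(ne / o)   # Ne/O ratio on a common log-energy grid
```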
ERIC Educational Resources Information Center
Larson, Ray R.
1996-01-01
Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…
Henry, Anna E; Story, Mary
2009-01-01
To identify food and beverage brand Web sites featuring designated children's areas, assess marketing techniques present on those industry Web sites, and determine nutritional quality of branded food items marketed to children. Systematic content analysis of food and beverage brand Web sites and nutrient analysis of food and beverages advertised on these Web sites. The World Wide Web. One-hundred thirty Internet Web sites of food and beverage brands with top media expenditures based on the America's Top 2000 Brands section of Brandweek magazine's annual "Superbrands" report. A standardized content analysis rating form to determine marketing techniques used on the food and beverage brand Web sites. Nutritional analysis of food brands was conducted. Of 130 Web sites analyzed, 48% featured designated children's areas. These Web sites featured a variety of Internet marketing techniques, including advergaming on 85% of the Web sites and interactive programs on 92% of the Web sites. Branded spokescharacters and tie-ins to other products were featured on the majority of the Web sites, as well. Few food brands (13%) with Web sites that market to children met the nutrition criteria set by the National Alliance for Nutrition and Activity. Nearly half of branded Web sites analyzed used designated children's areas to market food and beverages to children, 87% of which were of low nutritional quality. Nutrition professionals should advocate the use of advertising techniques to encourage healthful food choices for children.
web cellHTS2: a web-application for the analysis of high-throughput screening data.
Pelz, Oliver; Gilsdorf, Moritz; Boutros, Michael
2010-04-12
The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
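For orientation, the following generic Python sketch parallels the kind of per-plate normalization and scoring step such a pipeline performs (plate-median normalization followed by robust z-scores); it does not reproduce cellHTS2's actual R implementation.
```python
# Generic sketch of per-plate normalization and scoring for RNAi screening
# data: plate-median normalization, then robust z-scores per well. This
# parallels, but does not reproduce, the cellHTS2 R/Bioconductor methods.
import numpy as np

def normalize_and_score(plate):
    """plate: 2-D array of raw well intensities from one screening plate."""
    norm = plate / np.median(plate)                   # plate-median normalization
    mad = np.median(np.abs(norm - np.median(norm)))   # robust spread estimate
    return (norm - np.median(norm)) / (1.4826 * mad)  # robust z-score per well

raw = np.random.default_rng(1).lognormal(6, 0.3, size=(16, 24))  # 384-well plate
scores = normalize_and_score(raw)
print((np.abs(scores) > 2).sum(), "candidate hit wells")
```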
Electron Density Profiles of the Topside Ionosphere
NASA Technical Reports Server (NTRS)
Huang, Xue-Qin; Reinisch, Bodo W.; Bilitza, Dieter; Benson, Robert F.
2002-01-01
The existing uncertainties about the electron density profiles in the topside ionosphere, i.e., in the height region from hmF2 to ~2000 km, require the search for new data sources. The ISIS and Alouette topside sounder satellites from the sixties to the eighties recorded millions of ionograms, but most were not analyzed in terms of electron density profiles. In recent years an effort started to digitize the analog recordings to prepare the ionograms for computerized analysis. As of November 2001 about 350,000 ionograms have been digitized from the original 7-track analog tapes. These data are available in binary and CDF format from the anonymous ftp site of the National Space Science Data Center. A search site and browse capabilities on CDAWeb assist the scientific usage of these data. All information and access links can be found at http://nssdc.gsfc.nasa.gov/space/isis/isis-status.html. This paper describes the ISIS data restoration effort and shows how the digital ionograms are automatically processed into electron density profiles from satellite orbit altitude (1400 km for ISIS-2) down to the F peak. Because of the large volume of data an automated processing algorithm is imperative. The TOPside Ionogram Scaler with True height algorithm (TOPIST) software developed for this task is successfully scaling ~70% of the ionograms.
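Since the restored ionograms are distributed in CDF format, a minimal Python sketch of inspecting one file with the cdflib package may be useful; the file name and variable name below are hypothetical and would need to be taken from a real file's metadata.
```python
# Hedged sketch of opening one of the restored ionogram CDFs with the cdflib
# package. The file name and the variable name are hypothetical; real names
# must be read from the file's own metadata via cdf_info().
import cdflib

cdf = cdflib.CDF("isis2_ionogram_example.cdf")   # hypothetical file name
print(cdf.cdf_info())                            # lists the stored variables
freq = cdf.varget("Frequency")                   # assumed variable name
print(freq[:10])
```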
IsoWeb: A Bayesian Isotope Mixing Model for Diet Analysis of the Whole Food Web
Kadoya, Taku; Osada, Yutaka; Takimoto, Gaku
2012-01-01
Quantitative description of food webs provides fundamental information for the understanding of population, community, and ecosystem dynamics. Recently, stable isotope mixing models have been widely used to quantify dietary proportions of different food resources to a focal consumer. Here we propose a novel mixing model (IsoWeb) that estimates diet proportions of all consumers in a food web based on stable isotope information. IsoWeb requires a topological description of a food web, and stable isotope signatures of all consumers and resources in the web. A merit of IsoWeb is that it takes into account variation in trophic enrichment factors among different consumer-resource links. Sensitivity analysis using realistic hypothetical food webs suggests that IsoWeb is applicable to a wide variety of food webs differing in the number of species, connectance, sample size, and data variability. Sensitivity analysis based on real topological webs showed that IsoWeb can allow for a certain level of topological uncertainty in target food webs, including erroneously assuming false links, omission of existent links and species, and trophic aggregation into trophospecies. Moreover, using an illustrative application to a real food web, we demonstrated that IsoWeb can compare the plausibility of different candidate topologies for a focal web. These results suggest that IsoWeb provides a powerful tool to analyze food-web structure from stable isotope data. We provide R and BUGS codes to aid efficient applications of IsoWeb.
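The core mixing idea that IsoWeb generalizes to a whole food web can be illustrated with the simplest case, one isotope and two sources; the Python sketch below uses invented numbers and a fixed trophic enrichment factor, whereas IsoWeb estimates the full Bayesian, web-wide version.
```python
# Simplest stable-isotope mixing case: two sources, one isotope. The diet
# proportion p of source 1 satisfies
#   d_consumer = p*(d1 + TEF1) + (1-p)*(d2 + TEF2).
# Invented numbers; IsoWeb solves the Bayesian, food-web-wide generalization.
def two_source_proportion(d_consumer, d1, d2, tef1=3.4, tef2=3.4):
    """Return the proportion of source 1 in the consumer's diet."""
    s1, s2 = d1 + tef1, d2 + tef2        # enrichment-corrected source signatures
    return (d_consumer - s2) / (s1 - s2)

p = two_source_proportion(d_consumer=9.0, d1=4.0, d2=8.0)  # delta-15N values
print(f"source 1 fraction: {p:.2f}, source 2 fraction: {1 - p:.2f}")
```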
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
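A hypothetical sketch of encapsulating analysis inputs in XML, in the spirit described above, using R's xml2 package (the element names are invented; the paper's actual LAPIN schema is not reproduced here):

    library(xml2)
    # Build a small XML document describing one analysis case
    doc <- xml_new_root("analysisCase")
    xml_add_child(doc, "code", "LAPIN")
    xml_add_child(doc, "parameter", "2.0", name = "machNumber")
    write_xml(doc, "case.xml")   # store; retrieve later and hand to the analysis code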
ERIC Educational Resources Information Center
Henry, Anna E.; Story, Mary
2009-01-01
Objective: To identify food and beverage brand Web sites featuring designated children's areas, assess marketing techniques present on those industry Web sites, and determine nutritional quality of branded food items marketed to children. Design: Systematic content analysis of food and beverage brand Web sites and nutrient analysis of food and…
WMD Intent Identification and Interaction Analysis Using the Dark Web
2016-04-01
WMD Intent Identification and Interaction Analysis Using the Dark Web
Distribution Statement A: Approved for public release; distribution is...
Organization/Institution: University of Arizona
Project Title: WMD Intent Identification and Interaction Analysis Using the Dark Web
Report Period: Final
...and social media analytics. We are leveraging our highly successful Dark Web project as our research testbed (for identifying target adversarial
Seahawk: moving beyond HTML in Web-based bioinformatics analysis.
Gordon, Paul M K; Sensen, Christoph W
2007-06-18
Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
Seahawk: moving beyond HTML in Web-based bioinformatics analysis
Gordon, Paul MK; Sensen, Christoph W
2007-01-01
Background Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. Results We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. Conclusion As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer. PMID:17577405
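Outside Seahawk itself, the flavor of this data-centric import, pulling identifiers out of a page a biologist already works with via XPath, can be sketched in R with xml2 (the URL and XPath expression are hypothetical):

    library(xml2)
    page <- read_html("https://example.org/results.html")   # an existing results page
    # Extract candidate accession numbers from table cells using XPath
    ids <- xml_text(xml_find_all(page, "//td[@class='accession']"))
    head(ids)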
A Structural and Content-Based Analysis for Web Filtering.
ERIC Educational Resources Information Center
Lee, P. Y.; Hui, S. C.; Fong, A. C. M.
2003-01-01
Presents an analysis of the distinguishing features of pornographic Web pages so that effective filtering techniques can be developed. Surveys the existing techniques for Web content filtering and describes the implementation of a Web content filtering system that uses an artificial neural network. (Author/LRW)
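A toy version of such a neural-network page classifier can be sketched with R's nnet package; the per-page features and labels below are invented (a real filter would combine structural and textual features as the article describes).

    library(nnet)
    set.seed(42)
    # Hypothetical per-page features plus a manually assigned label
    pages <- data.frame(term_score = runif(200), link_score = runif(200),
                        image_ratio = runif(200),
                        label = factor(sample(c("ok", "block"), 200, replace = TRUE)))
    fit <- nnet(label ~ ., data = pages, size = 4, maxit = 300, trace = FALSE)
    predict(fit, pages[1:5, ], type = "class")   # classify new pages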
EasyFRAP-web: a web-based tool for the analysis of fluorescence recovery after photobleaching data.
Koulouras, Grigorios; Panagopoulos, Andreas; Rapsomaniki, Maria A; Giakoumakis, Nickolaos N; Taraviras, Stavros; Lygerou, Zoi
2018-06-13
Understanding protein dynamics is crucial in order to elucidate protein function and interactions. Advances in modern microscopy facilitate the exploration of the mobility of fluorescently tagged proteins within living cells. Fluorescence recovery after photobleaching (FRAP) is an increasingly popular functional live-cell imaging technique which enables the study of the dynamic properties of proteins at a single-cell level. As an increasing number of labs generate FRAP datasets, there is a need for fast, interactive and user-friendly applications that analyze the resulting data. Here we present easyFRAP-web, a web application that simplifies the qualitative and quantitative analysis of FRAP datasets. EasyFRAP-web permits quick analysis of FRAP datasets through an intuitive web interface with interconnected analysis steps (experimental data assessment, different types of normalization and estimation of curve-derived quantitative parameters). In addition, easyFRAP-web provides dynamic and interactive data visualization and data and figure export for further analysis after every step. We test easyFRAP-web by analyzing FRAP datasets capturing the mobility of the cell cycle regulator Cdt2 in the presence and absence of DNA damage in cultured cells. We show that easyFRAP-web yields results consistent with previous studies and highlights cell-to-cell heterogeneity in the estimated kinetic parameters. EasyFRAP-web is platform-independent and is freely accessible at: https://easyfrap.vmnet.upatras.gr/.
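The curve-derived parameters mentioned above can be illustrated with a minimal R sketch that fits a single-exponential recovery to a synthetic FRAP trace (easyFRAP-web's own normalization options and fitting details are described in the paper):

    # Synthetic normalized FRAP trace: time (s) and bleached-region intensity
    t <- seq(0, 60, by = 1)
    set.seed(1)
    y <- 0.8 * (1 - exp(-t / 8)) + rnorm(length(t), sd = 0.02)
    # Fit I(t) = A * (1 - exp(-t/tau)); A approximates the mobile fraction
    fit <- nls(y ~ A * (1 - exp(-t / tau)), start = list(A = 0.7, tau = 5))
    coef(fit)
    log(2) * coef(fit)[["tau"]]   # recovery half-time in seconds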
Macroscopic characterisations of Web accessibility
NASA Astrophysics Data System (ADS)
Lopes, Rui; Carriço, Luis
2010-12-01
The Web Science framework poses fundamental questions on the analysis of the Web, by focusing on how microscopic properties (e.g. at the level of a Web page or Web site) emerge into macroscopic properties and phenomena. One research topic on the analysis of the Web is Web accessibility evaluation, which centres on understanding how accessible a Web page is for people with disabilities. However, when framing Web accessibility evaluation on Web Science, we have found that existing research stays at the microscopic level. This article presents an experimental study on framing Web accessibility evaluation into Web Science's goals. This study resulted in novel accessibility properties of the Web not found at microscopic levels, as well as of Web accessibility evaluation processes themselves. We observed at large scale some of the empirical knowledge on how accessibility is perceived by designers and developers, such as the disparity of interpretations of accessibility evaluation tool warnings. We also found a direct relation between accessibility quality and Web page complexity. We provide a set of guidelines for designing Web pages, education on Web accessibility, as well as on the computational limits of large-scale Web accessibility evaluations.
How Japanese students characterize information from web-sites.
Iwahara, A; Yamada, M; Hatta, T; Kawakami, A; Okamoto, M
2000-12-01
How 352 Japanese university students regard web-site information was investigated in two surveys. Application of correspondence analysis and cluster analysis to the questionnaire responses about web-site advertisements showed that students regarded web sites as a new, alien medium different from current media. Students regarded web sites as complicated, intellectual, and impermanent or not memorable. Students got precise information from web sites but did not use it in making decisions to purchase goods.
Things That Work: Roles and Services of SPDF
NASA Technical Reports Server (NTRS)
McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R. A.; Cooper, J. F.; Garcia, L. N.; Han, D. B.; Harris, B. T.; Johnson, R. C.; King, J. H.;
2010-01-01
The current Heliophysics Science Data Management Policy (HpSDMP) defines the roles of the Space Physics Data Facility (SPDF) project as a heliophysics active Final Archive (aFA), a focus for critical data infrastructure services and a center of excellence for data and ancillary information services. This presentation will highlight (1) select current SPDF activities, (2) the lessons we are continuing to learn in how to usefully serve the heliophysics science community and (3) SPDF's programmatic emphasis in the coming year. In cooperation with the Heliophysics Virtual discipline Observatories (VxOs), we are working closely with current, and with upcoming missions such as RBSP and MMS, to define effective approaches to ensure the long-term availability and archiving of mission data, as well as how SPDF services can complement active mission capabilities. We are working to make the Virtual Space Physics Observatory (VSPO) service comprehensive in all significant and NASA-relevant heliophysics data. We will highlight a new CDAWeb interface, a faster SSCWeb, availability of our data through VxO services such as Autoplot, a new capability to easily access our data from within IDL and continuing improvements to CDF including better handling of leap seconds.
NASA Astrophysics Data System (ADS)
Faden, J.; Vandegriff, J. D.; Weigel, R. S.
2016-12-01
Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients, or publish a specification document in hopes that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, to remove these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use, and how data servers might implement the HAPI specification to provide access to their data. This will also include instructions on how Autoplot is installed and used on desktop computers, and used to view data from the RBSP, Juno, and other missions.
Deriving a Typology of Web 2.0 Learning Technologies
ERIC Educational Resources Information Center
Bower, Matt
2016-01-01
This paper presents the methods and outcomes of a typological analysis of Web 2.0 technologies. A comprehensive review incorporating over 2000 links led to identification of over 200 Web 2.0 technologies that were suitable for learning and teaching purposes. The typological analysis involved development of relevant Web 2.0 dimensions, grouping…
A Risk-Analysis Approach to Implementing Web-Based Assessment
ERIC Educational Resources Information Center
Ricketts, Chris; Zakrzewski, Stan
2005-01-01
Computer-Based Assessment is a risky business. This paper proposes the use of a model for web-based assessment systems that identifies pedagogic, operational, technical (non web-based), web-based and financial risks. The strategies and procedures for risk elimination or reduction arise from risk analysis and management and are the means by which…
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
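The idea of sizing the pre-generated process pool from predicted demand can be sketched as a Poisson regression on historical request counts; the R sketch below uses simulated logs and an invented 20% headroom rule, not the authors' actual scheme.

    # Simulated logs: request counts per hour of day over 30 days
    set.seed(7)
    logs <- data.frame(hour = rep(0:23, 30),
                       requests = rpois(720, lambda = rep(c(5, 40), each = 12)))
    fit <- glm(requests ~ factor(hour), family = poisson, data = logs)
    # Pre-spawn processes for the predicted 2 pm load, plus headroom
    pred <- predict(fit, data.frame(hour = 14), type = "response")
    n_processes <- ceiling(pred * 1.2)
    n_processes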
Semantic web for integrated network analysis in biomedicine.
Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y
2009-03-01
The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing the Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talks, drug efficacy analysis and herb-drug interactions analysis.
Comparing cosmic web classifiers using information theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin
We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.
Beyond Electronic Brochures: An Analysis of Singapore Primary School Web Sites
ERIC Educational Resources Information Center
Hu, Chun; Soong, Andrew Kheng Fah
2007-01-01
This study aims to investigate how Singapore primary schools use their web sites, what kind of information is contained in the web sites, and how the information is presented. Based on an analysis of 176 primary school web sites, which represent all but one of the country's primary schools, findings indicate that most of Singapore's primary school…
New Data on the Topside Electron Density Distribution
NASA Technical Reports Server (NTRS)
Huang, Xue-Qin; Reinisch, Bodo; Bilitza, Dieter; Benson, Robert F.
2001-01-01
The existing uncertainties about the electron density profiles in the topside ionosphere, i.e., in the height region from hmF2 to approx. 2000 km, require the search for new data sources. The ISIS and Alouette topside sounder satellites from the sixties to the eighties recorded millions of ionograms but most were not analyzed in terms of electron density profiles. In recent years an effort started to digitize the analog recordings to prepare the ionograms for computerized analysis. As of November 2001 about 350,000 ionograms have been digitized from the original 7-track analog tapes. These data are available in binary and CDF format from the anonymous ftp site of the National Space Science Data Center. A search site and browse capabilities on CDAWeb assist the scientific usage of these data. All information and access links can be found at http://nssdc.gsfc.nasa.gov/space/isis/isis-status.html. This paper describes the ISIS data restoration effort and shows how the digital ionograms are automatically processed into electron density profiles from satellite orbit altitude (1400 km for ISIS-2) down to the F peak. Because of the large volume of data an automated processing algorithm is imperative. The automatic topside ionogram scaler with true height algorithm (TOPIST) software developed for this task is successfully scaling approx. 70% of the ionograms. An 'editing process' is available to manually scale the more difficult ionograms. The automated processing of the digitized ISIS ionograms is now underway, producing a much-needed database of topside electron density profiles for ionospheric modeling covering more than one solar cycle. The ISIS data restoration efforts are supported through NASA's Applied Systems and Information Research Program.
Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin
2011-08-01
We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data on biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and runs under the Apache server on a Linux machine. WebBio provides data management, search and data analysis functions for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). There are two particular features in WebBio: (1) pharmacists can rapidly find out whose patients used contaminated products, for medication safety, and (2) statistics charts for a specific product can be automatically generated, reducing pharmacists' workload. WebBio has successfully turned traditional paperwork into web-based data management.
Using Web Server Logs in Evaluating Instructional Web Sites.
ERIC Educational Resources Information Center
Ingram, Albert L.
2000-01-01
Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
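As a concrete starting point for such analysis, a Common Log Format entry can be decomposed in R with a regular expression (the sample line is invented; real server log formats vary):

    line <- '192.0.2.1 - - [10/Oct/2000:13:55:36 -0700] "GET /syllabus.html HTTP/1.0" 200 2326'
    m <- regmatches(line, regexec(
      '^(\\S+) \\S+ \\S+ \\[([^]]+)\\] "(\\S+) (\\S+) [^"]*" (\\d{3}) (\\d+|-)',
      line, perl = TRUE))[[1]]
    setNames(m[-1], c("host", "timestamp", "method", "path", "status", "bytes"))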
GobyWeb: Simplified Management and Analysis of Gene Expression and DNA Methylation Sequencing Data
Dorff, Kevin C.; Chambwe, Nyasha; Zeno, Zachary; Simi, Manuele; Shaknovich, Rita; Campagne, Fabien
2013-01-01
We present GobyWeb, a web-based system that facilitates the management and analysis of high-throughput sequencing (HTS) projects. The software provides integrated support for a broad set of HTS analyses and offers a simple plugin extension mechanism. Analyses currently supported include quantification of gene expression for messenger and small RNA sequencing, estimation of DNA methylation (i.e., reduced bisulfite sequencing and whole genome methyl-seq), or the detection of pathogens in sequenced data. In contrast to previous analysis pipelines developed for analysis of HTS data, GobyWeb requires significantly less storage space, runs analyses efficiently on a parallel grid, scales gracefully to process tens or hundreds of multi-gigabyte samples, yet can be used effectively by researchers who are comfortable using a web browser. We conducted performance evaluations of the software and found it to either outperform or have similar performance to analysis programs developed for specialized analyses of HTS data. We found that most biologists who took a one-hour GobyWeb training session were readily able to analyze RNA-Seq data with state-of-the-art analysis tools. GobyWeb can be obtained at http://gobyweb.campagnelab.org and is freely available for non-commercial use. GobyWeb plugins are distributed in source code and licensed under the open source LGPL3 license to facilitate code inspection, reuse and independent extensions (http://github.com/CampagneLaboratory/gobyweb2-plugins). PMID:23936070
Hospital web-site marketing: analysis, issues, and trends.
Sanchez, P M; Maier-Donati, P
1999-01-01
As hospitals continue to incorporate web technology into their overall marketing and communications strategies, they face several issues which we explore in this paper. Hospitals' effectiveness in dealing with these issues will affect the benefits received from this technology. We provide an exploratory analysis of current hospital web sites and develop implications for future web site development. Likewise, recommendations based on our research are also provided.
ERIC Educational Resources Information Center
Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.
2000-01-01
These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)
Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.
Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan
2018-05-02
The purpose of this study was to utilize sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. The Tukey test evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical", "reference", and "patient generated" had slightly positive mean sentiments. The Tukey test showed differences between all category pairs except "medical" versus "reference", with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical", "reference", and "patient generated" have significantly higher sentiment scores than Web sites in the "legal" and "news/media" categories.
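The statistical pipeline described, one-way ANOVA across categories followed by Tukey's test, is standard in R; the sketch below uses synthetic scores in place of the study's 8029 sentiment values.

    set.seed(2)
    cats <- c("legal", "medical", "reference", "news_media", "patient")
    df <- data.frame(category = factor(rep(cats, each = 100)),
                     sentiment = rnorm(500, sd = 0.16,
                                       mean = rep(c(-0.05, 0.04, 0.08, -0.03, 0.03),
                                                  each = 100)))
    fit <- aov(sentiment ~ category, data = df)
    summary(fit)    # overall difference across categories
    TukeyHSD(fit)   # pairwise mean differences with adjusted p-values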
Finite Element Analysis for the Web Offset of Wind Turbine Blade
NASA Astrophysics Data System (ADS)
Zhou, Bo; Wang, Xin; Zheng, Changwei; Cao, Jinxiang; Zou, Pingguo
2017-05-01
The web is an important part of a wind turbine blade, improving its bending properties. Much of the blade fabrication process is done by hand, so web offset is one of the common quality defects of wind turbine blades. In this paper, a 3D parametric finite element model of a blade for a 2 MW turbine was established in ANSYS. Stress distributions for different web offset values were studied, covering three kinds of web offset. A systematic study of web offset was carried out by orthogonal experiment, and the factor most important to the stress distribution was identified. The analysis results offer guidance for the design and manufacture of wind turbine blades.
ERIC Educational Resources Information Center
Rauber, Andreas; Bruckner, Robert M.; Aschenbrenner, Andreas; Witvoet, Oliver; Kaiser, Max; Masanes, Julien; Marchionini, Gary; Geisler, Gary; King, Donald W.; Montgomery, Carol Hansen; Rudner, Lawrence M.; Gellmann, Jennifer S.; Miller-Whitehead, Marie; Iverson, Lee
2002-01-01
These six articles discuss Web archives and Web analysis building on data warehouses; international efforts at continuous Web archiving; the Open Video Digital Library; electronic journal collections in academic libraries; online education journals; and an electronic library symposium at the University of British Columbia. (LRW)
Developing web-based data analysis tools for precision farming using R and Shiny
NASA Astrophysics Data System (ADS)
Jahanshiri, Ebrahim; Mohd Shariff, Abdul Rashid
2014-06-01
Technologies that are set to increase the productivity of agricultural practices require more and more data. At the same time, farming data is becoming increasingly cheap to collect and maintain. The bulk of the data collected by sensors and from samples needs to be analysed in an efficient and transparent manner. Web technologies have long been used to develop applications that can assist farmers and managers. Until recently, however, analysing data in an online environment has not been an easy task, especially in the eyes of data analysts. This barrier is now overcome by the availability of new application programming interfaces that can provide real-time web-based data analysis. In this paper we explore the development of a prototype web-based application for data analysis using new facilities in the R statistical package and its web development framework, Shiny. The pros and cons of this type of data analysis environment for precision farming are enumerated and future directions in web application development for agricultural data are discussed.
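A minimal Shiny app of the kind the paper prototypes, a reactive plot over a field data set, fits in a dozen lines of R (the data and column names are invented):

    library(shiny)
    field <- data.frame(x = runif(500), y = runif(500), yield = rnorm(500, 8, 2))
    ui <- fluidPage(
      sliderInput("min_yield", "Minimum yield (t/ha):", min = 0, max = 15, value = 5),
      plotOutput("map")
    )
    server <- function(input, output) {
      output$map <- renderPlot({
        sub <- field[field$yield >= input$min_yield, ]
        plot(sub$x, sub$y, pch = 19, xlab = "x", ylab = "y",
             main = "Sample points above yield threshold")
      })
    }
    shinyApp(ui, server)   # serves the analysis in the browser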
ERIC Educational Resources Information Center
Obilade, Titilola T.; Burton, John K.
2015-01-01
This textual content analysis set out to determine the extent to which the theories, principles, and guidelines in 4 standard books of instructional design and technology were also addressed in 4 popular books on web design. The standard books on instructional design and the popular books on web design were chosen by experts in the fields. The…
Analysis Tool Web Services from the EMBL-EBI.
McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo
2013-07-01
Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
Analysis Tool Web Services from the EMBL-EBI
McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo
2013-01-01
Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338
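For example, the dbfetch service can be called over REST from R in a few lines; the accession below is arbitrary, and current parameter names should be checked against the EMBL-EBI documentation.

    library(httr)
    res <- GET("https://www.ebi.ac.uk/Tools/dbfetch/dbfetch",
               query = list(db = "uniprotkb", id = "P12345",
                            format = "fasta", style = "raw"))
    stop_for_status(res)
    cat(content(res, as = "text", encoding = "UTF-8"))   # FASTA record for the entry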
NASA Astrophysics Data System (ADS)
Vandegriff, J. D.; King, T. A.; Weigel, R. S.; Faden, J.; Roberts, D. A.; Harris, B. T.; Lal, N.; Boardsen, S. A.; Candey, R. M.; Lindholm, D. M.
2017-12-01
We present the Heliophysics Application Programmers Interface (HAPI), a new interface specification that both large and small data centers can use to expose time series data holdings in a standard way. HAPI was inspired by the similarity of existing services at many Heliophysics data centers, and these data centers have collaborated to define a single interface that captures best practices and represents what everyone considers the essential, lowest common denominator for basic data access. This low level access can serve as infrastructure to support greatly enhanced interoperability among analysis tools, with the goal being simplified analysis and comparison of data from any instrument, model, mission or data center. The three main services a HAPI server must perform are 1. list a catalog of datasets (one unique ID per dataset), 2. describe the content of one dataset (JSON metadata), and 3. retrieve numerical content for one dataset (stream the actual data). HAPI defines both the format of the query to the server, and the response from the server. The metadata is lightweight, focusing on use rather than discovery, and the data format is a streaming one, with Comma Separated Values (CSV) being required and binary or JSON streaming being optional. The HAPI specification is available at GitHub, where projects are also underway to develop reference implementation servers that data providers can adapt and use at their own sites. Also in the works are data analysis clients in multiple languages (IDL, Python, Matlab, and Java). Institutions which have agreed to adopt HAPI include Goddard (CDAWeb for data and CCMC for models), LASP at the University of Colorado Boulder, the Particles and Plasma Interactions node of the Planetary Data System (PPI/PDS) at UCLA, the Plasma Wave Group at the University of Iowa, the Space Sector at the Johns Hopkins Applied Physics Lab (APL), and the tsds.org site maintained at George Mason University. Over the next year, the adoption of a uniform way to access time series data is expected to significantly enhance interoperability within the Heliophysics data environment. https://github.com/hapi-server/data-specification
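The three required services map directly onto simple HTTP calls. A hedged R sketch against a hypothetical HAPI server follows; the parameter names (id, time.min, time.max) follow the HAPI 2.x drafts and should be checked against the current specification on GitHub.

    library(jsonlite)
    server  <- "https://hapi.example.org/hapi"                 # hypothetical endpoint
    catalog <- fromJSON(paste0(server, "/catalog"))            # 1. list datasets
    info    <- fromJSON(paste0(server, "/info?id=dataset1"))   # 2. JSON metadata
    # 3. stream numerical content as the required CSV
    d <- read.csv(paste0(server, "/data?id=dataset1",
                         "&time.min=2017-01-01T00:00:00Z",
                         "&time.max=2017-01-02T00:00:00Z"), header = FALSE)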
Web Analytics: A Picture of the Academic Library Web Site User
ERIC Educational Resources Information Center
Black, Elizabeth L.
2009-01-01
This article describes the usefulness of Web analytics for understanding the users of an academic library Web site. Using a case study, the analysis describes how Web analytics can answer questions about Web site user behavior, including when visitors come, the duration of the visit, how they get there, the technology they use, and the most…
An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Mining a Web Citation Database for Author Co-Citation Analysis.
ERIC Educational Resources Information Center
He, Yulan; Hui, Siu Cheung
2002-01-01
Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…
Stable-isotope analysis: a neglected tool for placing parasites in food webs.
Sabadel, A J M; Stumbo, A D; MacLeod, C D
2018-02-28
Parasites are often overlooked in the construction of food webs, despite their ubiquitous presence in almost every type of ecosystem. Researchers who do recognize their importance often struggle to include parasites using classical food-web theory, mainly due to the parasites' multiple hosts and life stages. A novel approach using compound-specific stable-isotope analysis promises to provide considerable insight into the energetic exchanges of parasite and host, which may solve some of the issues inherent in incorporating parasites using a classical approach. Understanding the role of parasites within food webs, and tracing the associated biomass transfers, are crucial to constructing new models that will expand our knowledge of food webs. This mini-review focuses on stable-isotope studies published in the past decade, and introduces compound-specific stable-isotope analysis as a powerful, but underutilized, newly developed tool that may answer many unresolved questions regarding the role of parasites in food webs.
Web usage mining at an academic health sciences library: an exploratory study.
Bracke, Paul J
2004-10-01
This paper explores the potential of multinomial logistic regression analysis to perform Web usage mining for an academic health sciences library Website. Usage of database-driven resource gateway pages was logged for a six-month period, including information about users' network addresses, referring uniform resource locators (URLs), and types of resource accessed. It was found that referring URL did vary significantly by two factors: whether a user was on-campus and what type of resource was accessed. Although the data available for analysis are limited by the nature of the Web and concerns for privacy, this method demonstrates the potential for gaining insight into Web usage that supplements Web log analysis. It can be used to improve the design of static and dynamic Websites today and could be used in the design of more advanced Web systems in the future.
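In outline, such a model can be fit with nnet::multinom, relating referring-URL category to the two significant factors; the data frame and factor levels below are invented.

    library(nnet)
    set.seed(5)
    logs <- data.frame(
      referrer  = factor(sample(c("catalog", "search_engine", "bookmark"), 400, TRUE)),
      on_campus = factor(sample(c("yes", "no"), 400, TRUE)),
      resource  = factor(sample(c("database", "ejournal", "guide"), 400, TRUE)))
    fit <- multinom(referrer ~ on_campus + resource, data = logs, trace = FALSE)
    summary(fit)   # log-odds of each referrer category vs. the baseline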
The Use of Web Search Engines in Information Science Research.
ERIC Educational Resources Information Center
Bar-Ilan, Judit
2004-01-01
Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…
Discovering Authorities and Hubs in Different Topological Web Graph Structures.
ERIC Educational Resources Information Center
Meghabghab, George
2002-01-01
Discussion of citation analysis on the Web considers Web hyperlinks as a source to analyze citations. Topics include basic graph theory applied to Web pages, including matrices, linear algebra, and Web topology; and hubs and authorities, including a search technique called HITS (Hyperlink Induced Topic Search). (Author/LRW)
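The HITS computation at the heart of that technique is a short power iteration over the page adjacency matrix; a self-contained R sketch on a toy four-page web:

    # A[i, j] = 1 if page i links to page j (toy web)
    A <- matrix(c(0, 1, 1, 0,
                  0, 0, 1, 0,
                  1, 0, 0, 1,
                  0, 0, 1, 0), nrow = 4, byrow = TRUE)
    h <- rep(1, 4); a <- rep(1, 4)
    for (k in 1:50) {
      a <- as.vector(t(A) %*% h); a <- a / sqrt(sum(a^2))  # authorities: cited by good hubs
      h <- as.vector(A %*% a);    h <- h / sqrt(sum(h^2))  # hubs: cite good authorities
    }
    round(cbind(hub = h, authority = a), 3)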
Corporate Web Sites in Traditional Print Advertisements.
ERIC Educational Resources Information Center
Pardun, Carol J.; Lamb, Larry
1999-01-01
Describes the Web presence in print advertisements to determine how marketers are creating bridges between traditional advertising and the Internet. Content analysis showed Web addresses in print ads; categories of advertisers most likely to link print ads with Web sites; and whether the Web site attempts to develop a database of potential…
Extracting Macroscopic Information from Web Links.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…
Scalable web services for the PSIPRED Protein Analysis Workbench.
Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T
2013-07-01
Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our case study is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. In addition, our programming framework enables developers to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.
HTSstation: A Web Application and Open-Access Libraries for High-Throughput Sequencing Data Analysis
David, Fabrice P. A.; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J.; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. In addition, our programming framework enables developers to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch. PMID:24475057
Numerical analysis of beam with sinusoidally corrugated webs
NASA Astrophysics Data System (ADS)
Górecki, Marcin; Pieńko, Michał; Łagoda, Grażyna
2018-01-01
The paper presents numerical test results for a steel beam with a sinusoidally corrugated web, obtained in Autodesk Algor Simulation Professional 2010. The analysis was preceded by laboratory tests covering the beam's behavior under four-point bending as well as the study of material characteristics. The significant web thickness and the tools available in the software allowed us to analyze the behavior of the plate girder as a beam, and also to observe the stresses arising in the characteristic element, the corrugated web. The stress distribution observed on both web surfaces was analyzed.
Using Spider-Web Patterns To Determine Toxicity
NASA Technical Reports Server (NTRS)
Noever, David A.; Cronise, Raymond J.; Relwani, Rachna A.
1995-01-01
Method of determining toxicities of chemicals involves recording and analysis of spider-web patterns. Based on the observation that spiders exposed to various chemicals spin webs that differ, in various ways, from normal webs. Potential alternative to toxicity testing on higher animals.
The Evolution of Web Searching.
ERIC Educational Resources Information Center
Green, David
2000-01-01
Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…
Web-video-mining-supported workflow modeling for laparoscopic surgeries.
Liu, Rui; Zhang, Xiaoli; Zhang, Hao
2016-11-01
As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems.
Using a Web GIS Plate Tectonics Simulation to Promote Geospatial Thinking
ERIC Educational Resources Information Center
Bodzin, Alec M.; Anastasio, David; Sharif, Rajhida; Rutzmoser, Scott
2016-01-01
Learning with Web-based geographic information system (Web GIS) can promote geospatial thinking and analysis of georeferenced data. Web GIS can enable learners to analyze rich data sets to understand spatial relationships that are managed in georeferenced data visualizations. We developed a Web GIS plate tectonics simulation as a capstone learning…
The Role of Human Web Assistants in E-Commerce: An Analysis and a Usability Study.
ERIC Educational Resources Information Center
Aberg, Johan; Shahmehri, Nahid
2000-01-01
Discusses electronic commerce and presents the concept of Web assistants, human assistants working in an electronic Web shop. Presents results of a usability study of a prototype adaptive Web assistant system that show users were enthusiastic about the concept of Web assistants and its implications. (Author/LRW)
The NASA Heliophysics Active Final Archive at the Space Physics Data Facility
NASA Technical Reports Server (NTRS)
McGuire, Robert E.
2012-01-01
The 2009 NASA Heliophysics Science Data Management Policy re-defined and extended the responsibilities of the Space Physics Data Facility (SPDF) project. Building on SPDF's established capabilities, the new policy assigned the role of active "Final Archive" for non-solar NASA Heliophysics data to SPDF. The policy also recognized and formalized the responsibilities of SPDF as a source for critical infrastructure services such as VSPO to the overall Heliophysics Data Environment (HpDE) and as a Center of Excellence for existing SPDF science-enabling services and software including CDAWeb, SSCWeb/4D Orbit Viewer, OMNIweb and CDF. We will focus this talk on the principles, strategies and planned SPDF architecture to effectively and efficiently perform these roles, with special emphasis on how SPDF will ensure the long-term preservation and ongoing online community access to all the data entrusted to SPDF. We will lay out our archival philosophy and what we are advocating in our work with NASA missions both current and future, and with potential providers of NASA and NASA-relevant archival data, and how we will make the data and metadata held by SPDF accessible to other systems and services within the overall HpDE. We will also briefly review our current services, their metrics and our current plans and priorities for their evolution.
A study of medical and health queries to web search engines.
Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirkko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk
2004-03-01
This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.
Using Maps in Web Analytics to Evaluate the Impact of Web-Based Extension Programs
ERIC Educational Resources Information Center
Veregin, Howard
2015-01-01
Maps can be a valuable addition to the Web analytics toolbox for Extension programs that use the Web to disseminate information. Extension professionals use Web analytics tools to evaluate program impacts. Maps add a unique perspective through visualization and analysis of geographic patterns and their relationships to other variables. Maps can…
How Public Is the Web?: Robots, Access, and Scholarly Communication.
ERIC Educational Resources Information Center
Snyder, Herbert; Rosenbaum, Howard
1998-01-01
Examines the use of the Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators show that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
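Checking a site's use of the Robot Exclusion Protocol amounts to fetching and scanning its robots.txt, as in this short R sketch (the URL is a placeholder):

    robots <- readLines("https://www.example.edu/robots.txt", warn = FALSE)
    grep("^Disallow:", robots, value = TRUE)   # paths compliant robots must skip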
Faculty Recommendations for Web Tools: Implications for Course Management Systems
ERIC Educational Resources Information Center
Oliver, Kevin; Moore, John
2008-01-01
A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…
Web Log Analysis: A Study of Instructor Evaluations Done Online
ERIC Educational Resources Information Center
Klassen, Kenneth J.; Smith, Wayne
2004-01-01
This paper focuses on developing a relatively simple method for analyzing web-logs. It also explores the challenges and benefits of web-log analysis. The study of student behavior on this site provides insights into website design and the effectiveness of this site in particular. Another benefit realized from the paper is the ease with which these…
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze user's queries for information retrieval. Results suggest most users do not employ advanced search features, and the linguistic structure often resembles a human-human communication model that is not always successful in human-computer communication.…
NASA Astrophysics Data System (ADS)
Madiraju, Praveen; Zhang, Yanqing
2002-03-01
When a user logs in to a website, behind the scenes the user leaves his/her impressions, usage patterns and access patterns in the web server's log file. A web usage mining agent can analyze these web logs to help web developers improve the organization and presentation of their websites. It can also help system administrators improve system performance. Web logs provide invaluable help in creating adaptive web sites and in network traffic analysis. This paper presents the design and implementation of a Web usage mining agent for digging into web log files.
ER2OWL: Generating OWL Ontology from ER Diagram
NASA Astrophysics Data System (ADS)
Fahad, Muhammad
Ontology is the fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented in structured analysis and structured design (SASD), especially in simple or Extended ER Diagram (ERD) form. Such systems need up-gradation to become part of the semantic web. In this paper, we present ERD-to-OWL-DL ontology transformation rules at the concrete level. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontology for semantic web fundamentals and helps software engineers in upgrading the structured analysis and design artifact, the ERD, to components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies by reusing existing entity relationship models.
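The flavor of such rules, an ER entity becoming an owl:Class and its attributes becoming datatype properties, can be sketched as a string-template transformation in R; this is a simplification for illustration, not the paper's full rule set.

    # Emit Turtle triples for one entity and its attributes
    er_to_owl <- function(entity, attributes) {
      c(sprintf(":%s a owl:Class .", entity),
        sprintf(":%s a owl:DatatypeProperty ; rdfs:domain :%s .", attributes, entity))
    }
    cat(er_to_owl("Employee", c("name", "salary")), sep = "\n")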
Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A
2011-11-29
Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used in this study are based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
2011-01-01
Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used in this study are based on Tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application to the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392
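The paper's abstract does not publish its service endpoints, so the fragment below is purely illustrative of the REST style it describes: a GET request for a rendered density map, with a made-up base URL and parameter names.

```python
import requests

# Hypothetical REST geoprocessing endpoint and parameters, for illustration
# only -- the abstract does not disclose the actual service URL or API.
BASE = "https://example.org/geoservices"
resp = requests.get(
    f"{BASE}/densitymap",
    params={"disease": "TB", "city": "Barcelona", "year": 2009, "format": "png"},
    timeout=30,
)
resp.raise_for_status()
with open("tb_density_2009.png", "wb") as f:
    f.write(resp.content)  # the service would return a rendered density map
```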
A Framework for Web Usage Mining in Electronic Government
NASA Astrophysics Data System (ADS)
Zhou, Ping; Le, Zhongjian
Web usage mining has been a major component of management strategies to enhance organizational analysis and decision making. The literature on strategies and technologies for effectively employing Web usage mining is quite vast. In recent years, E-government has received much attention from researchers and practitioners. Huge amounts of user access data are produced in electronic government Web sites every day. The role of these data in the success of government management cannot be overstated, because they affect government analysis, prediction, strategy, tactical and operational planning, and control. Web usage mining in E-government has an important role to play in setting government objectives, discovering citizen behavior, and determining future courses of action, yet it has not received adequate attention from researchers or practitioners. We developed a framework to promote a better understanding of the importance of Web usage mining in E-government. Using the current literature, we developed the framework presented herein, in the hope that it will stimulate more interest in this important area.
Online data analysis using Web GDL
NASA Astrophysics Data System (ADS)
Jaffey, A.; Cheung, M.; Kobashi, A.
2008-12-01
The ever-improving capability of modern astronomical instruments to capture data at high spatial resolution and cadence is opening up unprecedented opportunities for scientific discovery. When data sets become so large that they cannot easily be transferred over the Internet, the researcher must find alternative ways to perform data analysis. One strategy is to bring the data analysis code to where the data resides. We present Web GDL, an implementation of GDL (GNU Data Language, an open-source incremental compiler compatible with IDL) that allows users to perform interactive data analysis within a web browser.
Analysis of pathology department Web sites and practical recommendations.
Nero, Christopher; Dighe, Anand S
2008-09-01
There are numerous customers for pathology departmental Web sites, including pathology department staff, clinical staff, residency applicants, job seekers, and other individuals outside the department seeking department information. Despite the increasing importance of departmental Web sites as a means of distributing information, no analysis has been done to date of the content and usage of pathology department Web sites. In this study, we analyzed pathology department Web sites to examine the elements present on each site and to evaluate the use of search technology on these sites. Further, we examined the usage patterns of our own departmental Internet and intranet Web sites to better understand the users of pathology Web sites. We reviewed selected departmental pathology Web sites and analyzed their content and functionality. Our institution's departmental pathology Web sites were modified to enable detailed information to be stored regarding users and usage patterns, and that information was analyzed. We demonstrate considerable heterogeneity in departmental Web sites, with many sites lacking basic content and search features. In addition, we demonstrate that increasing the traffic of a department's informational Web sites may result in reduced phone inquiries to the laboratory. We propose recommendations for pathology department Web sites to maximize promotion of a department's mission. A departmental pathology Web site is an essential communication tool for all pathology departments, and attention to the users and content of the site can have operational impact.
Web-TCGA: an online platform for integrated analysis of molecular cancer data sets.
Deng, Mario; Brägelmann, Johannes; Schultze, Joachim L; Perner, Sven
2016-02-06
The Cancer Genome Atlas (TCGA) is a pool of molecular data sets publicly accessible and freely available to cancer researchers anywhere around the world. However, widespread use is limited, since advanced knowledge of statistics and statistical software is required. In order to improve accessibility we created Web-TCGA, a web-based, freely accessible online tool, which can also be run in a private instance, for integrated analysis of molecular cancer data sets provided by TCGA. In contrast to already available tools, Web-TCGA utilizes different methods for analysis and visualization of TCGA data, allowing users to generate global molecular profiles across different cancer entities simultaneously. In addition to global molecular profiles, Web-TCGA offers highly detailed gene- and tumor-entity-centric analysis by providing interactive tables and views. As a supplement to other already available tools, such as cBioPortal (Sci Signal 6:pl1, 2013, Cancer Discov 2:401-4, 2012), Web-TCGA offers an analysis service, which does not require any installation or configuration, for molecular data sets available at the TCGA. Individual processing requests (queries) are generated by the user for mutation, methylation, expression and copy number variation (CNV) analyses. The user can focus analyses on results from single genes and cancer entities or perform a global analysis (multiple cancer entities and genes simultaneously).
NASA Astrophysics Data System (ADS)
Suchacka, Grazyna
2005-02-01
The paper concerns a new research area: Quality of Web Service (QoWS). The need for QoWS is motivated by the still-growing number of Internet users, by the steady development and diversification of Web services, and especially by the popularization of e-commerce applications. The goal of the paper is a critical analysis of the literature on scheduling algorithms for e-commerce Web servers. The paper characterizes factors affecting the load of Web servers and discusses ways of improving their efficiency. Crucial QoWS requirements of a business Web server are identified: serving requests before their individual deadlines, supporting user session integrity, supporting different classes of users, and minimizing the number of rejected requests. It is argued that meeting these requirements and implementing them in an admission control (AC) and scheduling algorithm for the business Web server is crucial to the functioning of e-commerce Web sites and the revenue they generate. The paper presents the results of the literature analysis and discusses algorithms that implement these important QoWS requirements. The analysis showed that very few algorithms take all of the above requirements into consideration and that there is a need for an algorithm implementing them.
Analysis of governmental Web sites on food safety issues: a global perspective.
Namkung, Young; Almanza, Barbara A
2006-10-01
Despite a growing concern over food safety issues, as well as a growing dependence on the Internet as a source of information, little research has been done to examine the presence and relevance of food safety-related information on Web sites. The study reported here conducted Web site analysis in order to examine the current operational status of governmental Web sites on food safety issues. The study also evaluated Web site usability, especially information dimensionalities such as utility, currency, and relevance of content, from the perspective of the English-speaking consumer. Results showed that out of 192 World Health Organization members, 111 countries operated governmental Web sites that provide information about food safety issues. Among 171 searchable Web sites from the 111 countries, 123 Web sites (71.9 percent) were accessible, and 81 of those 123 (65.9 percent) were available in English. The majority of Web sites offered search engine tools and related links for more information, but their availability and utility were limited. In terms of content, 69.9 percent of Web sites offered information on foodborne-disease outbreaks, compared with 31.5 percent that had travel- and health-related information.
Testing Web Applications with Mutation Analysis
ERIC Educational Resources Information Center
Praphamontripong, Upsorn
2017-01-01
Web application software uses new technologies that have novel methods for integration and state maintenance, amounting to new control flow mechanisms and new variable scoping. While modern web development technologies enhance the capabilities of web applications, they introduce challenges that current testing techniques do not adequately test…
WebMeV | Informatics Technology for Cancer Research (ITCR)
Web MeV (Multiple-experiment Viewer) is a web/cloud-based tool for genomic data analysis. Web MeV is being built to meet the challenge of exploring large public genomic data sets with an intuitive graphical interface providing access to state-of-the-art analytical tools.
Statistics, Structures & Satisfied Customers: Using Web Log Data to Improve Site Performance.
ERIC Educational Resources Information Center
Peacock, Darren
This paper explores some of the ways in which the National Museum of Australia is using Web analysis tools to shape its future directions in the delivery of online services. In particular, it explores the potential of quantitative analysis, based on Web server log data, to convert these ephemeral traces of user experience into a strategic…
Online interactive analysis of protein structure ensembles with Bio3D-web.
Skjærven, Lars; Jariwala, Shashank; Yao, Xin-Qiu; Grant, Barry J
2016-11-15
Bio3D-web is an online application for analyzing the sequence, structure and conformational heterogeneity of protein families. Major functionality is provided for identifying protein structure sets for analysis, their alignment and refined structure superposition, sequence and structure conservation analysis, mapping and clustering of conformations and the quantitative comparison of their predicted structural dynamics. Bio3D-web is based on the Bio3D and Shiny R packages. All major browsers are supported and full source code is available under a GPL2 license from http://thegrantlab.org/bio3d-web. Contact: bjgrant@umich.edu or lars.skjarven@uib.no. © The Author 2016. Published by Oxford University Press.
2013-01-01
Background Surrogate variable analysis (SVA) is a powerful method to identify, estimate, and utilize the components of gene expression heterogeneity due to unknown and/or unmeasured technical, genetic, environmental, or demographic factors. These sources of heterogeneity are common in gene expression studies, and failing to incorporate them into the analysis can obscure results. Using SVA increases the biological accuracy and reproducibility of gene expression studies by identifying these sources of heterogeneity and correctly accounting for them in the analysis. Results Here we have developed a web application called SVAw (Surrogate Variable Analysis Web app) that provides a user-friendly interface for SVA analyses of genome-wide expression studies. The software has been developed based on the open-source Bioconductor SVA package. In our software, we have extended the SVA program's functionality in three respects: (i) SVAw performs a fully automated and user-friendly analysis workflow; (ii) it calculates probe/gene statistics for both pre- and post-SVA analysis and provides a table of results for the regression of gene expression on the primary variable of interest before and after correcting for surrogate variables; and (iii) it generates a comprehensive report file, including a graphical comparison of the outcome for the user. Conclusions SVAw is a freely accessible web server solution for surrogate variable analysis of high-throughput datasets that facilitates removing unwanted and unknown sources of variation. It is freely available for use at http://psychiatry.igm.jhmi.edu/sva. The executable packages for both the web and standalone applications and installation instructions can be downloaded from our web site. PMID:23497726
Does your web site draw new patients?
Wallin, Wendy S
2009-11-01
The absence of scientific data forces orthodontists to guess at how best to design Internet sites that persuade prospective patients to call for appointments. This study was conducted to identify the Web-site factors that lead prospective patients to make appointments or, conversely, to reject a practice. Ten participants actively looking online for an orthodontist were recruited to participate. They reviewed 64 orthodontic Web sites in their geographic areas and rated their likelihood of calling each practice for an appointment. The sessions were videotaped. Analysis of participant comments, navigation patterns, and ratings suggested 25 distinguishing factors. Statistical analysis showed 10 Web-site characteristics that predict the success of an orthodontic Web site in attracting new patients.
A Design Analysis Model for Developing World Wide Web Sites.
ERIC Educational Resources Information Center
Ma, Yan
2002-01-01
Examines the relationship between and among designers, text, and users of the Galter Health Sciences Library Web site at Northwestern University by applying reader-response criticism. Highlights include Web site design; comparison of designers' intentions with the actual organization of knowledge on the Web site; and comparison of designers' intentions…
Information Architecture for Bilingual Web Sites.
ERIC Educational Resources Information Center
Cunliffe, Daniel; Jones, Helen; Jarvis, Melanie; Egan, Kevin; Huws, Rhian; Munro, Sian
2002-01-01
Discusses creating an information architecture for a bilingual Web site and reports work in progress on the development of a content-based bilingual Web site to facilitate shared resources between speech and language therapists. Considers a structural analysis of existing bilingual Web designs and explains a card-sorting activity conducted with…
SDI-based business processes: A territorial analysis web information system in Spain
NASA Astrophysics Data System (ADS)
Béjar, Rubén; Latre, Miguel Á.; Lopez-Pellicer, Francisco J.; Nogueras-Iso, Javier; Zarazaga-Soria, F. J.; Muro-Medrano, Pedro R.
2012-09-01
Spatial Data Infrastructures (SDIs) provide access to geospatial data and operations through interoperable Web services. These data and operations can be chained to set up specialized geospatial business processes, and these processes can give support to different applications. End users can benefit from these applications, while experts can integrate the Web services into their own business processes and developments. This paper presents an SDI-based territorial analysis Web information system for Spain, which gives access to land cover, topography and elevation data, as well as to a number of interoperable geospatial operations by means of a Web Processing Service (WPS). Several examples illustrate how different territorial analysis business processes are supported. The system has been established by the Spanish National SDI (Infraestructura de Datos Espaciales de España, IDEE) both as an experimental platform for geoscientists and geoinformation system developers, and as a mechanism to contribute to Spanish citizens' knowledge about their territory.
Utilizing Social Bookmarking Tag Space for Web Content Discovery: A Social Network Analysis Approach
ERIC Educational Resources Information Center
Wei, Wei
2010-01-01
Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based…
Evaluating Web-Based Nursing Education's Effects: A Systematic Review and Meta-Analysis.
Kang, Jiwon; Seomun, GyeongAe
2017-09-01
This systematic review and meta-analysis investigated whether using web-based nursing educational programs increases a participant's knowledge and clinical performance. We performed a meta-analysis of studies published between January 2000 and July 2016 and identified through RISS, CINAHL, ProQuest Central, Embase, the Cochrane Library, and PubMed. Eleven studies were eligible for inclusion in this analysis. The results of the meta-analysis demonstrated significant differences not only for the overall effect but also specifically for blended programs and short (2 weeks or 4 weeks) intervention periods. To present more evidence supporting the effectiveness of web-based nursing educational programs, further research is warranted.
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
The presence of English and Spanish dyslexia in the Web
NASA Astrophysics Data System (ADS)
Rello, Luz; Baeza-Yates, Ricardo
2012-09-01
In this study we present a lower bound of the prevalence of dyslexia in the Web for English and Spanish. On the basis of analysis of corpora written by dyslexic people, we propose a classification of the different kinds of dyslexic errors. A representative data set of dyslexic words is used to calculate this lower bound in web pages containing English and Spanish dyslexic errors. We also present an analysis of dyslexic errors in major Internet domains, social media sites, and throughout English- and Spanish-speaking countries. To show the independence of our estimations from the presence of other kinds of errors, we compare them with the overall lexical quality of the Web and with the error rate of noncorrected corpora. The presence of dyslexic errors in the Web motivates work in web accessibility for dyslexic users.
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry already has more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
Maloney, Stephen; Haas, Romi; Keating, Jenny L; Molloy, Elizabeth; Jolly, Brian; Sims, Jane; Morgan, Prue; Haines, Terry
2012-04-02
Background The introduction of Web-based education and open universities has seen an increase in access to professional development within the health professional education marketplace. Economic efficiencies of Web-based education and traditional face-to-face educational approaches have not been compared under randomized controlled trial conditions. Objective To compare costs and effects of Web-based and face-to-face short courses in falls prevention education for health professionals. Methods We designed two short courses to improve the clinical performance of health professionals in exercise prescription for falls prevention. One was developed for delivery in face-to-face mode and the other for online learning. Data were collected on learning outcomes including participation, satisfaction, knowledge acquisition, and change in practice, and combined with costs, savings, and benefits, to enable a break-even analysis from the perspective of the provider, cost-effectiveness analysis from the perspective of the health service, and cost-benefit analysis from the perspective of the participant. Results Face-to-face and Web-based delivery modalities produced comparable outcomes for participation, satisfaction, knowledge acquisition, and change in practice. Break-even analysis identified the Web-based educational approach to be robustly superior to face-to-face education, requiring a lower number of enrollments for the program to reach its break-even point. Cost-effectiveness analyses from the perspective of the health service and cost-benefit analysis from the perspective of the participant favored face-to-face education, although the outcomes were contingent on the sensitivity analysis applied (eg, the fee structure used). Conclusions The Web-based educational approach was clearly more efficient from the perspective of the education provider. In the presence of relatively equivocal results for comparisons from other stakeholder perspectives, it is likely that providers would prefer to deliver education via a Web-based medium. Trial Registration Australian New Zealand Clinical Trials Registry (ACTRN): 12610000135011; http://www.anzctr.org.au/trial_view.aspx?id=335135 (Archived by WebCite at http://www.webcitation.org/668POww4L) PMID:22469659
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
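The NEMA integral uniformity figure being compared above has a simple closed form, IU = 100 × (max − min) / (max + min), computed over the pixels of the chosen field of view. The sketch below evaluates it on a synthetic flood image; the NEMA NU 1 nine-point smoothing and FOV masking steps are omitted for brevity, and the count level is invented.

```python
import numpy as np

def integral_uniformity(counts):
    """NEMA integral uniformity in percent: 100 * (max - min) / (max + min).

    `counts` is a 2-D array of flood-image pixel counts already restricted
    to the FOV of interest (useful FOV or central FOV). The full NEMA NU 1
    procedure also applies nine-point smoothing first; omitted here.
    """
    hi, lo = counts.max(), counts.min()
    return 100.0 * (hi - lo) / (hi + lo)

# Illustrative flood image: Poisson counts around 10,000 per pixel.
rng = np.random.default_rng(0)
flood = rng.poisson(10_000, size=(64, 64)).astype(float)
print(f"IU = {integral_uniformity(flood):.2f}%")
```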
MetaboAnalystR: an R package for flexible and reproducible analysis of metabolomics data.
Chong, Jasmine; Xia, Jianguo
2018-06-28
The MetaboAnalyst web application has been widely used for metabolomics data analysis and interpretation. Despite its user-friendliness, the web interface has inherent limitations (especially for advanced users) with regard to flexibility in creating customized workflows, support for reproducible analysis, and capacity for dealing with large data. To address these limitations, we have developed a companion R package (MetaboAnalystR) based on the R code base of the web server. The package has been thoroughly tested to ensure that the same R commands will produce identical results from both interfaces. MetaboAnalystR complements the MetaboAnalyst web server to facilitate transparent, flexible and reproducible analysis of metabolomics data. MetaboAnalystR is freely available from https://github.com/xia-lab/MetaboAnalystR. Supplementary data are available at Bioinformatics online.
iSeq: Web-Based RNA-seq Data Analysis and Visualization.
Zhang, Chao; Fan, Caoqi; Gan, Jingbo; Zhu, Ping; Kong, Lei; Li, Cheng
2018-01-01
Transcriptome sequencing (RNA-seq) is becoming a standard experimental methodology for genome-wide characterization and quantification of transcripts at single base-pair resolution. However, downstream analysis of massive amount of sequencing data can be prohibitively technical for wet-lab researchers. A functionally integrated and user-friendly platform is required to meet this demand. Here, we present iSeq, an R-based Web server, for RNA-seq data analysis and visualization. iSeq is a streamlined Web-based R application under the Shiny framework, featuring a simple user interface and multiple data analysis modules. Users without programming and statistical skills can analyze their RNA-seq data and construct publication-level graphs through a standardized yet customizable analytical pipeline. iSeq is accessible via Web browsers on any operating system at http://iseq.cbi.pku.edu.cn .
DIANA-microT web server v5.0: service integration into miRNA functional analysis workflows.
Paraskevopoulou, Maria D; Georgakilas, Georgios; Kostoulas, Nikos; Vlachos, Ioannis S; Vergoulis, Thanasis; Reczko, Martin; Filippidis, Christos; Dalamagas, Theodore; Hatzigeorgiou, A G
2013-07-01
MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate gene expression through mRNA degradation and/or translation repression, affecting many biological processes. The DIANA-microT web server (http://www.microrna.gr/webServer) is dedicated to miRNA target prediction/functional analysis, and it has been widely used by the scientific community since its initial launch in 2009. DIANA-microT v5.0, the new version of the microT server, has been significantly enhanced with an improved target prediction algorithm, DIANA-microT-CDS. It has been updated to incorporate miRBase version 18 and Ensembl version 69. The in silico-predicted miRNA–gene interactions in Homo sapiens, Mus musculus, Drosophila melanogaster and Caenorhabditis elegans exceed 11 million in total. The web server was completely redesigned to host a series of sophisticated workflows, which can be used directly from the on-line web interface, enabling users without the necessary bioinformatics infrastructure to perform advanced multi-step functional miRNA analyses. For instance, one available pipeline performs miRNA target prediction using different thresholds and meta-analysis statistics, followed by pathway enrichment analysis. DIANA-microT web server v5.0 also supports complete integration with the Taverna Workflow Management System (WMS), using the in-house developed DIANA-Taverna Plug-in. This plug-in provides ready-to-use modules for miRNA target prediction and functional analysis, which can be used to form advanced high-throughput analysis pipelines. PMID:23680784
Analysis of Elementary School Web Sites
ERIC Educational Resources Information Center
Hartshorne, Richard; Friedman, Adam; Algozzine, Bob; Kaur, Daljit
2008-01-01
While researchers have studied the use and value of educational software for many years, study of school Web sites and/or their effectiveness is limited. In this investigation, we identified goals and functions of school Web sites and used the foundations of effective Web site design to develop an evaluation checklist. We then applied these…
Characteristics of Food Industry Web Sites and "Advergames" Targeting Children
ERIC Educational Resources Information Center
Culp, Jennifer; Bell, Robert A.; Cassady, Diana
2010-01-01
Objective: To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. Design: A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A…
Library Web Sites in Pakistan: An Analysis of Content
ERIC Educational Resources Information Center
Qutab, Saima; Mahmood, Khalid
2009-01-01
Purpose: The purpose of this paper is to investigate library web sites in Pakistan, to analyse their content and navigational strengths and weaknesses and to give recommendations for developing better web sites and quality assessment studies. Design/methodology/approach: Survey of web sites of 52 academic, special, public and national libraries in…
Graph Structure in Three National Academic Webs: Power Laws with Anomalies.
ERIC Educational Resources Information Center
Thelwall, Mike; Wilkinson, David
2003-01-01
Explains how the Web can be modeled as a mathematical graph and analyzes the graph structures of three national university publicly indexable Web sites from Australia, New Zealand, and the United Kingdom. Topics include commercial search engines and academic Web link research; method-analysis environment and data sets; and power laws. (LRW)
McCary, Matthew A; Mores, Robin; Farfan, Monica A; Wise, David H
2016-03-01
Although invasive plants are a major source of terrestrial ecosystem degradation worldwide, it remains unclear which trophic levels above the base of the food web are most vulnerable to plant invasions. We performed a meta-analysis of 38 independent studies from 32 papers to examine how invasive plants alter major groupings of primary and secondary consumers in three globally distributed ecosystems: wetlands, woodlands and grasslands. Within each ecosystem we examined if green (grazing) food webs are more sensitive to plant invasions compared to brown (detrital) food webs. Invasive plants have strong negative effects on primary consumers (detritivores, bacterivores, fungivores, and/or herbivores) in woodlands and wetlands, which become less abundant in both green and brown food webs in woodlands and green webs in wetlands. Plant invasions increased abundances of secondary consumers (predators and/or parasitoids) only in woodland brown food webs and green webs in wetlands. Effects of invasive plants on grazing and detrital food webs clearly differed between ecosystems. Overall, invasive plants had the most pronounced effects on the trophic structure of wetlands and woodlands, but caused no detectable changes to grassland trophic structure. © 2016 John Wiley & Sons Ltd/CNRS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.
2008-05-04
This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology and multiple whole-genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.
NASA Astrophysics Data System (ADS)
Wibonele, Kasanda J.; Zhang, Yanqing
2002-03-01
A web data mining system using granular computing and ASP programming is proposed. This is a web-based application that allows web users to submit survey data for many different companies. The survey is a collection of questions that will help these companies develop and improve their business and customer service with their clients by analyzing survey data. This web application allows users to submit data from anywhere. All the survey data are collected into a database for further analysis. An administrator of this web application can log in to the system and view all the data submitted. This web application resides on a web server, and the database resides on the MS SQL server.
Jung, Tae-Sung; Yeo, Hock Chuan; Reddy, Satty G; Cho, Wan-Sup; Lee, Dong-Yup
2009-11-01
WEbcoli is a Web application for in silico design, analysis and engineering of Escherichia coli metabolism. It is devised and implemented using advanced web technologies, thereby leading to enhanced usability and dynamic web accessibility. As a main feature, the WEbcoli system provides a user-friendly rich web interface, allowing users to virtually design and synthesize mutant strains derived from the genome-scale wild-type E. coli model and to customize pathways of interest through a graph editor. In addition, constraints-based flux analysis can be conducted for quantifying metabolic fluxes and characterizing the physiological and metabolic states under various genetic and/or environmental conditions. WEbcoli is freely accessible at http://webcoli.org. cheld@nus.edu.sg.
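Constraints-based flux analysis of the kind WEbcoli performs reduces to a linear program: maximize an objective flux subject to the steady-state constraint S·v = 0 and bounds on each flux. The sketch below solves a three-reaction toy network with scipy; the network, bounds and objective are invented, and this is not WEbcoli's own solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy pathway: uptake -> conversion -> biomass. Columns are the three
# reactions; rows are the internal metabolites A and B.
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by biomass reaction
])
bounds = [(0, 10), (0, 10), (0, None)]   # flux bounds per reaction
c = np.array([0, 0, 1])                  # objective: maximize biomass flux

# linprog minimizes, so negate c; S v = 0 enforces the steady state.
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)          # expected: [10, 10, 10]
```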
NGNP Data Management and Analysis System Analysis and Web Delivery Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cynthia D. Gentillon
2011-09-01
Projects for the Very High Temperature Reactor (VHTR) Technology Development Office provide data in support of Nuclear Regulatory Commission licensing of the very high temperature reactor. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high-temperature and high-fluence environments. The NGNP Data Management and Analysis System (NDMAS) at the Idaho National Laboratory has been established to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and for data analysis to identify useful relationships among the measured quantities. The capabilities are described from the perspective of NDMAS users, starting with those who just view experimental data and analytical results on the INL NDMAS web portal. Web display and delivery capabilities are described in detail, and the current web pages that show Advanced Gas Reactor, Advanced Graphite Capsule, and High Temperature Materials test results are itemized. Capabilities available to NDMAS developers are more extensive, and are described using a second series of examples. Much of the data analysis effort focuses on understanding how thermocouple measurements relate to simulated temperatures and other experimental parameters. Statistical control charts and correlation monitoring provide an ongoing assessment of instrument accuracy. Data analysis capabilities are virtually unlimited for those who use the NDMAS web data download capabilities and the analysis software of their choice. Overall, the NDMAS provides convenient data analysis and web delivery capabilities for studying a very large and rapidly increasing database of well-documented, pedigreed data.
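The statistical control charts mentioned above come down to flagging readings that fall outside limits of mean ± k standard deviations estimated from baseline data. A minimal sketch with invented thermocouple numbers (the abstract does not show NDMAS's actual implementation):

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Shewhart-style control limits: mean +/- k standard deviations."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def out_of_control(readings, lo, hi):
    """Indices of readings outside the control limits."""
    return [i for i, x in enumerate(readings) if not lo <= x <= hi]

# Illustrative thermocouple series: a stable baseline, then a drifting sensor.
rng = np.random.default_rng(1)
baseline = rng.normal(850.0, 2.0, size=100)   # degrees C, made up
new = rng.normal(850.0, 2.0, size=20)
new[15:] += 12.0                              # simulated drift
lo, hi = control_limits(baseline)
print("flagged readings:", out_of_control(new, lo, hi))
```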
Unfreezing the behaviour of two orb spiders.
Zschokke, S; Vollrath, F
1995-12-01
Spiders' webs reflect the builder's behaviour patterns; yet there are aspects of the construction behaviour that cannot be "read" from the geometry of the finished web alone. Using computerised image analysis we developed an automatic surveillance method to track a spider's path during web-building. We thus collected data on two orb-weaving spiders -- the cribellate Uloborus walckenaerius and the ecribellate Araneus diadematus -- covering web geometry, movement pattern and time allocation. Representatives of these two species built webs of similar geometry, but they used different movement patterns both spatially (which we describe qualitatively) and temporally (which we analyse quantitatively). Most importantly, the temporal analysis showed that the two spiders differed significantly in some but not all web-building stages; from this we deduce that Uloborus -- unlike Araneus -- was constrained by the speed of silk production during construction of its capture spiral but not its auxiliary spiral.
Biological Web Service Repositories Review
Urdidiales‐Nieto, David; Navas‐Delgado, Ismael
2016-01-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and has been used as the measure of the dynamism of these repositories. PMID:27783459
3Drefine: an interactive web server for efficient protein structure refinement
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-01-01
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization on the optimized model, using composite physics- and knowledge-based force fields, for efficient protein structure refinement. The method has been extensively evaluated in blind CASP experiments as well as on large-scale and diverse benchmark datasets, and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through text or file input submission, with email notification and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371
The Role of Virtual Reference in Library Web Site Design: A Qualitative Source for Usage Data
ERIC Educational Resources Information Center
Powers, Amanda Clay; Shedd, Julie; Hill, Clay
2011-01-01
Gathering qualitative information about usage behavior of library Web sites is a time-consuming process requiring the active participation of patron communities. Libraries that collect virtual reference transcripts, however, hold valuable data regarding how the library Web site is used that could benefit Web designers. An analysis of virtual…
Can They Plan to Teach with Web 2.0? Future Teachers' Potential Use of the Emerging Web
ERIC Educational Resources Information Center
Kale, Ugur
2014-01-01
This study examined pre-service teachers' potential use of Web 2.0 technologies for teaching. A coding scheme incorporating the Technological Pedagogical Content Knowledge (TPACK) framework guided the analysis of pre-service teachers' Web 2.0-enhanced learning activity descriptions. The results indicated that while pre-service teachers were able…
ERIC Educational Resources Information Center
Dodge, Lucy
The report describes San Jose College's (California) two Web site management and design programs, and provides employment information and job market analysis for the field. The College's Web Site Administration and Web Application Solutions programs offer classes designed to give students the necessary skills in administering a Web site and in…
GSCALite: A Web Server for Gene Set Cancer Analysis.
Liu, Chun-Jie; Hu, Fei-Fei; Xia, Mengxuan; Han, Leng; Zhang, Qiong; Guo, An-Yuan
2018-05-22
The availability of cancer genomic data makes it possible to analyze genes related to cancer. Cancer is usually the result of a set of genes, and the signal of a single gene can be masked by background noise. Here, we present a web server named Gene Set Cancer Analysis (GSCALite) to analyze a set of genes in cancers with the following functional modules: (i) differential expression in tumor vs normal, and the survival analysis; (ii) genomic variations and their survival analysis; (iii) gene expression-associated cancer pathway activity; (iv) miRNA regulatory network for genes; (v) drug sensitivity for genes; (vi) normal tissue expression and eQTL for genes. GSCALite is a user-friendly web server for dynamic analysis and visualization of gene sets in cancer and drug sensitivity correlation, which will be of broad utility to cancer researchers. GSCALite is available at http://bioinfo.life.hust.edu.cn/web/GSCALite/. guoay@hust.edu.cn or zhangqiong@hust.edu.cn. Supplementary data are available at Bioinformatics online.
Unipept web services for metaproteomics analysis.
Mesuere, Bart; Willems, Toon; Van der Jeugt, Felix; Devreese, Bart; Vandamme, Peter; Dawyndt, Peter
2016-06-01
Unipept is an open source web application that is designed for metaproteomics analysis with a focus on interactive data visualization. It is underpinned by a fast index built from UniProtKB and the NCBI taxonomy that enables quick retrieval of all UniProt entries in which a given tryptic peptide occurs. Unipept version 2.4 introduced web services that provide programmatic access to the metaproteomics analysis features. This enables integration of Unipept functionality in custom applications and data processing pipelines. The web services are freely available at http://api.unipept.ugent.be and are open sourced under the MIT license. Unipept@ugent.be Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
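Because the Unipept web services are public, a call can be sketched directly; the example below queries the documented v1 pept2lca endpoint for the lowest common ancestor of one tryptic peptide. The peptide is arbitrary, and the response field names follow the API documentation at the time of writing.

```python
import requests

resp = requests.get(
    "http://api.unipept.ugent.be/api/v1/pept2lca.json",
    params={"input[]": "AIPQLEVARPADAYETAEAYR",  # example tryptic peptide
            "equate_il": "true",                 # treat I and L as equal
            "names": "true"},                    # include taxon names
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json():
    print(hit["peptide"], "->", hit.get("taxon_name"), f'(taxon {hit["taxon_id"]})')
```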
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
NASA Astrophysics Data System (ADS)
Hu, H.; Ge, Y. J.
2013-11-01
As social networking and network socialisation bring more text information and social relationships into our daily lives, the question of whether big data can be fully used to study the phenomena and disciplines of the natural sciences has prompted many specialists and scholars to innovate in their research. Although politics has been integrally involved in hyperlinked-world issues since the 1990s, and the automatic assembly of different geospatial web services and distributed geospatial information systems using service chaining has recently been explored and built, the collection and visualisation of geo-event data have always faced the bottleneck of traditional manual analysis because of the sensitivity, complexity, relativity, timeliness and unexpectedness of political events. Based on the Heritrix framework and the analysis of web-based text, the word frequency, sentiment tendency and dissemination path of the Huangyan Island incident are studied here by combining web crawler technology with text analysis methods. The results indicate that the tag cloud, frequency map, attitude pie charts, individual mention ratios and dissemination flow graph based on the collected and processed data not only highlight the subject and theme vocabularies of related topics but also reveal certain issues and problems behind them. Being able to express the time-space relationships of text information and to trace the dissemination of information regarding geo-events, text analysis of network information based on focused web crawler technology can be a tool for understanding the formation and diffusion of web-based public opinion in political events.
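Heritrix itself is a Java crawler, but the frequency-analysis step described above can be sketched in a few lines of Python; the URL is illustrative and the tag stripping is deliberately crude.

```python
import re
from collections import Counter
from urllib.request import urlopen

def word_frequencies(url, top=20):
    """Fetch one page and count word occurrences -- a toy stand-in for the
    crawl-then-count pipeline described in the abstract."""
    html = urlopen(url, timeout=30).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html)              # crude tag stripping
    words = re.findall(r"[a-zA-Z]{3,}", text.lower())
    return Counter(words).most_common(top)

for word, n in word_frequencies("https://example.org/news/article.html"):
    print(f"{n:6d}  {word}")
```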
CovalentDock Cloud: a web server for automated covalent docking.
Ouyang, Xuchang; Zhou, Shuo; Ge, Zemei; Li, Runtao; Kwoh, Chee Keong
2013-07-01
Covalent binding is an important mechanism for many drugs to gain their function. We developed a computational algorithm to model this chemical event and extended it to a web server, the CovalentDock Cloud, to make it accessible directly online without any local installation or configuration. It provides a simple yet user-friendly web interface to perform covalent docking experiments and analysis online. The web server accepts the structures of both the ligand and the receptor, uploaded by the user or retrieved from online databases with a valid access id. It identifies the potential covalent binding patterns, carries out the covalent docking experiments and provides visualization of the results for user analysis. This web server is free and open to all users at http://docking.sce.ntu.edu.sg/. PMID:23677616
NASA Astrophysics Data System (ADS)
Utanto, Yuli; Widhanarto, Ghanis Putra; Maretta, Yoris Adi
2017-03-01
This study aims to develop a web-based portfolio model and to reveal the effectiveness of the new model through experiments conducted with respondents in the department of curriculum and educational technology, FIP Unnes. In particular, the objectives to be achieved through this development research were: (1) describing the process of implementing the web-based portfolio model; and (2) assessing the effectiveness of the web-based portfolio model for the final task, especially in Web-Based Learning courses. This is development research; Borg and Gall (2008: 589) state that "educational research and development (R & D) is a process used to develop and validate educational products". The series of research and development steps carried out began with exploration and conceptual studies, followed by testing and evaluation, and then implementation. For the data analysis, the techniques used were simple descriptive analysis and analysis of learning completeness, followed by prerequisite tests for normality and homogeneity before the t-test. Based on the data analysis, it was concluded that: (1) a web-based portfolio model can be applied to the learning process in higher education; and (2) the web-based portfolio model is effective, as in the field data from the large-group (field) trial the number of respondents who reached learning mastery (a score of 60 and above) was 24 (92.3%). The conclusion of this study is that a web-based portfolio model is effective. As an implication of this model-development research, future researchers are expected to use the guidelines of the development model from the research already conducted to develop it for other subjects.
Vinagre, Catarina; Mendonça, Vanessa; Narciso, Luís; Madeira, Carolina
2015-09-01
The characterization of food web structure, energy pathways and trophic linkages is essential for the understanding of ecosystem functioning. Isotopic analysis was performed on food web components of the rocky intertidal ecosystem in four sites along the Portuguese west coast. The aim was to 1) determine the general food web structure, 2) estimate the trophic level of the dominant organisms and 3) track the incorporation of organic carbon of different origins in the diet of the top consumers. In this food web, fish are top consumers, followed by shrimp. Anemones and gastropods are intermediate consumers, while bivalves and zooplankton are primary consumers. Macroalgae Bifurcaria bifurcata, Ulva lactuca, Fucus vesiculosus, Codium sp. and phytoplankton are the dominant producers. Two energy pathways were identified, pelagic and benthic. Reliance on the benthic energy pathway was high for many of the consumers but not as high as previously observed in subtidal coastal food webs. The maximum TL was 3.3, which is indicative of a relatively short food web. It is argued that the diet of top consumers relies directly on low levels of the food web to a considerable extent, instead of on intermediate levels, which shortens the trophic length of the food web. Copyright © 2015 Elsevier Ltd. All rights reserved.
Barnum, Thomas R; Drake, John M; Colón-Gaud, Checo; Rugenski, Amanda T; Frauendorf, Therese C; Connelly, Scott; Kilham, Susan S; Whiles, Matt R; Lips, Karen R; Pringle, Catherine M
2015-08-01
Species losses are predicted to simplify food web structure, and disease-driven amphibian declines in Central America offer an opportunity to test this prediction. Assessment of insect community composition, combined with gut content analyses, was used to generate periphyton-insect food webs for a Panamanian stream, both pre- and post-amphibian decline. We then used network analysis to assess the effects of amphibian declines on food web structure. Although 48% of consumer taxa, including many insect taxa, were lost between pre- and post-amphibian decline sampling dates, connectance declined by less than 3%. We then quantified the resilience of food web structure by calculating the number of expected cascading extirpations from the loss of tadpoles. This analysis showed that the expected effects of species loss on connectance and linkage density were more than 60% and 40% greater, respectively, than the effects actually observed. Instead, new trophic linkages in the post-decline food web reorganized the food web topology, changing the identity of "hub" taxa and consequently reducing the effects of amphibian declines on many food web attributes. Resilience of food web attributes was driven by a combination of changes in consumer diets, particularly those of insect predators, as well as the appearance of generalist insect consumers, suggesting that food web structure is maintained by factors independent of the original trophic linkages.
NASA Astrophysics Data System (ADS)
Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian
2016-04-01
Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.
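Since the encapsulation strategy above targets WPS-style interfaces, a minimal sketch may help: the WPS operations named in the abstract (GetCapabilities, DescribeProcess) are plain HTTP requests following the OGC key-value-pair convention. The endpoint URL below is a placeholder, not a real model service.

    import requests

    # Hypothetical WPS endpoint hosting an encapsulated geo-analysis model.
    WPS_URL = "http://example.org/wps"

    # Standard OGC KVP parameters for a WPS 1.0.0 GetCapabilities request.
    params = {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"}
    resp = requests.get(WPS_URL, params=params, timeout=30)
    resp.raise_for_status()

    # The response is an XML capabilities document; a real client would parse
    # the ows:Identifier elements to list the processes the server offers.
    print(resp.text[:500])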
SeWeR: a customizable and integrated dynamic HTML interface to bioinformatics services.
Basu, M K
2001-06-01
Sequence analysis using Web Resources (SeWeR) is an integrated, Dynamic HTML (DHTML) interface to commonly used bioinformatics services available on the World Wide Web. It is highly customizable, extendable, platform neutral, completely server-independent and can be hosted as a web page as well as being used as stand-alone software running within a web browser.
ERIC Educational Resources Information Center
Palaigeorgiou, G.; Triantafyllakos, G.; Tsinakos, A.
2011-01-01
Following the increasing calls for a more skeptical analysis of web 2.0 and the empowerment of learners' voices in formulating upcoming technologies, this paper elaborates on the participatory design of a web learning environment. A total of 117 undergraduate students from two Greek Informatics Departments participated in 25 participatory design…
A cross disciplinary study of link decay and the effectiveness of mitigation techniques
2013-01-01
Background The dynamic, decentralized world-wide-web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. Results We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Universal Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Conclusion Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved. PMID:24266891
A cross disciplinary study of link decay and the effectiveness of mitigation techniques.
Hennessey, Jason; Ge, Steven
2013-01-01
The dynamic, decentralized world-wide-web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Universal Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved.
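The link-decay measurements described above rest on two operations that are easy to illustrate: checking whether a URL still resolves, and asking the Internet Archive whether a snapshot exists. The sketch below is not the authors' pipeline, only a minimal Python illustration; the example URL is hypothetical, and the Wayback availability endpoint is the Archive's public API.

    import requests

    def is_alive(url: str) -> bool:
        """Check whether a URL still resolves (HEAD request, following redirects)."""
        try:
            r = requests.head(url, allow_redirects=True, timeout=10)
            return r.status_code < 400
        except requests.RequestException:
            return False

    def wayback_snapshot(url: str):
        """Query the Internet Archive's availability API for an archived copy."""
        r = requests.get("https://archive.org/wayback/available",
                         params={"url": url}, timeout=10)
        snap = r.json().get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap else None

    for u in ["http://example.org/old-tool"]:  # hypothetical scientific URL
        print(u, "alive" if is_alive(u) else "dead", "| archive:", wayback_snapshot(u))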
Easy Web Interfaces to IDL Code for NSTX Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
W.M. Davis
Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, accounts, passwords, etc. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.
COMAN: a web server for comprehensive metatranscriptomics analysis.
Ni, Yueqiong; Li, Jun; Panagiotou, Gianni
2016-08-11
Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. CONCLUSIONS: COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to help researchers with less expertise in bioinformatics to answer microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.
Web-Based Trainer for Electrical Circuit Analysis
ERIC Educational Resources Information Center
Weyten, L.; Rombouts, P.; De Maeyer, J.
2009-01-01
A Web-based system for training electric circuit analysis is presented in this paper. It is centered on symbolic analysis techniques and it not only verifies the student's final answer, but it also tracks and coaches him/her through all steps of his/her reasoning path. The system mimics homework assignments, enhanced by immediate personalized…
The Development of a Web-Based Virtual Environment for Teaching Qualitative Analysis of Structures
ERIC Educational Resources Information Center
O'Dwyer, D. W.; Logan-Phelan, T. M.; O'Neill, E. A.
2007-01-01
The current paper describes the design and development of a qualitative analysis course and an interactive web-based teaching and assessment tool called VSE (virtual structural environment). The widespread reliance on structural analysis programs requires engineers to be able to verify computer output by carrying out qualitative analyses.…
Web 2.0 in the Professional LIS Literature: An Exploratory Analysis
ERIC Educational Resources Information Center
Aharony, Noa
2011-01-01
This paper presents a statistical descriptive analysis and a thorough content analysis of descriptors and journal titles extracted from the Library and Information Science Abstracts (LISA) database, focusing on the subject of Web 2.0 and its main applications: blog, wiki, social network and tags. The primary research questions include: whether the…
Gene Ontology-Based Analysis of Zebrafish Omics Data Using the Web Tool Comparative Gene Ontology.
Ebrahimie, Esmaeil; Fruzangohar, Mario; Moussavi Nik, Seyyed Hani; Newman, Morgan
2017-10-01
Gene Ontology (GO) analysis is a powerful tool in systems biology, which uses a defined nomenclature to annotate genes/proteins within three categories: "Molecular Function," "Biological Process," and "Cellular Component." GO analysis can assist in revealing functional mechanisms underlying observed patterns in transcriptomic, genomic, and proteomic data. The already extensive and increasing use of zebrafish for modeling genetic and other diseases highlights the need to develop a GO analytical tool for this organism. The web tool Comparative GO was originally developed for GO analysis of bacterial data in 2013 (www.comparativego.com). We have now upgraded and elaborated this web tool for analysis of zebrafish genetic data using GOs and annotations from the Gene Ontology Consortium.
minepath.org: a free interactive pathway analysis web server.
Koumakis, Lefteris; Roussos, Panos; Potamias, George
2017-07-03
MinePath (www.minepath.org) is a web-based platform that elaborates on, and radically extends, the identification of differentially expressed sub-paths in molecular pathways. Besides the network topology, the underlying MinePath algorithmic processes exploit exact gene-gene molecular relationships (e.g. activation, inhibition) and are able to identify differentially expressed pathway parts. Each pathway is decomposed into all its constituent sub-paths, which in turn are matched with corresponding gene expression profiles. The highly ranked, phenotype-inclined sub-paths are kept. Apart from the pathway analysis algorithm, the fundamental innovation of the MinePath web server concerns its advanced visualization and interactive capabilities. To our knowledge, this is the first pathway analysis server that introduces and offers visualization of the underlying active pathway regulatory mechanisms instead of just genes. Other features include live interaction, immediate visualization of functional sub-paths per phenotype and dynamic linked annotations for the engaged genes and molecular relations. The user can download not only the results but also the corresponding web viewer framework of the performed analysis. This feature provides the flexibility to immediately publish results without publishing source/expression data, and to get all the functionality of a web-based pathway analysis viewer. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Scientific Uses and Directions of SPDF Data Services
NASA Technical Reports Server (NTRS)
Fung, Shing
2007-01-01
From a science user's perspective, the multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project perform as a working and highly functional heliophysics virtual observatory. CDAWeb enables plots, listings and file downloads for current data across the boundaries of missions and instrument types (now including data from THEMIS and STEREO), while VSPO provides access to a wide range of distributed data sources. SSCWeb, HelioWeb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and query logic for most missions currently important to heliophysics science. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock as a unique value-added data product. To enable easier integrated use of our capabilities by developers and by the emerging heliophysics VxOs, our data and services are available through web-services-based APIs as well as through our direct user interfaces. SPDF has also now developed draft descriptions of its holdings in SPASE-compliant XML. In addition to showcasing recent enhancements to SPDF capabilities, we will use these systems and our experience in developing them: to demonstrate a few typical science use cases; to discuss key scope and design issues among users, service providers and end data providers; and to identify key areas where existing capabilities and effective interface design are still inadequate to meet community needs.
Modifying the Heliophysics Data Policy to Better Enable Heliophysics Research
NASA Technical Reports Server (NTRS)
Hayes, Jeffrey; Roberts, D. Aaron; Bredekamp, Joseph
2008-01-01
The Heliophysics (HP) Science Data Management Policy, adopted by HP in June 2007, has helped to provide a structure for the HP data lifecycle. It provides guidelines for Project Data Management Plans and related documents, initiates Resident Archives (RAs) to maintain data services after a mission ends, and outlines a route to the unification of data finding, access, and distribution through virtual observatories. Recently we have filled in missing pieces that assure more coherence and a home for the VxOs (through the 'Heliophysics Data and Model Consortium'), and provide greater clarity with respect to long-term archiving. In particular, the new policy, which has been vetted with many community members, details the 'Final Archives' that are to provide long-term data access. These are distinguished from RAs in that they provide little additional service beyond serving data, but critical to their success is that the final archival materials include calibrated data in useful formats such as one finds in CDAWeb and various ASCII or FITS archives. Having a clear goal for legacy products, to be detailed as part of the Mission Archive Plans presented at Senior Reviews, will help to avoid the situation so common in the past of having archival products that preserve bits well but not readily usable information. We hope to avoid the need for the large numbers of 'data upgrade' projects that have been necessary in recent years.
Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.
Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios
2008-02-01
The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.
Dongre, A R; Chacko, T V; Banu, S; Bhandary, S; Sahasrabudhe, R A; Philip, S; Deshmukh, P R
2010-11-01
In medical education, using the World Wide Web is a new approach for building the capacity of faculty. However, there is little information available on medical education researchers' needs and their collective learning outcomes in such on-line environments. Hence, the present study attempted: 1) to identify needs for capacity-building of fellows in a faculty development program on the topic of data analysis; and 2) to describe, analyze and understand the collective learning outcomes of the fellows during this need-based on-line session. The present research is based on quantitative (on-line survey for needs assessment) and qualitative (contents of e-mails exchanged in listserv discussion) data, which were generated during the October 2009 Mentoring and Learning (M-L) Web discussion on the topic of data analysis. The data sources were shared e-mail responses during the process of planning and executing the M-L Web discussion. Content analysis was undertaken and the categories of discussion were presented as a simple non-hierarchical typology which represents the collective learning of the project fellows. We identified the types of learning needs on the topic 'Analysis of Data' to be addressed for faculty development in the field of education research. This need-based M-L Web discussion could then facilitate collective learning on such topics as basic concepts in statistics, tests of significance, Likert scale analysis, bivariate correlation, simple regression analysis and content analysis of qualitative data. Steps like identifying the learning needs for an on-line M-L Web discussion, addressing the immediate needs of learners and creating a flexible reflective learning environment on the M-L Web facilitated the collective learning of the fellows on the topic of data analysis. Our outcomes can be useful in the design of on-line pedagogical strategies for supporting research in medical education.
Development of web-GIS system for analysis of georeferenced geophysical data
NASA Astrophysics Data System (ADS)
Okladnikov, I.; Gordov, E. P.; Titov, A. G.; Bogomolov, V. Y.; Genina, E.; Martynova, Y.; Shulgina, T. M.
2012-12-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes at various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. The system consists of four basic parts: a computational kernel developed using GNU Data Language (GDL); a set of PHP controllers run within a specialized web portal; JavaScript class libraries for development of typical components of the web-mapping application's graphical user interface (GUI) based on AJAX technology; and an archive of geophysical datasets. The computational kernel comprises a number of dedicated modules for querying and extraction of data, mathematical and statistical data analysis, visualization, and preparation of output files in geoTIFF and netCDF formats containing processing results. The specialized web portal consists of an Apache web server, the OGC-compliant GeoServer software, which is used as a base for presenting cartographical information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript libraries for graphical user interface development are based on the GeoExt library, combining the ExtJS framework and OpenLayers software. The archive of geophysical data consists of a number of structured environmental datasets represented by data files in netCDF, HDF, GRIB and ESRI Shapefile formats. Available for processing by the system are: two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA-Interim Reanalysis, the MRI/JMA APHRODITE Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern-Era Retrospective Analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climate models, and others. The system is already involved in the scientific research process; recently, it was successfully used for analysis of Siberian climate change and its regional impacts. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #07.514.114044), projects IV.31.1.5 and IV.31.2.7, RFBR grants #10-07-00547a and #11-05-01190a, and integrated project SB RAS #131.
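As a small illustration of the kind of operation such a computational kernel performs, the sketch below computes a seasonal mean field from a netCDF reanalysis file. It uses Python's xarray rather than the GDL kernel described above, and the file and variable names are assumptions for illustration only.

    import xarray as xr

    # Hypothetical reanalysis file containing a 2 m temperature field "t2m".
    ds = xr.open_dataset("ncep_reanalysis_t2m.nc")

    # Select the boreal summer (JJA) time steps via xarray's virtual
    # "time.season" coordinate, then average over time.
    summer = ds["t2m"].sel(time=ds["time.season"] == "JJA")
    mean_field = summer.mean(dim="time")

    # Write the result out as netCDF, mirroring the kernel's output formats.
    mean_field.to_netcdf("t2m_jja_mean.nc")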
3Drefine: an interactive web server for efficient protein structure refinement.
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-07-08
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization on the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
MAGMA: analysis of two-channel microarrays made easy.
Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph
2007-07-01
The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis
Guardia, Gabriela D. A.; Pires, Luís Ferreira; Vêncio, Ricardo Z. N.; Malmegrim, Kelen C. R.; de Farias, Cléver R. G.
2015-01-01
Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNASeq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis. PMID:26207740
A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis.
Guardia, Gabriela D A; Pires, Luís Ferreira; Vêncio, Ricardo Z N; Malmegrim, Kelen C R; de Farias, Cléver R G
2015-01-01
Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNASeq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis.
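To make the service-oriented workflow above concrete, here is a hedged sketch of invoking one RESTful analysis service over HTTP. The endpoint and payload fields are invented for illustration; the actual service interfaces are documented in the GEAS repository cited above.

    import requests

    # Hypothetical RESTful gene expression analysis service of the kind the
    # methodology produces; the route and parameters are assumptions.
    SERVICE_URL = "http://example.org/geas/normalize"

    payload = {"dataset": "GSE00000", "method": "quantile"}  # toy parameters
    resp = requests.post(SERVICE_URL, json=payload, timeout=60)
    resp.raise_for_status()

    # A typical REST service would return JSON describing the analysis result.
    result = resp.json()
    print(result.get("status"), result.get("output_url"))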
The aware toolbox for the detection of law infringements on web pages
NASA Astrophysics Data System (ADS)
Shahab, Asif; Kieninger, Thomas; Dengel, Andreas
2010-01-01
In the project Aware we aim to develop an automatic assistant for the detection of law infringements on web pages. The motivation for this project is that many authors of web pages are at some points infringing copyright or other laws, mostly without being aware of that fact, and are more and more often confronted with costly legal warnings. As the legal environment is constantly changing, an important requirement of Aware is that the domain knowledge can be maintained (and initially defined) by numerous legal experts remotely working without further assistance of the computer scientists. Consequently, the software platform was chosen to be a web-based generic toolbox that can be configured to suit individual analysis experts, definitions of analysis flow, information gathering and report generation. The report generated by the system summarizes all critical elements of a given web page and provides case specific hints to the page author and thus forms a new type of service. Regarding the analysis subsystems, Aware mainly builds on existing state-of-the-art technologies. Their usability has been evaluated for each intended task. In order to control the heterogeneous analysis components and to gather the information, a lightweight scripting shell has been developed. This paper describes the analysis technologies, ranging from text based information extraction, over optical character recognition and phonetic fuzzy string matching to a set of image analysis and retrieval tools; as well as the scripting language to define the analysis flow.
Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites
NASA Technical Reports Server (NTRS)
Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng
2007-01-01
This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.
ERIC Educational Resources Information Center
Chang, Chia-Jung; Liu, Chen-Chung; Shen, Yan-Jhih
2012-01-01
Collaborative web exploration, in which learners work together to explore the World Wide Web, has become a key learning activity in education contexts. Learners can use a shared computer with a shared display to explore the web together. However, such a shared-computer approach may limit active participation among learners. To address this issue,…
How Teachers Use and Manage Their Blogs? A Cluster Analysis of Teachers' Blogs in Taiwan
ERIC Educational Resources Information Center
Liu, Eric Zhi-Feng; Hou, Huei-Tse
2013-01-01
The development of Web 2.0 has ushered in a new set of web-based tools, including blogs. This study focused on how teachers use and manage their blogs. A sample of 165 teachers' blogs in Taiwan was analyzed by factor analysis, cluster analysis and qualitative content analysis. First, the teachers' blogs were analyzed according to six criteria…
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.
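As a toy illustration of the geometric idea (not the estimator derived in the paper): if essentially all animals within the web's radius are captured, density can be approximated as total captures divided by the web's area. The ring radii and capture counts below are invented.

    import math

    # Illustrative trapping-web data: traps arranged in concentric rings.
    ring_radii = [5, 10, 15, 20, 25]   # metres from the web centre
    captures   = [14, 11, 9, 8, 8]     # captures observed per ring (toy values)

    # Naive geometric estimate: total captures over the area of the web.
    R = ring_radii[-1]
    total = sum(captures)
    density = total / (math.pi * R ** 2)   # animals per square metre
    print(f"Estimated density: {density:.4f} per m^2")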
NASA Technical Reports Server (NTRS)
Laakso, J. H.; Straayer, J. W.
1973-01-01
Three large scale advanced composite shear web components were tested and analyzed to evaluate application of the design concept to a space shuttle orbiter thrust structure. The shear web design concept consisted of a titanium-clad ±45° boron/epoxy web laminate stiffened with vertical boron/epoxy-reinforced aluminum stiffeners. The design concept was evaluated to be efficient and practical for the application that was studied. Because of the effects of buckling deflections, a requirement is identified for shear-buckling-resistant design to maximize the efficiency of highly loaded advanced composite shear webs. An approximate analysis of prebuckling deflections is presented, and computer-aided design results, which consider prebuckling deformations, indicate that the design concept offers a theoretical weight saving of 31 percent relative to all-metal construction. Recommendations are made for design concept options and analytical methods that are appropriate for production hardware.
Automated geospatial Web Services composition based on geodata quality requirements
NASA Astrophysics Data System (ADS)
Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael
2012-10-01
Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis with a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, this approach represents more precisely the situations of nonconformities with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
Biological Web Service Repositories Review.
Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F
2017-05-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes at the repository level (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients or workflow-based approaches lack relevant information for deciding when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of a repository could serve as a measure prompting workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and used as the measure of the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
A User-centered Model for Web Site Design
Kinzie, Mable B.; Cohn, Wendy F.; Julian, Marti F.; Knaus, William A.
2002-01-01
As the Internet continues to grow as a delivery medium for health information, the design of effective Web sites becomes increasingly important. In this paper, the authors provide an overview of one effective model for Web site design, a user-centered process that includes techniques for needs assessment, goal/task analysis, user interface design, and rapid prototyping. They detail how this approach was employed to design a family health history Web site, Health Heritage.
Study on online community user motif using web usage mining
NASA Astrophysics Data System (ADS)
Alphy, Meera; Sharma, Ajay
2016-04-01
Web usage mining is an application of data mining used to extract useful information about online communities. According to the Indexed Web, the World Wide Web contained at least 4.73 billion pages (at least 228.52 million pages according to the Dutch Indexed Web) on Thursday, 6 August 2015. It is difficult to find the data one needs among these billions of web pages; hence the importance of web usage mining. Personalizing the search engine helps web users identify the most-used data easily, reducing time consumption through automatic site search and automatic recall of useful sites. This study surveys techniques for pattern discovery and analysis in web usage mining from 1996 to 2015, from the earliest to the latest. Analyzing user motifs helps improve business, e-commerce, personalisation and website design.
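A minimal example of the pattern-discovery step such surveys cover: counting page requests per URL from a server log in Common Log Format. The log file name below is hypothetical.

    import re
    from collections import Counter

    # Match the requested path out of a Common Log Format line, e.g.
    # 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
    LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

    hits = Counter()
    with open("access.log") as fh:   # hypothetical server log
        for line in fh:
            m = LOG_LINE.search(line)
            if m:
                hits[m.group(1)] += 1

    # The most-requested pages are the crudest form of a usage pattern.
    for url, n in hits.most_common(10):
        print(n, url)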
Tanikawa, Akio; Shinkai, Akira; Miyashita, Tadashi
2014-11-01
The evolutionary process of the unique web architectures of spiders of the sub-family Cyrtarachninae, which includes the triangular web weaver, bolas spider, and webless spider, is thought to be derived from reduction of orbicular 'spanning-thread webs' resembling ordinal orb webs. A molecular phylogenetic analysis was conducted to explore this hypothesis using orbicular web spiders Cyrtarachne, Paraplectana, Poecilopachys, triangular web spider Pasilobus, bolas spiders Ordgarius and Mastophora, and webless spider Celaenia. The phylogeny inferred from partial sequences of mt-COI, nuclear 18S-rRNA and 28S-rRNA showed that the common ancestor of these spiders diverged into two clades: a spanning-thread web clade and a bolas or webless clade. This finding suggests that the triangular web evolved by reduction of an orbicular spanning web, but that bolas spiders evolved in the early stage, which does not support the gradual web reduction hypothesis.
2016-10-26
U.S. Army Research Institute of Environmental Medicine (USARIEM) designed and conducted a total of three web-administered job analysis...for seven of the Army's most physically demanding jobs, researchers from the USARIEM and Human Performance Systems, Inc. designed three web ...quality of some item responses. 3) This survey was web-administered, and thus participants had limited opportunity to seek feedback about question
Web-based multimedia information retrieval for clinical application research
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.
2001-08-01
We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.
ProteMiner-SSM: a web server for efficient analysis of similar protein tertiary substructures.
Chang, Darby Tien-Hau; Chen, Chien-Yu; Chung, Wen-Chin; Oyang, Yen-Jen; Juan, Hsueh-Fen; Huang, Hsuan-Cheng
2004-07-01
Analysis of protein-ligand interactions is a fundamental issue in drug design. As the detailed and accurate analysis of protein-ligand interactions involves calculation of binding free energy based on thermodynamics and even quantum mechanics, which is highly expensive in terms of computing time, conformational and structural analysis of proteins and ligands has been widely employed as a screening process in computer-aided drug design. In this paper, a web server called ProteMiner-SSM designed for efficient analysis of similar protein tertiary substructures is presented. In one experiment reported in this paper, the web server has been exploited to obtain some clues about a biochemical hypothesis. The main distinction in the software design of the web server is the filtering process incorporated to expedite the analysis. The filtering process extracts the residues located in the caves of the protein tertiary structure for analysis and operates with O(n log n) time complexity, where n is the number of residues in the protein. In comparison, the alpha-hull algorithm, which is a widely used algorithm in computer graphics for identifying those instances that are on the contour of a three-dimensional object, features O(n^2) time complexity. Experimental results show that the filtering process presented in this paper is able to speed up the analysis by a factor ranging from 3.15 to 9.37. The ProteMiner-SSM web server can be found at http://proteminer.csie.ntu.edu.tw/. There is a mirror site at http://p4.sbl.bc.sinica.edu.tw/proteminer/.
Who Goes There? Measuring Library Web Site Usage.
ERIC Educational Resources Information Center
Bauer, Kathleen
2000-01-01
Discusses how libraries can gather data on the use of their Web sites. Highlights include Web server log files, including the common log file, referrer log file, and agent log file; log file limitations; privacy concerns; and choosing log analysis software, both free and commercial. (LRW)
A usability evaluation exploring the design of American Nurses Association state web sites.
Alexander, Gregory L; Wakefield, Bonnie J; Anbari, Allison B; Lyons, Vanessa; Prentice, Donna; Shepherd, Marilyn; Strecker, E Bradley; Weston, Marla J
2014-08-01
National leaders are calling for opportunities to facilitate the Future of Nursing. Opportunities can be encouraged through state nurses association Web sites, which are part of the American Nurses Association, that are well designed, with appropriate content, and in a language professional nurses understand. The American Nurses Association and constituent state nurses associations provide information about nursing practice, ethics, credentialing, and health on Web sites. We conducted usability evaluations to determine compliance with heuristic and ethical principles for Web site design. We purposefully sampled 27 nursing association Web sites and used 68 heuristic and ethical criteria to perform systematic usability assessments of nurse association Web sites. Web site analysis included seven double experts who were all RNs trained in usability analysis. The extent to which heuristic and ethical criteria were met ranged widely from one state that met 0% of the criteria for "help and documentation" to states that met greater than 92% of criteria for "visibility of system status" and "aesthetic and minimalist design." Suggested improvements are simple yet make an impact on a first-time visitor's impression of the Web site. For example, adding internal navigation and tracking features and providing more details about the application process through help and frequently asked question documentation would facilitate better use. Improved usability will improve effectiveness, efficiency, and consumer satisfaction with these Web sites.
miTRATA: a web-based tool for microRNA Truncation and Tailing Analysis.
Patel, Parth; Ramachandruni, S Deepthi; Kakrana, Atul; Nakano, Mayumi; Meyers, Blake C
2016-02-01
We describe miTRATA, the first web-based tool for microRNA Truncation and Tailing Analysis: the analysis of 3' modifications of microRNAs, including the loss or gain of nucleotides relative to the canonical sequence. miTRATA is implemented in Python (version 3) and employs parallel processing modules to enhance its scalability when analyzing multiple small RNA (sRNA) sequencing datasets. It utilizes miRBase, currently version 21, as a source of known microRNAs for analysis. miTRATA notifies user(s) via email to download as well as visualize the results online. miTRATA's strengths lie in (i) its biologist-focused web interface, (ii) improved scalability via parallel processing and (iii) its uniqueness as a webtool to perform microRNA truncation and tailing analysis. miTRATA is developed in Python and PHP. It is available as a web-based application from https://wasabi.dbi.udel.edu/~apps/ta/. Contact: meyers@dbi.udel.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
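The core comparison in truncation/tailing analysis can be sketched simply: align a sequenced read against the canonical miRBase sequence and classify the 3' difference. The function below is an illustration only, not miTRATA's actual algorithm.

    # Hedged sketch: classify a small-RNA read's 3' modification relative to
    # the canonical microRNA sequence (both given 5'->3').
    def classify_3prime(read: str, canonical: str) -> str:
        if read == canonical:
            return "canonical"
        if canonical.startswith(read):
            return f"truncated by {len(canonical) - len(read)} nt"
        # Length of the shared 5' prefix between read and canonical sequence.
        k = 0
        while k < min(len(read), len(canonical)) and read[k] == canonical[k]:
            k += 1
        if k == len(canonical) and len(read) > k:
            return f"tailed with '{read[k:]}'"
        return f"truncated to {k} nt, tailed with '{read[k:]}'"

    # Example: a read matching the canonical sequence plus two extra 3' adenines.
    print(classify_3prime("TGAGGTAGTAGGTTGTATAGTTAA", "TGAGGTAGTAGGTTGTATAGTT"))
    # -> tailed with 'AA'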
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
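As the abstract notes, the server exposes a RESTful API for programmatic access. A hedged sketch of what such a call could look like follows; the route and parameter names are assumptions for illustration, so consult the server documentation for the real interface.

    import requests

    # Hypothetical API route on the Pathview Web server; not the documented one.
    API_URL = "https://pathview.uncc.edu/api/analysis"

    payload = {
        "gene_data": {"hsa:3845": 1.8, "hsa:5290": -1.2},  # KEGG gene IDs, toy values
        "pathway_id": "hsa04151",
        "species": "hsa",
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    print(resp.status_code)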
Buzzell, Paul R; Chamberlain, Valerie M; Pintauro, Stephen J
2002-12-01
This study examined the effectiveness of a series of Web-based, multimedia tutorials on methods of human body composition analysis. Tutorials were developed around four body composition topics: hydrodensitometry (underwater weighing), dual-energy X-ray absorptiometry, bioelectrical impedance analysis, and total body electrical conductivity. Thirty-two students enrolled in the course were randomly assigned to learn the material through either the Web-based tutorials only ("Computer"), a traditional lecture format ("Lecture"), or lectures supplemented with Web-based tutorials ("Both"). All students were administered a validated pretest before randomization and an identical posttest at the completion of the course. The reliability of the test was 0.84. The mean score changes from pretest to posttest were not significantly different among the groups (65.4 ± 17.31, 78.82 ± 21.50, and 76 ± 21.22 for the Computer, Both, and Lecture groups, respectively). Additionally, a Likert-type assessment found equally positive attitudes toward all three formats. The results indicate that Web-based tutorials are as effective as the traditional lecture format for teaching these topics.
ERIC Educational Resources Information Center
Blummer, Barbara; Kenton, Jeffrey M.
2014-01-01
Web 2.0 tools offer academic libraries new avenues for delivering services and resources to students. In this research we report on a content analysis of 100 US community college libraries' Websites for the availability of Web 2.0 applications. We found Web 2.0 tools utilized by 97% of our sample population and many of these sites contained more…
Graphite Web: web tool for gene set analysis exploiting pathway topology
Sales, Gabriele; Calura, Enrica; Martini, Paolo; Romualdi, Chiara
2013-01-01
Graphite web is a novel web tool for pathway analyses and network visualization for gene expression data of both microarray and RNA-seq experiments. Several pathway analyses have been proposed either in the univariate or in the global and multivariate context to tackle the complexity and the interpretation of expression results. These methods can be further divided into ‘topological’ and ‘non-topological’ methods according to their ability to gain power from pathway topology. Biological pathways are, in fact, not only gene lists but can be represented through a network where genes and connections are, respectively, nodes and edges. To this day, the most used approaches are non-topological and univariate although they miss the relationship among genes. On the contrary, topological and multivariate approaches are more powerful, but difficult to be used by researchers without bioinformatic skills. Here we present Graphite web, the first public web server for pathway analysis on gene expression data that combines topological and multivariate pathway analyses with an efficient system of interactive network visualizations for easy results interpretation. Specifically, Graphite web implements five different gene set analyses on three model organisms and two pathway databases. Graphite Web is freely available at http://graphiteweb.bio.unipd.it/. PMID:23666626
Semantic Web technologies for the big data in life sciences.
Wu, Hongyan; Yamaguchi, Atsuko
2014-08-01
The life sciences field is entering an era of big data with the breakthroughs of science and technology. A growing number of big data projects and activities are under way worldwide. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web that aims to provide information that not only humans but also computers can semantically process at large scale. The paper presents a survey of big data in the life sciences, big data-related projects and Semantic Web technologies. It introduces the main Semantic Web technologies and their current state, and provides a detailed analysis of how these technologies address the heterogeneous variety of life sciences big data. The paper thus helps readers understand the role of Semantic Web technologies in the big data era and how they offer a promising solution for big data in the life sciences.
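As a concrete illustration of the Semantic Web technologies the survey discusses, the sketch below loads a tiny RDF graph and runs a SPARQL query with the Python rdflib package; the triples are invented example data, not drawn from the paper.

    from rdflib import Graph

    # A toy life-sciences RDF graph in Turtle syntax (invented example data).
    TURTLE = """
    @prefix ex: <http://example.org/> .
    ex:BRCA1 ex:encodes ex:ProteinBRCA1 ;
             ex:associatedWith ex:BreastCancer .
    ex:TP53  ex:associatedWith ex:BreastCancer .
    """

    g = Graph()
    g.parse(data=TURTLE, format="turtle")

    # SPARQL lets a program, not just a human reader, ask which genes relate to a disease.
    query = """
    PREFIX ex: <http://example.org/>
    SELECT ?gene WHERE { ?gene ex:associatedWith ex:BreastCancer . }
    """
    for row in g.query(query):
        print(row.gene)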
Web site development: applying aesthetics to promote breast health education and awareness.
Thomas, Barbara; Goldsmith, Susan B; Forrest, Anne; Marshall, Renée
2002-01-01
This article describes the process of establishing a Web site as part of a collaborative project using visual art to promote breast health education. The need for a more "user-friendly", comprehensive breast health Web site that is aesthetically rewarding was identified after an analysis of current Web sites. Two predetermined sets of criteria, accountability and aesthetics, were used to analyze these sites and to generate ideas for creating a breast health education Web site using visual art. Results of the analyses are included, as well as the factors to consider for incorporation into a Web site. The process specified is thorough and can be applied to establish a Web site that is aesthetically rewarding and informative for a variety of educational purposes.
Bianco, Luca; Riccadonna, Samantha; Lavezzo, Enrico; Falda, Marco; Formentin, Elide; Cavalieri, Duccio; Toppo, Stefano; Fontana, Paolo
2017-02-01
Pathway Inspector is an easy-to-use web application helping researchers to find patterns of expression in complex RNAseq experiments. The tool combines two standard approaches for RNAseq analysis: the identification of differentially expressed genes and a topology-based analysis of enriched pathways. Pathway Inspector is equipped with ad hoc interactive graphical interfaces simplifying the discovery of modulated pathways and the integration of the differentially expressed genes in the corresponding pathway topology. Pathway Inspector is available at the website http://admiral.fmach.it/PI and has been developed in Python, making use of the Django Web Framework. Contact: paolo.fontana@fmach.it
Understanding and Improving Knowledge Transactions in Command and Control
2003-06-01
implications for the development of tools to facilitate efficient and effective knowledge exchange. Cognitive task analysis (CTA) in support...makers]?" *quotes taken from K-Web cognitive task analysis, Global 2000 and Global 2001 War Games, interviews with Carl Vinson K-Web users following
CrazyEgg Reports for Single Page Analysis
CrazyEgg provides an in-depth look at visitor behavior on a single page. While you can use Google Analytics (GA) for trend analysis of your web area, CrazyEgg helps diagnose the design of a single Web page by visually displaying all visitor clicks during a specified time period.
Fernandez, Nicolas F.; Gundersen, Gregory W.; Rahman, Adeeb; Grimes, Mark L.; Rikova, Klarisa; Hornbeck, Peter; Ma’ayan, Avi
2017-01-01
Most tools developed to visualize hierarchically clustered heatmaps generate static images. Clustergrammer is a web-based visualization tool with interactive features such as zooming, panning, filtering, reordering, sharing, performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer can be used to generate shareable interactive visualizations by uploading a data table to a website, or by embedding Clustergrammer in Jupyter Notebooks. The Clustergrammer core libraries can also be used as a toolkit by developers to generate visualizations within their own applications. Clustergrammer is demonstrated using gene expression data from the Cancer Cell Line Encyclopedia (CCLE), original post-translational modification data collected from lung cancer cell lines by a mass spectrometry approach, and original cytometry by time-of-flight (CyTOF) single-cell proteomics data from blood. Clustergrammer enables producing interactive web-based visualizations for the analysis of diverse biological data. PMID:28994825
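Clustergrammer itself is a JavaScript tool; as a language-neutral sketch of the hierarchical clustering that such heatmap viewers render, the following scipy example (with invented data) computes the row and column orderings a clustered heatmap would display. This is a generic illustration, not Clustergrammer code.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, leaves_list

    rng = np.random.default_rng(0)
    data = rng.normal(size=(20, 8))  # 20 genes x 8 samples, invented data

    # Cluster rows and columns; leaves_list gives the reordered indices
    # that an interactive heatmap viewer would display.
    row_order = leaves_list(linkage(data, method="average"))
    col_order = leaves_list(linkage(data.T, method="average"))

    clustered = data[row_order][:, col_order]  # the matrix as the heatmap shows it
    print(row_order, col_order)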
Frieder, Jessica E; Peterson, Stephanie M; Woodward, Judy; Crane, Jaelee; Garner, Marlane
2009-01-01
This paper describes a technically driven, collaborative approach to assessing the function of problem behavior using web-based technology. A case example is provided to illustrate the process used in this pilot project. A school team conducted a functional analysis with a child who demonstrated challenging behaviors in a preschool setting. Behavior analysts at a university setting provided the school team with initial workshop trainings, on-site visits, e-mail and phone communication, as well as live web-based feedback on functional analysis sessions. The school personnel implemented the functional analysis with high fidelity and scored the data reliably. Outcomes of the project suggest that there is great potential for collaboration via the use of web-based technologies for ongoing assessment and development of effective interventions. However, an empirical evaluation of this model should be conducted before wide-scale adoption is recommended.
Surveying the Commons: Current Implementation of Information Commons Web sites
ERIC Educational Resources Information Center
Leeder, Christopher
2009-01-01
This study assessed the content of 72 academic library Information Commons (IC) Web sites using content analysis, quantitative assessment and qualitative surveys of site administrators to analyze current implementation by the academic library community. Results show that IC Web sites vary widely in content, design and functionality, with few…
Effectiveness of Web-Based Psychological Interventions for Depression: A Meta-Analysis
ERIC Educational Resources Information Center
Cowpertwait, Louise; Clarke, Dave
2013-01-01
Web-based psychological interventions aim to make psychological treatments more accessible and minimize clinician input, but their effectiveness requires further examination. The purposes of the present study are to evaluate the outcomes of web-based interventions for treating depressed adults using meta-analytic techniques, and to examine…
ERIC Educational Resources Information Center
Kumar, David Devraj; Dunn, Jessica
2018-01-01
Analysis of self-reflections of undergraduate education students in a project involving web-supported counterintuitive science demonstrations is reported in this paper. Participating students (N = 19) taught science with counterintuitive demonstrations in local elementary school classrooms and used web-based resources accessed via wireless USB…
A Framework for Open, Flexible and Distributed Learning.
ERIC Educational Resources Information Center
Khan, Badrul H.
Designing open, flexible distance learning systems on the World Wide Web requires thoughtful analysis and investigation combined with an understanding of both the Web's attributes and resources and the ways instructional design principles can be applied to tap the Web's potential. A framework for open, flexible, and distributed learning has been…
An Analysis of Academic Library Web Pages for Faculty
ERIC Educational Resources Information Center
Gardner, Susan J.; Juricek, John Eric; Xu, F. Grace
2008-01-01
Web sites are increasingly used by academic libraries to promote key services and collections to teaching faculty. This study analyzes the content, location, language, and technological features of fifty-four academic library Web pages designed especially for faculty to expose patterns in the development of these pages.
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background: Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to use to explore data from single/dual color microarray experiments. Results: The currently implemented functions are based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion: WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
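Of the functions listed, false discovery rate estimation is the easiest to illustrate compactly. The sketch below is a generic Benjamini-Hochberg procedure in numpy, not WebArray's own implementation (WebArray wraps R/Bioconductor code).

    import numpy as np

    def benjamini_hochberg(pvals):
        """Return BH-adjusted p-values (q-values) for an array of raw p-values."""
        p = np.asarray(pvals, dtype=float)
        n = p.size
        order = np.argsort(p)
        ranked = p[order] * n / np.arange(1, n + 1)      # p_(i) * n / i
        # Enforce monotonicity from the largest rank down, then cap at 1.
        q = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
        out = np.empty(n)
        out[order] = q
        return out

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))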
ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
2016-01-01
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the use of the bioinformatics tools by researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475
Internal Carotid Artery Web as the Cause of Recurrent Cryptogenic Ischemic Stroke.
Antigüedad-Muñoz, Jon; de la Riva, Patricia; Arenaza Choperena, Gorka; Muñoz Lopetegi, Amaia; Andrés Marín, Naiara; Fernández-Eulate, Gorka; Moreno Valladares, Manuel; Martínez Zabaleta, Maite
2018-05-01
Carotid artery web is considered an exceptional cause of recurrent ischemic strokes in the affected arterial territory. The underlying pathology proposed for this entity is an atypical fibromuscular dysplasia. We present the case of a 43-year-old woman with no cardiovascular risk factors who had experienced 2 cryptogenic ischemic strokes in the same arterial territory within an 11-month period. Although all diagnostic tests initially yielded normal results, detailed analysis of the computed tomography angiography images revealed a carotid web; catheter angiography subsequently confirmed the diagnosis. Carotid surgery was performed, since which time the patient has remained completely asymptomatic. The histological finding of intimal hyperplasia is consistent with previously reported cases of carotid artery web. Carotid artery web is an infrequent cause of stroke, and this diagnosis requires a high level of suspicion plus a detailed analysis of vascular imaging studies. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Using EMBL-EBI services via Web interface and programmatically via Web Services
Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish
2015-01-01
The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941
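As a sketch of the programmatic (REST) route this unit describes, the following submits a job to an EMBL-EBI analysis tool and polls for its status. The URL pattern, parameter names and result type follow the common EBI job-dispatcher convention as best recalled here and should be treated as assumptions; check the current EBI documentation for exact endpoints.

    import time
    import requests

    # Assumed job-dispatcher REST pattern for an EBI tool (here: Clustal Omega).
    BASE = "https://www.ebi.ac.uk/Tools/services/rest/clustalo"

    # Submit a multiple sequence alignment job (parameter names are assumptions).
    job_id = requests.post(f"{BASE}/run", data={
        "email": "you@example.org",
        "sequence": ">s1\nMKTAYIAKQR\n>s2\nMKTAYIAKQL\n",
    }).text

    # Poll until the job finishes, then download the alignment.
    while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
        time.sleep(5)
    alignment = requests.get(f"{BASE}/result/{job_id}/aln-clustal_num").text
    print(alignment)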
Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data
NASA Astrophysics Data System (ADS)
Okladnikov, I.; Gordov, E. P.; Titov, A. G.
2011-12-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecasting of climatic and ecosystem changes at various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which can reach tens of terabytes for a single dataset, studies in the area of climate and environmental change require special software support. A dedicated software framework for rapid development of information-computational systems providing such support, based on Web-GIS technologies, has been created. The software framework consists of 3 basic parts: a computational kernel developed using the ITTVIS Interactive Data Language (IDL), a set of PHP controllers run within a specialized web portal, and a JavaScript class library for development of typical components of a web mapping application's graphical user interface (GUI) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server, the OGC-compliant GeoServer software, which is used as a base for presenting cartographic information over the Web, and a set of PHP controllers implementing the web mapping application logic and governing the computational kernel. The JavaScript library for graphical user interface development is based on the GeoExt library, combining the ExtJS framework and OpenLayers software. Based on this software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, meteorological observational data for the territory of the former USSR for the 20th century, and others. The current version of the system is already used in scientific research; in particular, it was recently applied to the analysis of Siberian climate change and its regional impact. The software framework presented allows rapid development of Web-GIS systems for geophysical data analysis, thus providing specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. This work is partially supported by RFBR grants #10-07-00547, #11-05-01190, and SB RAS projects 4.31.1.5, 4.31.2.7, 4, 8, 9, 50 and 66.
NASA Astrophysics Data System (ADS)
Ryazanova, A. A.; Okladnikov, I. G.; Gordov, E. P.
2017-11-01
The frequency of occurrence and magnitude of precipitation and temperature extreme events show positive trends in several geographical regions. These events must be analyzed and studied in order to better understand their impact on the environment, predict their occurrence, and mitigate their effects. For this purpose, we augmented the web-GIS “CLIMATE” with a dedicated statistical package developed in the R language. The web-GIS “CLIMATE” is a software platform for cloud storage, processing and visualization of distributed archives of spatial datasets. It is based on the combined use of web and GIS technologies with reliable procedures for searching, extracting, processing, and visualizing spatial data archives. The system provides a set of thematic online tools for the complex analysis of current and future climate changes and their effects on the environment. The package includes new, powerful methods for time-dependent statistics of extremes, quantile regression and the copula approach for the detailed analysis of various climate extreme events. In particular, the very promising copula approach makes it possible to capture the structural connections between the extremes and various environmental characteristics. The new statistical methods integrated into the web-GIS “CLIMATE” can significantly facilitate and accelerate the complex analysis of climate extremes using only a desktop PC connected to the Internet.
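Of the methods added to the package, quantile regression is the simplest to sketch. The package itself is in R; the Python example below is an analogous illustration with invented data, fitting a trend in the 95th percentile of annual precipitation maxima using statsmodels.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    years = np.arange(1980, 2020)
    # Invented annual precipitation maxima with a mild upward trend in the extremes.
    precip_max = 50 + 0.3 * (years - 1980) + rng.gumbel(0, 8, size=years.size)

    X = sm.add_constant(years.astype(float))
    # Fit the 95th-percentile regression line: a trend in extremes, not in the mean.
    res = sm.QuantReg(precip_max, X).fit(q=0.95)
    print(res.params)  # intercept and slope of the 0.95 quantile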
The Adversarial Route Analysis Tool: A Web Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casson, William H. Jr.
2012-08-02
The Adversarial Route Analysis Tool is a web-based geospatial application similar to Google Maps, but for adversaries: it helps the U.S. government plan operations by predicting where an adversary might be. It is easily accessible and maintainable, and simple to use without much training.
An Evaluative Methodology for Virtual Communities Using Web Analytics
ERIC Educational Resources Information Center
Phippen, A. D.
2004-01-01
The evaluation of virtual community usage and user behaviour has its roots in social science approaches such as interviews, document analysis and surveys. Little evaluation is carried out using traffic or protocol analysis. Business approaches to evaluating customer/business web site usage are more advanced, in particular using advanced web…
Web 2.0 applications in medicine: trends and topics in the literature.
Boudry, Christophe
2015-04-01
The World Wide Web has changed research habits, and these changes were further expanded when "Web 2.0" became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and for mapping the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of the literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) format, processed with custom hypertext preprocessor (PHP) scripts, and then imported into Microsoft Excel 2010 for data processing. A total of 1347 articles were included in this study. The number of articles related to Web 2.0 increased from 2002 to 2012 (the average annual growth rate was 106.3%, with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. The distribution of number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford's law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids Research) produced more than 35 articles related to Web 2.0 over the period studied. A total of 274 words in the field of Web 2.0 were found after manual sorting of the 15,878 words appearing in the title and abstract fields of the articles. Word frequency analysis revealed "blog" as the most recurrent, followed by "wiki", "Web 2.0", "social media", "Facebook", "social networks", "blogger", "cloud computing", "Twitter", and "blogging". All categories of Web 2.0 applications were found, indicating the successful integration of Web 2.0 into the biomedical field. This study shows that the biomedical community is engaged in the use of Web 2.0 and confirms its high level of interest in these tools. Therefore, changes in the ways researchers use information seem to be far from over.
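The query-building step the authors describe can be reproduced against the public PubMed E-utilities. A minimal sketch follows; the search term is illustrative, and the XML download and Excel processing of the paper are omitted.

    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    # Count PubMed articles matching a Web 2.0-style query, as in the bibliometric study.
    params = {
        "db": "pubmed",
        "term": '"social media"[Title/Abstract] OR blog[Title/Abstract]',
        "retmode": "json",
    }
    hits = requests.get(f"{EUTILS}/esearch.fcgi", params=params).json()
    print(hits["esearchresult"]["count"])  # number of matching records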
Wildeboer, Gina; Kelders, Saskia M; van Gemert-Pijnen, Julia E W C
2016-12-01
Research has shown that web-based interventions concerning mental health can be effective, although there is a broad range of effect sizes. Why some interventions are more effective than others is not clear. Persuasive technology is one aspect that has a positive influence on changing attitude and/or behavior, and it can contribute to better outcomes. According to the Persuasive Systems Design Model, various principles can be deployed. It is unknown whether the number and combinations of principles used in a web-based intervention affect its effectiveness. Another issue in web-based interventions is adherence; little is known about the relationship between adherence and the effectiveness of web-based interventions. This study examines whether there is a relationship between the number and combinations of persuasive technology principles used in web-based interventions and their effectiveness. The influence of adherence on the effectiveness of web-based interventions is also investigated. This study elaborates on the systematic review by [37], and the articles were therefore derived from that study. Only web-based interventions intended to be used on more than one occasion were included, and studies were excluded when no information on adherence was provided. 48 interventions targeted at mental health were selected for the current study. Within-group (WG) and between-group (BG) meta-analyses were performed, followed by subgroup analyses of the relationship between the number and combinations of persuasive technology principles and effectiveness. The influence of adherence on effectiveness was examined through a meta-regression analysis. For the WG meta-analysis, 40 treatment groups were included; the BG meta-analysis included 19 studies. The mean pooled effect size in the WG meta-analysis was large and significant (Hedges' g=0.94), while for the BG meta-analysis it was moderate to large and significant (Hedges' g=0.78) in favor of the web-based interventions. With regard to the number of persuasive technology principles, the differences between the effect sizes in the subgroups were significant in the WG subgroup analyses for the total number of principles and for the number of principles in the three categories Primary Task Support, Dialogue Support, and Social Support. In the BG subgroup analyses, only the difference in Primary Task Support was significant. An increase in the total number of principles and in Dialogue Support principles yielded larger effect sizes in the WG subgroup analysis, indicating that more principles lead to better outcomes. The number of principles in Primary Task Support (WG and BG) and Social Support (WG) did not show an upward trend but had varying effect sizes. We identified a number of combinations of principles that were more effective, but only in the WG analyses. The association between adherence and effectiveness was not significant. There is a relationship between the number of persuasive technology principles and the effectiveness of web-based interventions concerning mental health; however, this does not always mean that implementing more principles leads to better outcomes. Regarding the combinations of principles, specific principles seemed to work well together (e.g. tunneling and tailoring; reminders and similarity; social learning and comparison), but adding another principle can diminish the effectiveness (e.g. tunneling, tailoring and reduction).
In this study, an increase in adherence was not associated with larger effect sizes. The findings can help developers decide which persuasive principles to include when designing web-based interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
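For readers unfamiliar with the effect-size metric reported in this abstract, the sketch below computes Hedges' g for a single study from group summary statistics and pools several studies with inverse-variance weights. This is a fixed-effect toy version with invented numbers, not the paper's analysis.

    import numpy as np

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Bias-corrected standardized mean difference and its variance."""
        df = n1 + n2 - 2
        s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
        d = (m1 - m2) / s_pooled
        j = 1 - 3 / (4 * df - 1)          # small-sample bias correction
        g = j * d
        var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
        return g, var_g

    # Invented per-study summaries: (mean, sd, n) for intervention vs control.
    studies = [hedges_g(12.1, 5.0, 40, 15.8, 5.4, 38),
               hedges_g(10.5, 4.2, 55, 13.0, 4.9, 52)]
    g = np.array([s[0] for s in studies])
    w = 1 / np.array([s[1] for s in studies])  # inverse-variance weights
    print("pooled g:", (w * g).sum() / w.sum())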
Empirical analysis of web-based user-object bipartite networks
NASA Astrophysics Data System (ADS)
Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao
2010-05-01
Understanding the structure and evolution of web-based user-object networks is a significant task, since they play a crucial role in e-commerce nowadays. This letter reports an empirical analysis of two large-scale web sites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena that characterize the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.
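A minimal sketch of the kind of analysis reported here: build a small user-object bipartite graph with networkx and read off the two degree sequences. The edges are invented, and the paper's collaborative-similarity index is not reproduced.

    import networkx as nx

    # Invented user-object selections (e.g. users bookmarking web pages).
    edges = [("u1", "o1"), ("u1", "o2"), ("u2", "o2"), ("u3", "o2"), ("u3", "o3")]

    B = nx.Graph(edges)
    users = {n for n, _ in edges}
    objects = set(B) - users

    # Degree distributions for each side of the bipartite network.
    print("user degrees:", {u: B.degree(u) for u in sorted(users)})
    print("object degrees:", {o: B.degree(o) for o in sorted(objects)})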
Nawka, Marie Teresa; Sedlacik, Jan; Frölich, Andreas; Bester, Maxim; Fiehler, Jens; Buhk, Jan-Hendrik
2018-02-10
To evaluate multiparametric MRI, including non-contrast and contrast-enhanced morphological and angiographic techniques, for intracranial aneurysms treated with the single-layer Woven EndoBridge (WEB) embolization system, using digital subtraction angiography (DSA) as the reference standard. We retrospectively identified all patients with incidental and acutely ruptured intracranial aneurysms treated with a WEB device (WEB SL and WEB SLS) between March 2014 and June 2016 in our neurovascular center with early (within 7 days) postinterventional multiparametric MRI as well as mid-term (5-8 months) follow-up MRI and DSA available. Occlusion rates were recorded with both DSA and MR angiography (MRA). In MRI, signal intensities within the WEB, as well as in the occluded dome distal to the WEB, if present, were measured by region-of-interest (ROI) analysis. Twenty-five patients fulfilled the inclusion criteria. Rates of complete/adequate occlusion at mid-term follow-up were 84% with both MRA and DSA. A strong signal loss within the WEB was observed in all MR sequences at initial and follow-up examinations. ROI analysis did not reveal significant differences in non-contrast (P=0.946) or contrast-enhanced imaging (P=0.377). A T1-hyperintense thrombus in the non-WEB-carrying dome was a frequent observation. Signal intensity measurements in multiparametric MRI suggest that neither contrast-enhanced MRA nor morphological sequences are capable of revealing reliable information on the WEB lumen, presumably due to radio frequency shielding. MRI is therefore not suitable for confirming complete thrombus formation within the WEB. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Nelson, J.; Ames, D. P.; Jones, N.; Tarboton, D. G.; Li, Z.; Qiao, X.; Crawley, S.
2016-12-01
As water resources data continue to move to the web in the form of well-defined, open-access, machine-readable web services provided by government, academic, and private institutions, there is increased opportunity to move additional parts of the water science workflow to the web (e.g. analysis, modeling, decision support, and collaboration). Creating such web-based functionality can be extremely time-consuming and resource-intensive, and can lead the erstwhile water scientist down a veritable cyberinfrastructure rabbit hole, through an unintended tunnel of transformation to become a Cyber-Wonderland software engineer. We posit that such transformations were never the intention of the research programs that fund earth science cyberinfrastructure, nor is it in the best interest of water researchers to spend exorbitant effort developing and deploying such technologies. This presentation will introduce a relatively simple and ready-to-use water science web app environment funded by the National Science Foundation that couples the new HydroShare data publishing system with the Tethys Platform web app development toolkit. The coupled system has already been shown to greatly lower the barrier to deploying web-based visualization and analysis tools for the CUAHSI Water Data Center and for the National Weather Service's National Water Model. The design and implementation of the developed web app architecture will be presented, together with key examples of existing apps created using this system. In each of the cases presented, water resources students with basic programming skills were able to develop and deploy highly functional web apps in a relatively short period of time (days to weeks), allowing the focus to remain on water science rather than on cyberinfrastructure. This presentation is accompanied by an open invitation for new collaborations that use the HydroShare-Tethys web app environment.
Portz, Jennifer Dickman; LaMendola, Walter F
2018-05-21
Web-based self-management (web-based SM) interventions provide a potential resource for older adults to engage in their own chronic disease management. The purpose of this study is to investigate the effect of age on participation, retention, and utilization of a web-based SM intervention. This study reports the results of a secondary data analysis of the effects of age in a randomized trial of a web-based diabetes SM intervention. Participation, reasons for nonenrollment, retention, reasons for disenrollment, and website utilization were examined by age using discriminant function analysis, survival analysis, and multivariate analysis of variance, as appropriate. Website utilization by all participants dropped after 6 months but did not vary significantly with age. Though older adults (>60 years of age) were less likely to choose to participate (F = 57.20, p < 0.001), a slight majority of participants in the experiment (53%) were over 66 years of age. Enrolled older adults utilized website management tools at a rate equivalent to younger participants. At termination, they often reported the experiment as burdensome, but tended to stay in the study longer than younger participants. Web-based SM offers a feasible approach for older adults with chronic disease to engage in their health management, but it needs to be improved. Those older adults who passed the rigorous screens for this experiment and chose to participate may have been more likely than younger participants to utilize web-based SM intervention tools. They were more persistent in their use of the web-based SM to try to improve health outcomes and formed definitive opinions about its utility before termination.
NASA Astrophysics Data System (ADS)
Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.
2016-12-01
The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application for scientists offering intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data via a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a selected area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), a high-performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis), and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can use "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen-scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that the data can be re-shared using WaterOneFlow web services.
Web Services as Public Services: Are We Supporting Our Busiest Service Point?
ERIC Educational Resources Information Center
Riley-Huff, Debra A.
2009-01-01
This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)
ERIC Educational Resources Information Center
Yesiltas, Erkan
2016-01-01
Web pedagogical content knowledge generally takes pedagogical knowledge, content knowledge, and Web knowledge as basis. It is a structure emerging through the interaction of these three components. Content knowledge refers to knowledge of subjects to be taught. Pedagogical knowledge involves knowledge of process, implementation, learning methods,…
Scrutinizing the Cybersell: Teen-Targeted Web Sites as Texts
ERIC Educational Resources Information Center
Crovitz, Darren
2007-01-01
Darren Crovitz explains that the explosive growth of Web-based content and communication in recent years compels us to teach students how to examine the "rhetorical nature and ethical dimensions of the online world." He demonstrates successful approaches to accomplish this goal through his analysis of the selling techniques of two Web sites…
Economics: A Discriminant Analysis of Students' Perceptions of Web-Based Learning.
ERIC Educational Resources Information Center
Usip, Ebenge E.; Bee, Richard H.
1998-01-01
Users and nonusers of Web-based instruction (WBI) in an undergraduate statistics classes at Youngstown State University were surveyed. Users concluded that distance learning via the Web was a good method of obtaining general information and useful tool in improving their academic performance. Nonusers thought the university should provide…
A Semiotic Analysis of Icons on the World Wide Web.
ERIC Educational Resources Information Center
Ma, Yan
The World Wide Web allows users to interact with a graphic interface to search information in a hypermedia and multimedia environment. Graphics serve as reference points on the World Wide Web for searching and retrieving information. This study analyzed the culturally constructed syntax patterns, or codes, embedded in the icons of library…
Usability and Gratifications--Towards a Website Analysis Model.
ERIC Educational Resources Information Center
Bunz, Ulla K.
This paper discusses Web site usability issues. Specifically, it assumes that the usability of a Web site depends more on the perception of the user than on the objectively assessable usability criteria of the Web site. Two pilot studies, based on theoretical notions of uses and gratifications theory and similar theories, are presented. In the…
ERIC Educational Resources Information Center
Jakobi, Patricia
1999-01-01
Analysis of Web site images of aging to identify positive and negative representations can help teach students about social perceptions of older adults. Another learning experience involves consideration of the needs of older adults in Web site design. (SK)
Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey
ERIC Educational Resources Information Center
Khamparia, Aditya; Pandey, Babita
2017-01-01
Ontologies are emerging as the best representation techniques for knowledge-based context domains. The continuing need for interoperation, collaboration and effective information retrieval has led to the creation of the semantic web, with the help of tools and reasoners which manage personalized information. The future of the semantic web lies in an ontology…
How Commercial Banks Use the World Wide Web: A Content Analysis.
ERIC Educational Resources Information Center
Leovic, Lydia K.
New telecommunications vehicles expand the possible ways that business is conducted. The hypermedia portion of the Internet, the World Wide Web, is such a telecommunications device. The Web is presently one of the most flexible and dynamic methods for electronic information dissemination. The level of technological sophistication necessary to…
NMRPro: an integrated web component for interactive processing and visualization of NMR spectra.
Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi
2016-07-01
The popularity of using NMR spectroscopy in metabolomics and natural products has driven the development of an array of NMR spectral analysis tools and databases. In particular, web applications have recently become well used because they are platform-independent and easy to extend through reusable web components. Currently available web applications provide the analysis of NMR spectra. However, they still lack the necessary processing and interactive visualization functionalities. To overcome these limitations, we present NMRPro, a web component that can be easily incorporated into current web applications, enabling easy-to-use online interactive processing and visualization. NMRPro integrates server-side processing with client-side interactive visualization through three parts: a Python package to efficiently process large NMR datasets on the server side, a Django app managing server-client interaction, and SpecdrawJS for client-side interactive visualization. Demo and installation instructions are available at http://mamitsukalab.org/tools/nmrpro/. Contact: mohamed@kuicr.kyoto-u.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J
2014-10-01
This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting dimension of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform is demonstrated by its application in a case study where groups of 3D MR images are analyzed and characterized for three diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud. Copyright © 2014 Elsevier Inc. All rights reserved.
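The 3D box-counting estimate the platform computes can be sketched in a few lines of numpy: count occupied boxes at several scales and fit the log-log slope. This is a toy CPU version of what the site offloads to CUDA/WebGL, run here on a random volume rather than a brain MRI.

    import numpy as np

    def box_count_dimension(volume, sizes=(2, 4, 8, 16)):
        """Estimate the box-counting fractal dimension of a 3D binary volume."""
        counts = []
        for s in sizes:
            n = volume.shape[0] // s
            v = volume[:n*s, :n*s, :n*s]
            # Collapse each s x s x s box to a single occupied/empty flag.
            boxes = v.reshape(n, s, n, s, n, s).any(axis=(1, 3, 5))
            counts.append(boxes.sum())
        # Dimension = slope of log(count) versus log(1/size).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    volume = np.random.default_rng(2).random((64, 64, 64)) > 0.7  # toy binary volume
    print(box_count_dimension(volume))  # close to 3 for a dense random volume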
webPIPSA: a web server for the comparison of protein interaction properties
Richter, Stefan; Wenzel, Anne; Stein, Matthias; Gabdoulline, Razif R.; Wade, Rebecca C.
2008-01-01
Protein molecular interaction fields are key determinants of protein functionality. PIPSA (Protein Interaction Property Similarity Analysis) is a procedure to compare and analyze protein molecular interaction fields, such as the electrostatic potential. PIPSA may assist in protein functional assignment, classification of proteins, the comparison of binding properties and the estimation of enzyme kinetic parameters. webPIPSA is a web server that enables the use of PIPSA to compare and analyze protein electrostatic potentials. While PIPSA can be run with downloadable software (see http://projects.eml.org/mcm/software/pipsa), webPIPSA extends and simplifies a PIPSA run. This allows non-expert users to perform PIPSA for their protein datasets. With input protein coordinates, the superposition of protein structures, as well as the computation and analysis of electrostatic potentials, is automated. The results are provided as electrostatic similarity matrices from an all-pairwise comparison of the proteins which can be subjected to clustering and visualized as epograms (tree-like diagrams showing electrostatic potential differences) or heat maps. webPIPSA is freely available at: http://pipsa.eml.org. PMID:18420653
Model Driven Development of Web Services and Dynamic Web Services Composition
2005-01-01
Feature-Oriented Domain Analysis (FODA) is covered in Section 2.4 and Aspect-Oriented Generative Domain Modeling (AOGDM) in Section 2.5, which not only represent two…
JBrowse: a dynamic web platform for genome visualization and analysis.
Buels, Robert; Yao, Eric; Diesh, Colin M; Hayes, Richard D; Munoz-Torres, Monica; Helt, Gregg; Goodstein, David M; Elsik, Christine G; Lewis, Suzanna E; Stein, Lincoln; Holmes, Ian H
2016-04-12
JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. JBrowse is a mature web application suitable for genome visualization and analysis.
Expert system for web based collaborative CAE
NASA Astrophysics Data System (ADS)
Hou, Liang; Lin, Zusheng
2006-11-01
An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experience, theories, typical examples and other related knowledge used in the pre-processing stage of FEA were categorized into analysis-process and object knowledge. An integrated knowledge model based on object-oriented and rule-based methods is then described, followed by the integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning. Finally, the analysis process of this expert system in a web-based CAE application is illustrated with the analysis of a machine tool's column, demonstrating the validity of the system.
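The hybrid reasoning the paper describes (case-based retrieval combined with rules) can be caricatured in a few lines of Python. Everything below, from the case base to the rule, is invented for illustration and is not the paper's knowledge model.

    # Toy hybrid reasoner: retrieve the nearest stored FEA case, then apply rules.
    cases = [  # invented case base: (load_kN, mesh_size_mm) -> recommended element type
        {"load": 10.0, "mesh": 5.0, "element": "shell"},
        {"load": 80.0, "mesh": 2.0, "element": "solid"},
    ]

    def retrieve(query):
        """Case-based step: nearest neighbour on roughly normalized features."""
        return min(cases, key=lambda c: abs(c["load"] - query["load"]) / 100
                                      + abs(c["mesh"] - query["mesh"]) / 10)

    def apply_rules(query, case):
        """Rule-based step: domain rules can override the retrieved case."""
        if query["load"] > 50.0:      # invented rule: heavy loads need solid elements
            return "solid"
        return case["element"]

    query = {"load": 65.0, "mesh": 4.0}
    print(apply_rules(query, retrieve(query)))  # -> "solid"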
Experimental evaluation of two 36 inch by 47 inch graphite/epoxy sandwich shear webs
NASA Technical Reports Server (NTRS)
Bush, H. G.
1975-01-01
The design and testing of two large (36 in. x 47 in.) graphite/epoxy sandwich shear webs are described. One sandwich web was designed to exhibit strength failure of the facings at a shear load of 7638 lbs/in., which is a characteristic loading for the space shuttle orbiter main engine thrust beam structure. The second sandwich web was designed to exhibit general instability failure at a shear load of 5000 lbs/in., to identify problem areas of stability-critical sandwich webs and to assess the adequacy of contemporary analysis techniques.
Using Heuristic Task Analysis to Create Web-Based Instructional Design Theory
ERIC Educational Resources Information Center
Fiester, Herbert R.
2010-01-01
The first purpose of this study was to identify procedural and heuristic knowledge used when creating web-based instruction. The second purpose of this study was to develop suggestions for improving the Heuristic Task Analysis process, a technique for eliciting, analyzing, and representing expertise in cognitively complex tasks. Three expert…
USDA-ARS?s Scientific Manuscript database
Dynamic Assessment of Microbial Ecology (DAME) is a Shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene sequenci...
Analysis of SciFinder Scholar and Web of Science Citation Searches.
ERIC Educational Resources Information Center
Whitley, Katherine M.
2002-01-01
With "Chemical Abstracts" and "Science Citation Index" both now available for citation searching, this study compares the duplication and uniqueness of citing references for works of chemistry researchers for the years 1999-2001. The two indexes cover very similar source material. This analysis of SciFinder Scholar and Web of…
RSAT 2018: regulatory sequence analysis tools 20th anniversary.
Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane
2018-05-02
RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.
Karyotaki, E; Kleiboer, A; Smit, F; Turner, D T; Pastor, A M; Andersson, G; Berger, T; Botella, C; Breton, J M; Carlbring, P; Christensen, H; de Graaf, E; Griffiths, K; Donker, T; Farrer, L; Huibers, M J H; Lenndin, J; Mackinnon, A; Meyer, B; Moritz, S; Riper, H; Spek, V; Vernmark, K; Cuijpers, P
2015-10-01
It is well known that web-based interventions can be effective treatments for depression. However, dropout rates in web-based interventions are typically high, especially in self-guided web-based interventions. Rigorous empirical evidence regarding factors influencing dropout in self-guided web-based interventions is lacking due to small study sample sizes. In this paper we examined predictors of dropout in an individual patient data meta-analysis to gain a better understanding of who may benefit from these interventions. A comprehensive literature search for all randomized controlled trials (RCTs) of psychotherapy for adults with depression from 2006 to January 2013 was conducted. Next, we approached authors to collect the primary data of the selected studies. Predictors of dropout, such as socio-demographic, clinical, and intervention characteristics were examined. Data from 2705 participants across ten RCTs of self-guided web-based interventions for depression were analysed. The multivariate analysis indicated that male gender [relative risk (RR) 1.08], lower educational level (primary education, RR 1.26) and co-morbid anxiety symptoms (RR 1.18) significantly increased the risk of dropping out, while for every additional 4 years of age, the risk of dropping out significantly decreased (RR 0.94). Dropout can be predicted by several variables and is not randomly distributed. This knowledge may inform tailoring of online self-help interventions to prevent dropout in identified groups at risk.
2015-05-01
Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE). Presented May 13-14, 2015; report date May 2015. The briefing's methods cover the Lexical Link Analysis (LLA) core, LLA reports and visualizations, and Collaborative Learning Agents (CLA).
Vidjil: A Web Platform for Analysis of High-Throughput Repertoire Sequencing.
Duez, Marc; Giraud, Mathieu; Herbert, Ryan; Rocher, Tatiana; Salson, Mikaël; Thonier, Florian
2016-01-01
The B and T lymphocytes are white blood cells playing a key role in adaptive immunity. A part of their DNA, called the V(D)J recombinations, is specific to each lymphocyte and enables recognition of specific antigens. Today, with new sequencing techniques, one can get billions of DNA sequences from these regions. With dedicated Repertoire Sequencing (RepSeq) methods, it is now possible to picture populations of lymphocytes and to monitor more accurately the immune response as well as pathologies such as leukemia. Vidjil is an open-source platform for the interactive analysis of high-throughput sequencing data from lymphocyte recombinations. It contains an algorithm gathering reads into clonotypes according to their V(D)J junctions, a web application made of a sample, experiment and patient database, and a visualization for the analysis of clonotypes over time. Vidjil is implemented in C++, Python and Javascript and licensed under the GPLv3 open-source license. Source code, binaries and a public web server are available at http://www.vidjil.org and at http://bioinfo.lille.inria.fr/vidjil. Using the Vidjil web application consists of four steps: 1. uploading a raw sequence file (typically a FASTQ); 2. running RepSeq analysis software; 3. visualizing the results; 4. annotating the results and saving them for future use. For the end user, the Vidjil web application needs no specific installation and just requires a connection and a modern web browser. Vidjil is used by labs in hematology and immunology for research and clinical applications.
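The clonotype-gathering step can be illustrated with a toy version in Python: group reads by an already-extracted junction sequence and count them. Vidjil's real algorithm detects the V(D)J junction within each raw read; the sequences below are invented.

    from collections import Counter

    # Invented reads, each reduced here to its extracted V(D)J junction sequence.
    junctions = [
        "TGTGCCAGCAGCTTAG",
        "TGTGCCAGCAGCTTAG",
        "TGTGCCTGGACAGGGG",
        "TGTGCCAGCAGCTTAG",
    ]

    # A clonotype = all reads sharing the same junction; report sizes, largest first.
    clonotypes = Counter(junctions)
    for junction, n_reads in clonotypes.most_common():
        print(f"{junction}\t{n_reads} reads")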
NASA Astrophysics Data System (ADS)
Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.
2014-07-01
The cosmic web is the largest-scale manifestation of the anisotropic gravitational collapse of matter. It represents the transitional stage between linear and non-linear structures and contains easily accessible information about the early phases of structure formation. Here we investigate the characteristics and the time evolution of its morphological components. Our analysis involves the application of the NEXUS Multiscale Morphology Filter technique, predominantly its NEXUS+ version, to high-resolution and large-volume cosmological simulations. We quantify the cosmic web components in terms of their mass and volume content, their density distribution and their halo populations. We employ new analysis techniques to determine the spatial extent of filaments and sheets, such as their total length and local width. This analysis identifies clusters and filaments as the most prominent components of the web. In contrast, while voids and sheets take up most of the volume, they correspond to underdense environments and are devoid of group-sized and more massive haloes. At early times the cosmos is dominated by tenuous filaments and sheets, which, during subsequent evolution, merge together, such that the present-day web is dominated by fewer, but much more massive, structures. The analysis of mass transport between environments clearly shows how matter flows from voids into walls, and then via filaments into cluster regions, which form the nodes of the cosmic web. We also study the properties of individual filamentary branches and find long, almost straight filaments extending to distances larger than 100 h-1 Mpc. These constitute the bridges between massive clusters, which seem to form along approximately straight lines.
Vidjil: A Web Platform for Analysis of High-Throughput Repertoire Sequencing
Duez, Marc; Herbert, Ryan; Rocher, Tatiana; Salson, Mikaël; Thonier, Florian
2016-01-01
Background B and T lymphocytes are white blood cells that play a key role in adaptive immunity. A part of their DNA, called the V(D)J recombinations, is specific to each lymphocyte and enables recognition of specific antigens. Today, with new sequencing techniques, one can obtain billions of DNA sequences from these regions. With dedicated Repertoire Sequencing (RepSeq) methods, it is now possible to picture populations of lymphocytes and to monitor more accurately the immune response as well as pathologies such as leukemia. Methods and Results Vidjil is an open-source platform for the interactive analysis of high-throughput sequencing data from lymphocyte recombinations. It contains an algorithm gathering reads into clonotypes according to their V(D)J junctions, a web application made of a sample, experiment and patient database, and a visualization for the analysis of clonotypes over time. Vidjil is implemented in C++, Python and JavaScript and licensed under the GPLv3 open-source license. Source code, binaries and a public web server are available at http://www.vidjil.org and at http://bioinfo.lille.inria.fr/vidjil. Using the Vidjil web application consists of four steps: 1. uploading a raw sequence file (typically a FASTQ); 2. running RepSeq analysis software; 3. visualizing the results; 4. annotating the results and saving them for future use. For the end-user, the Vidjil web application needs no specific installation and requires only an internet connection and a modern web browser. Vidjil is used by labs in hematology and immunology for research and clinical applications. PMID:27835690
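A minimal Python sketch of the clonotype-gathering idea described above: reads are grouped and counted by their detected V(D)J junction. The junction_of callable is a stand-in for Vidjil's actual seed-based junction detection (implemented in C++), so this illustrates only the data flow, not the real algorithm.

    from collections import defaultdict

    def clonotype_counts(reads, junction_of):
        # Group reads into clonotypes keyed by their V(D)J junction.
        # `junction_of` stands in for Vidjil's seed-based detection; any
        # callable mapping a read to a junction string (or None when no
        # recombination is found) will do here.
        clones = defaultdict(int)
        for read in reads:
            junction = junction_of(read)
            if junction is not None:
                clones[junction] += 1
        return dict(clones)

    # Toy usage: a fake "detector" that just takes the middle 21-mer.
    reads = ["ACGT" * 20, "ACGT" * 20, "TTTT" * 20]
    middle = lambda r: r[len(r) // 2 - 10 : len(r) // 2 + 11]
    print(clonotype_counts(reads, middle))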
SWS: accessing SRS sites contents through Web Services.
Romano, Paolo; Marra, Domenico
2008-03-26
Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed with these technologies in mind, but many existing databases do not allow programmatic access, so only a fraction of available databanks can be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and the SRS sites themselves. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and by selecting the most up-to-date among available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
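Because SWS is described through WSDL, a client can be generated automatically from the service description. A minimal Python sketch using the zeep SOAP library follows; the WSDL URL and the getDatabanks operation are hypothetical placeholders, not the actual SWS interface.

    from zeep import Client

    # Hypothetical WSDL URL; the real SWS service publishes its own.
    client = Client("http://example.org/sws/service?wsdl")
    # Hypothetical operation: list databanks known to srsdb for a site.
    result = client.service.getDatabanks(site="some-srs-site")
    print(result)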
A performance study of WebDav access to storages within the Belle II collaboration
NASA Astrophysics Data System (ADS)
Pardi, S.; Russo, G.
2017-10-01
WebDav and HTTP are becoming popular protocols for data access in the High Energy Physics community, and the most widely used Grid and Cloud storage solutions provide such interfaces. In this scenario, tuning and performance evaluation become crucial to promoting the adoption of these protocols within the Belle II community. In this work, we present the results of a large-scale test activity carried out to evaluate the performance and reliability of the WebDav protocol and to study its possible adoption for user analysis. More specifically, we considered a pilot infrastructure composed of a set of storage elements configured with the WebDav interface, hosted at the Belle II sites. The performance tests include a comparison with xrootd and gridftp. As reference tests we used a set of analysis jobs running under the Belle II software framework, accessing the input data with the ROOT I/O library, in order to simulate realistic user activity as closely as possible. The final analysis shows that promising performance can be achieved with WebDav on different storage systems, and provides useful feedback for the Belle II community and for other high energy physics experiments.
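The paper's benchmarks run full Belle II analysis jobs through the ROOT I/O layer; as a much cruder illustration of the kind of measurement involved, the Python sketch below times a partial-file read over HTTP/WebDav. The endpoint URL is a hypothetical placeholder.

    import time
    import requests

    # Hypothetical WebDav endpoint on a Belle II storage element.
    url = "https://storage.example.org/webdav/belle2/sample.root"

    start = time.perf_counter()
    # Fetch a 64 MiB slice as a crude stand-in for partial-file access.
    resp = requests.get(url, headers={"Range": "bytes=0-67108863"},
                        stream=True, timeout=60)
    nbytes = sum(len(chunk) for chunk in resp.iter_content(1 << 20))
    elapsed = time.perf_counter() - start
    print(f"{nbytes / elapsed / 1e6:.1f} MB/s")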
Food-web stability signals critical transitions in temperate shallow lakes
Kuiper, Jan J.; van Altena, Cassandra; de Ruiter, Peter C.; van Gerven, Luuk P. A.; Janse, Jan H.; Mooij, Wolf M.
2015-01-01
A principal aim of ecologists is to identify critical levels of environmental change beyond which ecosystems undergo radical shifts in their functioning. Both food-web theory and alternative stable states theory provide fundamental clues to mechanisms conferring stability to natural systems. Yet, it is unclear how the concept of food-web stability is associated with the resilience of ecosystems susceptible to regime change. Here, we use a combination of food web and ecosystem modelling to show that impending catastrophic shifts in shallow lakes are preceded by a destabilizing reorganization of interaction strengths in the aquatic food web. Analysis of the intricate web of trophic interactions reveals that only few key interactions, involving zooplankton, diatoms and detritus, dictate the deterioration of food-web stability. Our study exposes a tight link between food-web dynamics and the dynamics of the whole ecosystem, implying that trophic organization may serve as an empirical indicator of ecosystem resilience. PMID:26173798
Characteristics of food industry web sites and "advergames" targeting children.
Culp, Jennifer; Bell, Robert A; Cassady, Diana
2010-01-01
To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty, and to document health-promoting messages on these Web sites. A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A total of 290 Web pages and 247 unique games on 19 Internet sites were examined. Games, found on 81% of Web sites, were the predominant promotion strategy used. All games had at least 1 brand identifier, with logos being most frequently used. On average, Web sites contained 1 "healthful" message for every 45 exposures to brand identifiers. Food companies use Web sites to extend their television advertising to promote brand loyalty among children. These sites almost exclusively promoted food items high in sugar and fat. Health professionals need to monitor food industry marketing practices used in "new media." Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Oosthoek, J. H. P.; Flahaut, J.; Rossi, A. P.; Baumann, P.; Misev, D.; Campalani, P.; Unnithan, V.
2014-06-01
PlanetServer is a WebGIS system, currently under development, enabling the online analysis of Compact Reconnaissance Imaging Spectrometer (CRISM) hyperspectral data from Mars. It is part of the EarthServer project which builds infrastructure for online access and analysis of huge Earth Science datasets. Core functionality consists of the rasdaman Array Database Management System (DBMS) for storage, and the Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) for data querying. Various WCPS queries have been designed to access spatial and spectral subsets of the CRISM data. The client WebGIS, built mainly on the OpenLayers JavaScript library, uses these queries to enable online spatial and spectral analysis. Currently the PlanetServer demonstration consists of two CRISM Full Resolution Target (FRT) observations, surrounding the NASA Curiosity rover landing site. A detailed analysis of one of these observations is performed in the Case Study section. The current PlanetServer functionality is described step by step, and is tested by focusing on detecting mineralogical evidence described in earlier Gale crater studies. Both the PlanetServer methodology and its possible use for mineralogical studies will be further discussed. Future work includes batch ingestion of CRISM data and further development of the WebGIS and analysis tools.
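A WCPS query selects spatial or spectral subsets of a coverage server-side, so only the result travels to the browser. The Python sketch below submits such a query over HTTP; the endpoint, coverage name, and band index are hypothetical, though the request form follows the OGC WCS 2.0 ProcessCoverages convention used by rasdaman's petascope component.

    import requests

    # Hypothetical PlanetServer/rasdaman endpoint and coverage name.
    endpoint = "http://planetserver.example.org/rasdaman/ows"
    wcps = 'for c in (frt0000xyz) return encode(c[ band(100) ], "png")'
    resp = requests.get(endpoint, params={
        "service": "WCS", "version": "2.0.1",
        "request": "ProcessCoverages", "query": wcps,
    })
    with open("band100.png", "wb") as f:
        f.write(resp.content)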
Mental Constructions and Constructions of Web Sites: Learner and Teacher Points of View
ERIC Educational Resources Information Center
Hazzan, Orit
2004-01-01
This research focuses on knowledge and ways in which knowledge may be constructed in the learner's mind. Specifically, it addresses the Web as a cognitive supporter for learning, organising and constructing a new domain of knowledge. In particular, the research analyses student reflection on constructing web sites. The analysis is based on an…
ERIC Educational Resources Information Center
Stanford, Roger John
2012-01-01
Web-conferencing software was chosen for course delivery to provide flexible options for students at a two-year technical college. Students used technology to access a live, synchronous microeconomics course over the internet instead of a traditional face-to-face lecture. This investigation studied the impact of implementing web-conferencing…
Resource Needs and Pedagogical Value of Web Mapping for Spatial Thinking
ERIC Educational Resources Information Center
Manson, Steven; Shannon, Jerry; Eria, Sami; Kne, Len; Dyke, Kevin; Nelson, Sara; Batra, Lalit; Bonsal, Dudley; Kernik, Melinda; Immich, Jennifer; Matson, Laura
2014-01-01
Web mapping involves publishing and using maps via the Internet, and can range from presenting static maps to offering dynamic data querying and spatial analysis. Web mapping is seen as a promising way to support development of spatial thinking in the classroom but there are unanswered questions about how this promise plays out in reality. This…
Research on Webbed Connectivity in a Web-Based Learning Environment: Online Social Work Education
ERIC Educational Resources Information Center
Noble, Dorinda; Russell, Amy Catherine
2013-01-01
This paper describes the preliminary data and analysis of how students in an online MSW program perceive their experiences, interactions, and responses to learning structure, material, and technology in the Web environment. The student perceptions, which have been used to refine the online program, highlight how important it is to students to feel…
An Analysis of HTML and CSS Syntax Errors in a Web Development Course
ERIC Educational Resources Information Center
Park, Thomas H.; Dorn, Brian; Forte, Andrea
2015-01-01
Many people are first exposed to code through web development, yet little is known about the barriers beginners face in these formative experiences. In this article, we describe a study of undergraduate students enrolled in an introductory web development course taken by both computing majors and general education students. Using data collected…
An Analysis of Multiple Factors Affecting Retention in Web-Based Community College Courses
ERIC Educational Resources Information Center
Doherty, William
2006-01-01
The current study examined four factors affecting retention in Web-based community college courses. Analyses were conducted on student demographics, student learning styles, course communication and external factors. The results suggest that Web-based courses are more attractive to busy students who are also more likely to fail or drop the course.…
Searching the Web: The Public and Their Queries.
ERIC Educational Resources Information Center
Spink, Amanda; Wolfram, Dietmar; Jansen, Major B. J.; Saracevic, Tefko
2001-01-01
Reports findings from a study of searching behavior by over 200,000 users of the Excite search engine. Analysis of over one million queries revealed most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. Concludes that Web searching by the public differs significantly from searching of…
Learning to Design WebQuests: An Exploration in Preservice Social Studies Education
ERIC Educational Resources Information Center
Bates, Alisa
2008-01-01
The effective use of technology in social studies methods courses is an under-researched field. This study focused on the development of WebQuests to engage teacher candidates' exploration of the Internet as an authentic medium for inquiry in social studies education. Analysis of appropriateness of tasks in the WebQuests, depth of ideas and audience…
ERIC Educational Resources Information Center
Young, Shelley Shwu-Ching; Huang, Yi-Long; Jang, Jyh-Shing Roger
2000-01-01
Describes the development and implementation process of a Web-based science museum in Taiwan. Topics include use of the Internet; lifelong distance learning; museums and the Internet; objectives of the science museum; funding; categories of exhibitions; analysis of Web users; homepage characteristics; graphics and the effect on speed; and future…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
....commentworks.com/ftc/scistewartconsent by following the instructions on the web-based form. If you prefer to... Wide Web, at http://www.ftc.gov/os/actions.shtm . A paper copy can be obtained from the FTC Public... extent practicable, on the public Commission Web site, at http://www.ftc.gov/os/publiccomments.shtm . As...
Web-Based OPACs in Indian Academic Libraries: A Functional Comparison
ERIC Educational Resources Information Center
Kapoor, Kanta; Goyal, O. P.
2007-01-01
Purpose: The paper seeks to provide a comparative analysis of the functionality of five web-based OPACs available in Indian academic libraries. Design/methodology/approach: Same-topic searches were carried out by three researchers on the web-based OPACs of Libsys, VTLS's iPortal, NewGenLib, Troodon, and Alice for Windows, implemented in five…
An Analysis of Dialogistic Presence on Community College Web Sites in Nine Mega-States
ERIC Educational Resources Information Center
Shadinger, David Allen
2010-01-01
The institutional web site is ubiquitous and has emerged as nearly universal in its utilization as a recruiting and informational tool for the twenty-first century community college. Community colleges have embraced utilization of the Internet through the establishment of institutional web sites containing volumes of information, forms, and links.…
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
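The microformat idea described above is simply structured, class-tagged HTML that a tool can parse back into data. The Python sketch below shows the concept with BeautifulSoup; the class names are illustrative and do not reproduce the exact Gaggle microformat.

    from bs4 import BeautifulSoup

    # Illustrative markup: structured data riding along inside HTML.
    html = """
    <div class="gaggle-data">
      <span class="gaggle-name">anaerobic up-regulated genes</span>
      <ul class="gaggle-namelist"><li>VNG1234</li><li>VNG5678</li></ul>
    </div>
    """
    soup = BeautifulSoup(html, "html.parser")
    block = soup.find(class_="gaggle-data")
    name = block.find(class_="gaggle-name").get_text(strip=True)
    genes = [li.get_text(strip=True)
             for li in block.select(".gaggle-namelist li")]
    print(name, genes)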
Accredited hand surgery fellowship Web sites: analysis of content and accessibility.
Trehan, Samir K; Morrell, Nathan T; Akelman, Edward
2015-04-01
To assess the accessibility and content of accredited hand surgery fellowship Web sites. A list of all accredited hand surgery fellowships was obtained from the online database of the American Society for Surgery of the Hand (ASSH). Fellowship program information on the ASSH Web site was recorded. All fellowship program Web sites were located via Google search. Fellowship program Web sites were analyzed for accessibility and content in 3 domains: program overview, application information/recruitment, and education. At the time of this study, there were 81 accredited hand surgery fellowships with 169 available positions. Thirty of 81 programs (37%) had a functional link on the ASSH online hand surgery fellowship directory; however, Google search identified 78 Web sites. Three programs did not have a Web site. Analysis of content revealed that most Web sites contained contact information, whereas information regarding the anticipated clinical, research, and educational experiences during fellowship was less often present. Furthermore, information regarding past and present fellows, salary, application process/requirements, call responsibilities, and case volume was frequently lacking. Overall, 52 of 81 programs (64%) had the minimal online information required for residents to independently complete the fellowship application process. Hand fellowship program Web sites could be accessed either via the ASSH online directory or Google search, except for 3 programs that did not have Web sites. Although most fellowship program Web sites contained contact information, other content such as application information/recruitment and education, was less frequently present. This study provides comparative data regarding the clinical and educational experiences outlined on hand fellowship program Web sites that are of relevance to residents, fellows, and academic hand surgeons. This study also draws attention to various ways in which the hand surgery fellowship application process can be made more user-friendly and efficient. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Charbonneau, Deborah H
2013-09-01
As the Internet is a source of information for many health consumers, there is a need to evaluate the information about prescription drugs provided on pharmaceutical manufacturers' web sites. Using a sample of pharmaceutical manufacturers' web sites for the treatment of menopause, the main objective of this study was to evaluate consumer-oriented information about benefits and risks of prescription drugs for the treatment of menopause provided on pharmaceutical web sites. Pharmaceutical manufacturers' web sites for analysis were identified using a list of U.S. FDA-approved hormone therapies for the treatment of menopause. This study revealed substantial gaps in how benefits and risk information were presented on the web sites. Specifically, information about the benefits was prominent while risk information was incomplete and challenging to find. Further, references to the scientific literature to support claims advertised about prescription drug benefits were not provided. Given the lack of scientific evidence to support claims of benefits and limited disclosure about risks, more information is needed for consumers to be able to weigh the benefits and risks of these treatments for menopause. Overall, these findings provide guidance for evaluating drug information provided on pharmaceutical web sites. © 2013 The author. Health Information and Libraries Journal © 2013 Health Libraries Group.
BIOSMILE web search: a web application for annotating biomedical entities and relations.
Dai, Hong-Jie; Huang, Chi-Hsin; Lin, Ryan T K; Tsai, Richard Tzong-Han; Hsu, Wen-Lian
2008-07-01
BIOSMILE web search (BWS) is a web-based NCBI PubMed search application that can analyze articles for selected biomedical verbs and give users relational information, such as subject, object, location, manner, time, etc. After receiving keyword query input, BWS retrieves matching PubMed abstracts and lists them along with snippets in order of relevancy to protein-protein interaction. Users can then select articles for further analysis, and BWS will find and mark up biomedical relations in the text. The analysis results can be viewed in the abstract text or in table form. To date, BWS has been field tested by over 30 biologists, and questionnaires have shown that subjects are highly satisfied with its capabilities and usability. BWS is accessible free of charge at http://bioservices.cse.yzu.edu.tw/BWS.
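The retrieval step, querying PubMed for abstracts matching a keyword, can be reproduced with NCBI's E-utilities, for example via Biopython; BWS's relevancy ranking and semantic role labeling are of course far more elaborate. A minimal sketch:

    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI requires a contact address
    ids = Entrez.read(Entrez.esearch(db="pubmed",
                                     term="protein protein interaction",
                                     retmax=5))["IdList"]
    handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                           rettype="abstract", retmode="text")
    print(handle.read())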
Jue, J Jane S; Metlay, Joshua P
2011-11-01
Web-based health resources on college websites have the potential to reach a substantial number of college students. The objective of this study was to characterize how colleges use their websites to educate about and promote health. This study was a cross-sectional analysis of websites from a nationally representative sample of 426 US colleges. Reviewers abstracted information about Web-based health resources from college websites, namely health information, Web links to outside health resources, and interactive Web-based health programs. Nearly 60% of US colleges provided health resources on their websites, 49% provided health information, 48% provided links to outside resources, and 28% provided interactive Web-based health programs. The most common topics of Web-based health resources were mental health and general health. We found widespread presence of Web-based health resources available from various delivery modes and covering a range of health topics. Although further research in this new modality is warranted, Web-based health resources hold promise for reaching more US college students.
NASA Technical Reports Server (NTRS)
Coe, H. H.; Lynch, J. E.
1973-01-01
Three-dimensional stress distributions were calculated for both a regular drilled ball and a drilled ball with a stiffening web. The balls were 20.6 mm (0.8125 in.) in diameter and had a 12.6 mm (0.496 in.) diameter concentric hole. The stiffening web was 1.5 mm (0.06 in.) thick. The calculations showed that a large reversing tangential stress at the hole bore was reduced by one-half by the addition of the web.
Online nutrition information for pregnant women: a content analysis.
Storr, Tayla; Maher, Judith; Swanepoel, Elizabeth
2017-04-01
Pregnant women actively seek health information online, including nutrition and food-related topics. However, the accuracy and readability of this information have not been evaluated. The aim of this study was to describe and evaluate pregnancy-related food and nutrition information available online. Four search engines were used to search for pregnancy-related nutrition web pages. Content analysis of web pages was performed. Web pages were assessed against the 2013 Australian Dietary Guidelines to assess accuracy. Flesch-Kincaid (F-K), Simple Measure of Gobbledygook (SMOG), Gunning Fog Index (FOG) and Flesch reading ease (FRE) formulas were used to assess readability. Data was analysed descriptively. Spearman's correlation was used to assess the relationship between web page characteristics. Kruskal-Wallis test was used to check for differences among readability and other web page characteristics. A total of 693 web pages were included. Web page types included commercial (n = 340), not-for-profit (n = 113), blogs (n = 112), government (n = 89), personal (n = 36) and educational (n = 3). The accuracy of online nutrition information varied with 39.7% of web pages containing accurate information, 22.8% containing mixed information and 37.5% containing inaccurate information. The average reading grade of all pages analysed measured by F-K, SMOG and FOG was 11.8. The mean FRE was 51.6, a 'fairly difficult to read' score. Only 0.5% of web pages were written at or below grade 6 according to F-K, SMOG and FOG. The findings suggest that accuracy of pregnancy-related nutrition information is a problem on the internet. Web page readability is generally difficult and means that the information may not be accessible to those who cannot read at a sophisticated level. © 2016 John Wiley & Sons Ltd. © 2016 John Wiley & Sons Ltd.
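The readability scores used above are simple functions of sentence, word, and syllable counts: Flesch reading ease is 206.835 - 1.015*(words per sentence) - 84.6*(syllables per word), and the Flesch-Kincaid grade is 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59. A minimal Python sketch with a crude vowel-group syllable heuristic (real tools use dictionaries and better hyphenation rules):

    import re

    def syllables(word):
        # Crude vowel-group heuristic; real tools use dictionaries.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        n_sent = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_syll = sum(syllables(w) for w in words)
        wps, spw = len(words) / n_sent, n_syll / len(words)
        fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch reading ease
        fk = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid grade
        return round(fre, 1), round(fk, 1)

    print(readability("Pregnant women actively seek health information online."))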
PanWeb: A web interface for pan-genomic analysis.
Pantoja, Yan; Pinheiro, Kenny; Veras, Allan; Araújo, Fabrício; Lopes de Sousa, Ailton; Guimarães, Luis Carlos; Silva, Artur; Ramos, Rommel T J
2017-01-01
With the increased production of genomic data since the advent of next-generation sequencing (NGS), there has been a need to develop new bioinformatics tools and areas, such as comparative genomics. In comparative genomics, the genetic material of an organism is directly compared to that of another organism to better understand biological species. Moreover, the exponentially growing number of deposited prokaryote genomes has enabled the investigation of several genomic characteristics that are intrinsic to certain species. Thus, a new approach to comparative genomics, termed pan-genomics, was developed. In pan-genomics, various organisms of the same species or genus are compared. Currently, there are many tools that can perform pan-genomic analyses, such as PGAP (Pan-Genome Analysis Pipeline), Panseq (Pan-Genome Sequence Analysis Program) and PGAT (Prokaryotic Genome Analysis Tool). Among these software tools, PGAP was developed in the Perl scripting language, and its reliance on UNIX terminals and its requirement for an extensively parameterized command line can be a problem for users without previous computational experience. Thus, the aim of this study was to develop a web application, known as PanWeb, that serves as a graphical interface for PGAP. In addition, using the output files of the PGAP pipeline, the application generates graphics using custom-developed scripts in the R programming language. PanWeb is freely available at http://www.computationalbiology.ufpa.br/panweb.
IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application
NASA Astrophysics Data System (ADS)
Gopu, A.; Hayashi, S.; Young, M. D.
2014-05-01
Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers much of the basic visualization and analysis functionality commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser, adjust the intensity min/max cutoff or its scaling function and the zoom level, apply color-maps, view position and FITS header information, execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.
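The tiling idea, precomputing lossless PNGs from FITS data with an intensity cutoff applied, can be sketched in a few lines of Python with astropy and Pillow; the file names and percentile cutoffs below are illustrative.

    import numpy as np
    from astropy.io import fits
    from PIL import Image

    # Linear min/max scaling of one FITS image to an 8-bit PNG, the
    # kind of lossless tile a browser canvas can display.
    data = fits.getdata("image.fits").astype(float)
    lo, hi = np.percentile(data, [1.0, 99.0])   # intensity cutoffs
    scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
    Image.fromarray((scaled * 255).astype(np.uint8)).save("tile.png")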
Web 2.0 Applications in Medicine: Trends and Topics in the Literature
2015-01-01
Background The World Wide Web has changed research habits, and these changes were further expanded when “Web 2.0” became popular in 2005. Bibliometrics is a helpful tool used for describing patterns of publication, for interpreting progression over time, and the geographical distribution of research in a given field. Few studies employing bibliometrics, however, have been carried out on the correlative nature of scientific literature and Web 2.0. Objective The aim of this bibliometric analysis was to provide an overview of Web 2.0 implications in the biomedical literature. The objectives were to assess the growth rate of literature, key journals, authors, and country contributions, and to evaluate whether the various Web 2.0 applications were expressed within this biomedical literature, and if so, how. Methods A specific query with keywords chosen to be representative of Web 2.0 applications was built for the PubMed database. Articles related to Web 2.0 were downloaded in Extensible Markup Language (XML) and were processed through developed hypertext preprocessor (PHP) scripts, then imported to Microsoft Excel 2010 for data processing. Results A total of 1347 articles were included in this study. The number of articles related to Web 2.0 has been increasing from 2002 to 2012 (average annual growth rate was 106.3% with a maximum of 333% in 2005). The United States was by far the predominant country for authors, with 514 articles (54.0%; 514/952). The second and third most productive countries were the United Kingdom and Australia, with 87 (9.1%; 87/952) and 44 articles (4.6%; 44/952), respectively. Distribution of number of articles per author showed that the core population of researchers working on Web 2.0 in the medical field could be estimated at approximately 75. In total, 614 journals were identified during this analysis. Using Bradford’s law, 27 core journals were identified, among which three (Studies in Health Technology and Informatics, Journal of Medical Internet Research, and Nucleic Acids Research) produced more than 35 articles related to Web 2.0 over the period studied. A total of 274 words in the field of Web 2.0 were found after manual sorting of the 15,878 words appearing in title and abstract fields for articles. Word frequency analysis reveals “blog” as the most recurrent, followed by “wiki”, “Web 2.0”, ”social media”, “Facebook”, “social networks”, “blogger”, “cloud computing”, “Twitter”, and “blogging”. All categories of Web 2.0 applications were found, indicating the successful integration of Web 2.0 into the biomedical field. Conclusions This study shows that the biomedical community is engaged in the use of Web 2.0 and confirms its high level of interest in these tools. Therefore, changes in the ways researchers use information seem to be far from over. PMID:25842175
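The word-frequency step of such a bibliometric analysis is straightforward to reproduce in Python; a minimal sketch with an illustrative single-word term list (the study's own list had 274 terms and was sorted manually):

    import re
    from collections import Counter

    # Illustrative term list, not the study's 274-word vocabulary.
    WEB2_TERMS = {"blog", "wiki", "facebook", "twitter", "blogging"}

    def term_frequencies(texts):
        counts = Counter()
        for text in texts:
            tokens = re.findall(r"[a-z0-9.]+", text.lower())
            counts.update(t for t in tokens if t in WEB2_TERMS)
        return counts

    print(term_frequencies(["A blog and wiki study", "Twitter in medicine"]))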
Kokol, Peter; Vošner, Helena Blažun
2018-01-01
The overall aim of the present study was to compare the coverage of existing research funding information for articles indexed in Scopus, Web of Science, and PubMed databases. The numbers of articles with funding information published in 2015 were identified in the three selected databases and compared using bibliometric analysis of a sample of twenty-eight prestigious medical journals. Frequency analysis of the number of articles with funding information showed statistically significant differences between Scopus, Web of Science, and PubMed databases. The largest proportion of articles with funding information was found in Web of Science (29.0%), followed by PubMed (14.6%) and Scopus (7.7%). The results show that coverage of funding information differs significantly among Scopus, Web of Science, and PubMed databases in a sample of the same medical journals. Moreover, we found that, currently, funding data in PubMed is more difficult to obtain and analyze compared with that in the other two databases.
Solar cells and modules from dendritic web silicon
NASA Technical Reports Server (NTRS)
Campbell, R. B.; Rohatgi, A.; Seman, E. J.; Davis, J. R.; Rai-Choudhury, P.; Gallagher, B. D.
1980-01-01
Some of the noteworthy features of the processes developed in the fabrication of solar cell modules are the handling of long lengths of web, the use of cost-effective dip coating of photoresist and antireflection coatings, selective electroplating of the grid pattern, and ultrasonic bonding of the cell interconnect. Data on the cells are obtained by means of dark I-V analysis and deep level transient spectroscopy. A histogram of over 100 dendritic web solar cells fabricated in a number of runs using different web crystals shows an average efficiency of over 13%, with some efficiencies running above 15%. Lower cell efficiency is generally associated with low minority carrier lifetime due to recombination centers sometimes present in the bulk silicon. A cost analysis of the process sequence using a 25 MW production line indicates a selling price of $0.75/peak watt in 1986. It is concluded that the efficiency of dendritic web cells approaches that of float zone silicon cells, reduced somewhat by the lower bulk lifetime of the former.
Neve, Melinda; Morgan, Philip J; Collins, Clare E
2011-10-12
There is a paucity of information in the scientific literature on the effectiveness of commercial weight loss programs, including Web-based programs. The potential of Web-based weight loss programs has been acknowledged, but their ability to achieve significant weight loss has not been proven. The objectives were to evaluate the weight change achieved within a large cohort of individuals enrolled in a commercial Web-based weight loss program for 12 or 52 weeks and to describe participants' program use in relation to weight change. Participants enrolled in an Australian commercial Web-based weight loss program from August 15, 2007, through May 31, 2008. Self-reported weekly weight records were used to determine weight change after 12- and 52-week subscriptions. The primary analysis estimated weight change using generalized linear mixed models (GLMMs) for all participants who subscribed for 12 weeks and also for those who subscribed for 52 weeks. A sensitivity analysis was conducted using the last observation carried forward (LOCF) method. Website use (ie, the number of days participants logged on, made food or exercise entries to the Web-based diary, or posted to the discussion forum) was described from program enrollment to 12 and 52 weeks, and differences in website use by percentage weight change category were tested using Kruskal-Wallis test for equality of populations. Participants (n = 9599) had a mean (standard deviation [SD]) age of 35.7 (9.5) years and were predominantly female (86% or 8279/9599) and obese (61% or 5866/9599). Results from the primary GLMM analysis including all enrollees found the mean percentage weight change was -6.2% among 12-week subscribers (n = 6943) and -6.9% among 52-week subscribers (n = 2656). Sensitivity analysis using LOCF revealed an average weight change of -3.0% and -3.5% after 12 and 52 weeks respectively. The use of all website features increased significantly (P < .01) as percentage weight change improved. The weight loss achieved by 12- and 52-week subscribers of a commercial Web-based weight loss program is likely to be in the range of the primary and sensitivity analysis results. While this suggests that, on average, clinically important weight loss may be achieved, further research is required to evaluate the efficacy of this commercial Web-based weight loss program prospectively using objective measures. The potential association between greater website use and increased weight loss also requires further evaluation, as strategies to improve participants' use of Web-based program features may be required.
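The LOCF sensitivity analysis mentioned above carries each participant's last reported weight forward over missing weeks. A minimal pandas sketch with made-up records:

    import pandas as pd

    # Made-up weekly records for two participants; missing weights are
    # carried forward within each participant (LOCF).
    records = pd.DataFrame({
        "id":     [1, 1, 1, 2, 2, 2],
        "week":   [1, 2, 3, 1, 2, 3],
        "weight": [90.0, None, None, 82.0, 80.5, None],
    })
    records["weight_locf"] = records.groupby("id")["weight"].ffill()
    print(records)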
Providing Multi-Page Data Extraction Services with XWRAPComposer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ling; Zhang, Jianjun; Han, Wei
2008-04-30
Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
SCit: web tools for protein side chain conformation analysis.
Gautier, R; Camproux, A-C; Tufféry, P
2004-07-01
SCit is a web server providing services for protein side chain conformation analysis and side chain positioning. Specific services use the dependence of the side chain conformations on the local backbone conformation, which is described using a structural alphabet that describes the conformation of fragments of four-residue length in a limited library of structural prototypes. Based on this concept, SCit uses sets of rotameric conformations dependent on the local backbone conformation of each protein for side chain positioning and the identification of side chains with unlikely conformations. The SCit web server is accessible at http://bioserv.rpbs.jussieu.fr/SCit.
Longitudinal analysis of meta-analysis literatures in the database of ISI Web of Science.
Zhu, Changtai; Jiang, Ting; Cao, Hao; Sun, Wenguang; Chen, Zhong; Liu, Jinming
2015-01-01
Meta-analyses are regarded as important evidence for scientific decision making. The database of ISI Web of Science indexes a great number of high-quality publications, including meta-analyses, so it is useful to understand the general characteristics of this literature. In the present study, we summarized and clarified some features of meta-analysis publications in the database of ISI Web of Science. We retrieved the meta-analysis literature in the database of ISI Web of Science, including SCI-E, SSCI, A&HCI, CPCI-S, CPCI-SSH, CCR-E, and IC. The annual growth rate, literature category, language, funding, index citation, agencies and countries/territories of the meta-analysis literature were analyzed, respectively. A total of 95,719 records, which account for 0.38% (99% CI: 0.38%-0.39%) of all literature, were found in the database. From 1997 to 2012, the annual growth rate of the meta-analysis literature was 18.18%. The literature spanned many categories, languages, funding sources, citations, publication agencies, and countries/territories. Interestingly, the citation frequencies of meta-analyses were significantly higher than those of other literature types such as multi-centre studies, randomized controlled trials, cohort studies, case-control studies, and case reports (P<0.0001). The increasing numbers, global influence and high citations reveal that meta-analysis has become more and more prominent in recent years. In the future, to promote the validity of meta-analyses, the CONSORT and PRISMA standards should be further popularized in the field of evidence-based medicine.
NASA Astrophysics Data System (ADS)
Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.
2007-12-01
NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable to both traditional research scientists and the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures of interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, and reusing emerging technologies, both NASA's Goddard Earth Science (GES) and Land Process (LP) DAACs have engaged in a web services pilot project. Through these projects both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems for ingestion into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages inter-disciplinary research while increasing data use through advanced technologies. This paper will concentrate on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.
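Consuming such an OGC WMS endpoint from Python is a short exercise with OWSLib; the service URL, layer name, and bounding box below are assumptions for illustration, not an actual DAAC service.

    from owslib.wms import WebMapService

    # Hypothetical endpoint, layer name and bounding box.
    wms = WebMapService("https://daac.example.nasa.gov/wms", version="1.1.1")
    img = wms.getmap(layers=["aerosol_optical_depth"],
                     srs="EPSG:4326", bbox=(-180, -90, 180, 90),
                     size=(1024, 512), format="image/png", transparent=True)
    with open("aod.png", "wb") as f:
        f.write(img.read())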
ERIC Educational Resources Information Center
Nickles, George
2007-01-01
This article describes using Work Action Analysis (WAA) as a method for identifying requirements for a web-based portal that supports a professional development program. WAA is a cognitive systems engineering method for modeling multi-agent systems to support design and evaluation. A WAA model of the professional development program of the…
WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database
2015-02-01
…Coastal Data Information Program (CDIP); and Part 4 for the Great Lakes Observing System/Coastal Forecasting System (GLOS/GLCFS). Using step-by-step instructions, this Part 5… Reference: Demirbilek, Z., L. Lin, and D. Wilson. 2014a. WaveNet: A web-based metocean data access, processing, and analysis tool; Part 3 - CDIP database.
ERIC Educational Resources Information Center
Cohen, Anat; Nachmias, Rafi
2009-01-01
This paper describes the implementation of a quantitative cost-effectiveness analyzer for Web-supported academic instruction that was developed at Tel Aviv University during a long-term study. The paper presents the cost-effectiveness analysis of the Tel Aviv University campus. Cost and benefit of 3,453 courses were analyzed, exemplifying campus-wide…
ERIC Educational Resources Information Center
Lee, Cynthia; Wong, Kelvin C. K.; Cheung, William K.; Lee, Fion S. L.
2009-01-01
The paper first describes a web-based essay critiquing system developed by the authors using latent semantic analysis (LSA), an automatic text analysis technique, to provide students with immediate feedback on content and organisation for revision whenever there is an internet connection. It reports on its effectiveness in enhancing adult EFL…
Wilderness on the internet: identifying wilderness information domains
Chuck Burgess
2000-01-01
Data collected from an online needs assessment revealed that Web site visitors with an interest in wilderness seek several different types of information. In order to gain further insight into the process of Web use for wilderness information, a follow-up analysis was conducted. This analysis was exploratory in nature, with the goal of identifying information domains...
ERIC Educational Resources Information Center
Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.
2015-01-01
Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…
Assessing the Quality of Academic Libraries on the Web: The Development and Testing of Criteria.
ERIC Educational Resources Information Center
Chao, Hungyune
2002-01-01
This study develops and tests an instrument useful for evaluating the quality of academic library Web sites. Discusses criteria for print materials and human-computer interfaces; user-based perspectives; the use of factor analysis; a survey of library experts; testing reliability through analysis of variance; and regression models. (Contains 53…
Information Tailoring Enhancements for Large Scale Social Data
2016-03-15
Work Performed within This Reporting Period: Implemented Temporal Analytics. …following tasks. Implemented temporal analysis algorithms for advanced analytics in Scraawl. We implemented our backend web service design for the… temporal analysis and we created a prototype GUI web service of the Scraawl analytics dashboard. Upgraded the Scraawl computational framework to increase…
Analysis of Java Client/Server and Web Programming Tools for Development of Educational Systems.
ERIC Educational Resources Information Center
Muldner, Tomasz
This paper provides an analysis of old and new programming tools for development of client/server programs, particularly World Wide Web-based programs. The focus is on development of educational systems that use interactive shared workspaces to provide portable and expandable solutions. The paper begins with a short description of relevant terms.…
Web-Based Tools for Modelling and Analysis of Multivariate Data: California Ozone Pollution Activity
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas
2011-01-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting…
JBrowse: A dynamic web platform for genome visualization and analysis
Buels, Robert; Yao, Eric; Diesh, Colin M.; ...
2016-04-12
Background: JBrowse is a fast and full-featured genome browser built with JavaScript and HTML5. It is easily embedded into websites or apps but can also be served as a standalone web page. Results: Overall improvements to speed and scalability are accompanied by specific enhancements that support complex interactive queries on large track sets. Analysis functions can readily be added using the plugin framework; most visual aspects of tracks can also be customized, along with clicks, mouseovers, menus, and popup boxes. JBrowse can also be used to browse local annotation files offline and to generate high-resolution figures for publication. Conclusions: JBrowse is a mature web application suitable for genome visualization and analysis.
Evaluation of Web-Based Ostomy Patient Support Resources.
Pittman, Joyce; Nichols, Thom; Rawl, Susan M
To evaluate currently available, no-cost, Web-based patient support resources designed for those who have recently undergone ostomy surgery. Descriptive, correlational study using telephone survey. The sample comprised 202 adults who had ostomy surgery within the previous 24 months in 1 of 5 hospitals within a large healthcare organization in the Midwestern United States. Two of the hospitals were academic teaching hospitals, and 3 were community hospitals. The study was divided into 2 phases: (1) gap analysis of 4 Web sites (labeled A-D) based on specific criteria; and (2) telephone survey of individuals with an ostomy. In phase 1, a comprehensive checklist based on best practice standards was developed to conduct the gap analysis. In phase 2, data were collected from 202 participants by trained interviewers via 1-time structured telephone interviews that required approximately 30 minutes to complete. Descriptive analyses were performed, along with correlational analysis of relationships among Web site usage, acceptability and satisfaction, demographic characteristics, and medical history. Gap analysis revealed that Web site D, managed by a patient advocacy group, received the highest total content score of 155/176 (88%) and the highest usability score of 31.7/35 (91%). Two hundred two participants completed the telephone interview, with 96 (48%) reporting that they used the Internet as a source of information. Sixty participants (30%) reported that friends or family member had searched the Internet for ostomy information on their behalf, and 148 (75%) indicated they were confident they could get information about ostomies on the Internet. Of the 90 participants (45%) who reported using the Internet to locate ostomy information, 73 (82%) found the information on the Web easy to understand, 28 (31%) reported being frustrated during their search for information, 24 (27%) indicated it took a lot of effort to get the information they needed, and 39 (43%) were concerned about the quality of the information. Web-based patient support resources may be a cost-effective approach to providing essential ostomy information, self-management training, and support. Additional research is needed to examine the efficacy of Web-based patient support interventions to improve ostomy self-management knowledge, skills, and outcomes for patients.
Quality analysis of patient information about knee arthroscopy on the World Wide Web.
Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan
2007-05-01
This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to be providing reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining more relevant information rather than using a single search engine. Our study shows the difficulties encountered by patients in obtaining information regarding knee arthroscopy and highlights the duty of knee surgeons in helping patients to identify the relevant and authentic information in the most efficient manner from the World Wide Web. This study highlights the importance of the role of orthopaedic surgeons in helping their patients to identify the best possible information on the World Wide Web.
Web Based Personal Nutrition Management Tool
NASA Astrophysics Data System (ADS)
Bozkurt, Selen; Zayim, Neşe; Gülkesen, Kemal Hakan; Samur, Mehmet Kemal
The Internet is being used increasingly as a resource for accessing health-related information because of its several advantages. Therefore, Internet tailoring has become quite popular in health education and personal health management. Today, there are many web-based health programs designed for individuals. Among these, nutrition and weight management programs are popular because obesity has become a heavy burden for populations worldwide. In this study, we designed a web-based personal nutrition education and management tool, The Nutrition Web Portal, in order to enhance patients' nutrition knowledge and promote behavioral change against obesity. The present paper reports the analysis, design and development processes of The Nutrition Web Portal.
Semantic Web and Contextual Information: Semantic Network Analysis of Online Journalistic Texts
NASA Astrophysics Data System (ADS)
Lim, Yon Soo
This study examines why contextual information is important for actualizing the idea of the semantic web, based on a case study of a socio-political issue in South Korea. For this study, semantic network analyses were conducted on 62 English-language blog posts and 101 news stories on the web. The results indicated differences in meaning structures between blog posts and professional journalism, as well as between conservative and progressive journalism. From these results, this study ascertains the empirical validity of current concerns about the practical application of the new web technology, and discusses how the semantic web should be developed.
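Semantic network analysis of this kind typically builds a word co-occurrence graph per corpus and compares the resulting structures. A minimal Python sketch with networkx, using toy sentences in place of the blog and news texts:

    import itertools
    import networkx as nx

    # Toy tokenized sentences; nodes are words, edge weights count
    # co-occurrence within a sentence.
    sentences = [
        ["beef", "import", "protest", "government"],
        ["beef", "safety", "government"],
        ["protest", "candlelight", "vigil"],
    ]
    G = nx.Graph()
    for words in sentences:
        for a, b in itertools.combinations(sorted(set(words)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)
    print(nx.degree_centrality(G))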
ERIC Educational Resources Information Center
Kimmons, Royce
2017-01-01
This study seeks to evaluate the basic Priority 1 web accessibility of all college and university websites in the US (n = 3141). Utilizing web scraping and automated content analysis, the study establishes that even in the case of high-priority, simple-to-address accessibility requirements, colleges and universities generally fail to make their…
What Teens Want to Know: Sexual Health Questions Submitted to a Teen Web Site
ERIC Educational Resources Information Center
Vickberg, Suzanne M. Johnson; Kohn, Julia E.; Franco, Lydia M.; Criniti, Shannon
2003-01-01
In 1999 Planned Parenthood[R] Federation of America (PPFA[R]) launched teenwire.com[SM], a Web site for young people. This study was designed to determine teens' reproductive health information needs. Selected for analysis were 1,219 submissions to the Ask the Experts section of the Web site. Each submission was independently coded by three of the…
Design of a web crawler oriented to network public opinion data acquisition
NASA Astrophysics Data System (ADS)
Lu, Shan; Ma, Hui; Gao, Ying
2015-12-01
The paper describes the meaning of network public opinion and data acquisition techniques for network public opinion research. We designed and implemented a web crawler oriented to network public opinion data acquisition. After analyzing the shortcomings of generic web crawlers, we improved the underlying framework with asynchronous sockets, a DNS cache, and download queues to increase collection speed.
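The three speed-ups named above are generic enough to sketch. The following minimal Python example (not the authors' implementation; it assumes the third-party aiohttp package, and the seed URL is illustrative) combines asynchronous I/O in place of blocking sockets, a host-level DNS cache, and a shared download queue served by several workers.

    # Hedged sketch: async fetching, a DNS cache, and a download queue.
    import asyncio
    import socket
    import urllib.parse

    import aiohttp  # third-party: pip install aiohttp

    DNS_CACHE = {}

    async def resolve(host):
        """Resolve each hostname once and cache the result."""
        if host not in DNS_CACHE:
            loop = asyncio.get_running_loop()
            DNS_CACHE[host] = await loop.run_in_executor(None, socket.gethostbyname, host)
        return DNS_CACHE[host]

    async def worker(name, session, queue):
        while True:
            url = await queue.get()
            try:
                await resolve(urllib.parse.urlparse(url).hostname)  # warm the DNS cache
                async with session.get(url) as resp:
                    text = await resp.text()
                    print(f"{name}: fetched {url} ({len(text)} chars)")
                    # A real crawler would parse links here and enqueue unseen URLs.
            except Exception as exc:
                print(f"{name}: failed {url}: {exc}")
            finally:
                queue.task_done()

    async def crawl(seeds, n_workers=4):
        queue = asyncio.Queue()
        for url in seeds:
            queue.put_nowait(url)
        async with aiohttp.ClientSession() as session:
            tasks = [asyncio.create_task(worker(f"w{i}", session, queue))
                     for i in range(n_workers)]
            await queue.join()              # wait until every queued URL is processed
            for t in tasks:
                t.cancel()

    if __name__ == "__main__":
        asyncio.run(crawl(["https://example.com"]))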
ERIC Educational Resources Information Center
Huang, Wen-Hao David; Hood, Denice Ward; Yoo, Sun Joo
2014-01-01
Web 2.0 applications have been widely applied for teaching and learning in US higher education in recent years. Their potential impact on learning motivation and learner performance, however, has not attracted substantial research efforts. To better understand how Web 2.0 applications might impact learners' motivation in higher education…
Web 2.0 Technologies for Effective Knowledge Management in Organizations: A Qualitative Analysis
ERIC Educational Resources Information Center
Nath, Anupam Kumar
2012-01-01
A new generation of Internet-based collaborative tools, commonly known as Web 2.0, has increased in popularity, availability, and power in the last few years (Kane and Fichman, 2009). Web 2.0 is a set of Internet-based applications that harness network effects by facilitating collaborative and participative computing (O'Reilly, 2006).…
ERIC Educational Resources Information Center
Wang, Tzu-Hua; Wang, Wei-Lung; Wang, Kuo-Hua; Huang, Shih-Chieh
The study attempted to adapt two web tools, FFS system (Frontpage Feedback System) and WATA system (Web-based Assessment and Test Analysis System), to construct a Hi-FAME (High Feedback-Assessment-Multimedia-Environment) Model in WBI (Web-based Instruction) to facilitate pre-service teacher training. Participants were 30 junior pre-service…
What a User Wants: Redesigning a Library's Web Site Based on a Card-Sort Analysis
ERIC Educational Resources Information Center
Robbins, Laura Pope; Esposito, Lisa; Kretz, Chris; Aloi, Michael
2007-01-01
Web site usability concerns anyone with a Web site to maintain. Libraries, however, are often the biggest offenders in terms of usability. In our efforts to provide users with everything they need for research, we often overwhelm them with sites that are confusing in structure, difficult to navigate, and weighed down with jargon. Dowling College…
Focused Crawling of the Deep Web Using Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Liu, L; Critchlow, T
2004-06-21
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Optimizing Crawler4j using MapReduce Programming Model
NASA Astrophysics Data System (ADS)
Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.
2017-06-01
The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information that users can query. Secondly, they are used for web archiving, where web pages are stored for later analysis. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler can be increased by exploiting modern parallel processing technologies. In order to address the parallelism and throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages it visits. Coupling Crawler4j with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput. Hence the proposed approach carves out a new methodology for optimizing web crawling by achieving significant performance gains.
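As a rough illustration of how MapReduce parallelizes the processing of crawl output, here is a hedged Python sketch in Hadoop Streaming style (the paper itself optimizes Crawler4j in Java; the host-counting job below is an invented stand-in for its processing step). The mapper emits one (host, 1) pair per crawled URL; Hadoop sorts mapper output by key, and the reducer sums the counts.

    # mapper.py -- emits "host<TAB>1" for each crawled URL read from stdin.
    import sys
    import urllib.parse

    for line in sys.stdin:
        url = line.strip()
        if url:
            host = urllib.parse.urlparse(url).hostname or "unknown"
            print(f"{host}\t1")

    # reducer.py -- sums counts per host (Hadoop delivers input sorted by key).
    import sys

    current, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")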
e-Ana and e-Mia: A Content Analysis of Pro–Eating Disorder Web Sites
Schenk, Summer; Wilson, Jenny L.; Peebles, Rebecka
2010-01-01
Objectives. The Internet offers Web sites that describe, endorse, and support eating disorders. We examined the features of pro–eating disorder Web sites and the messages to which users may be exposed. Methods. We conducted a systematic content analysis of 180 active Web sites, noting site logistics, site accessories, “thinspiration” material (images and prose intended to inspire weight loss), tips and tricks, recovery, themes, and perceived harm. Results. Practically all (91%) of the Web sites were open to the public, and most (79%) had interactive features. A large majority (84%) offered pro-anorexia content, and 64% provided pro-bulimia content. Few sites focused on eating disorders as a lifestyle choice. Thinspiration material appeared on 85% of the sites, and 83% provided overt suggestions on how to engage in eating-disordered behaviors. Thirty-eight percent of the sites included recovery-oriented information or links. Common themes were success, control, perfection, and solidarity. Conclusions. Pro–eating disorder Web sites present graphic material to encourage, support, and motivate site users to continue their efforts with anorexia and bulimia. Continued monitoring will offer a valuable foundation to build a better understanding of the effects of these sites on their users. PMID:20558807
PIQMIe: a web server for semi-quantitative proteomics data management and analysis
Kuzniar, Arnold; Kanaar, Roland
2014-01-01
We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. PMID:24861615
PIQMIe: a web server for semi-quantitative proteomics data management and analysis.
Kuzniar, Arnold; Kanaar, Roland
2014-07-01
We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
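Both PIQMIe entries above note programmatic access through a RESTful web service. The following minimal Python sketch shows what such access typically looks like; the /api/... endpoint path and the JSON field names are hypothetical placeholders for illustration, not the documented PIQMIe API.

    # Hedged sketch of REST access to a proteomics web service such as PIQMIe.
    # The endpoint path and JSON fields below are assumptions, not the real API.
    import requests

    BASE = "http://www.bioinformatics.nl/piqmie"  # one of the published mirrors

    def fetch_proteins(dataset_id):
        # Hypothetical endpoint: GET <base>/api/datasets/<id>/proteins
        resp = requests.get(f"{BASE}/api/datasets/{dataset_id}/proteins", timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        for protein in fetch_proteins("demo")[:10]:
            print(protein.get("accession"), protein.get("ratio"))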
RSAT 2015: Regulatory Sequence Analysis Tools
Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques
2015-01-01
RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632
Web-based analysis and publication of flow cytometry experiments.
Kotecha, Nikesh; Krutzik, Peter O; Irish, Jonathan M
2010-07-01
Cytobank is a Web-based application for storage, analysis, and sharing of flow cytometry experiments. Researchers use a Web browser to log in and use a wide range of tools developed for basic and advanced flow cytometry. In addition to providing access to standard cytometry tools from any computer, Cytobank creates a platform and community for developing new analysis and publication tools. Figure layouts created on Cytobank are designed to allow transparent access to the underlying experiment annotation and data processing steps. Since all flow cytometry files and analysis data are stored on a central server, experiments and figures can be viewed or edited by anyone with the proper permission, from any computer with Internet access. Once a primary researcher has performed the initial analysis of the data, collaborators can engage in experiment analysis and make their own figure layouts using the gated, compensated experiment files. Cytobank is available to the scientific community at http://www.cytobank.org. (c) 2010 by John Wiley & Sons, Inc.
Web-Based Analysis and Publication of Flow Cytometry Experiments
Kotecha, Nikesh; Krutzik, Peter O.; Irish, Jonathan M.
2014-01-01
Cytobank is a web-based application for storage, analysis, and sharing of flow cytometry experiments. Researchers use a web browser to log in and use a wide range of tools developed for basic and advanced flow cytometry. In addition to providing access to standard cytometry tools from any computer, Cytobank creates a platform and community for developing new analysis and publication tools. Figure layouts created on Cytobank are designed to allow transparent access to the underlying experiment annotation and data processing steps. Since all flow cytometry files and analysis data are stored on a central server, experiments and figures can be viewed or edited by anyone with the proper permissions from any computer with Internet access. Once a primary researcher has performed the initial analysis of the data, collaborators can engage in experiment analysis and make their own figure layouts using the gated, compensated experiment files. Cytobank is available to the scientific community at www.cytobank.org PMID:20578106
Capturing Trust in Social Web Applications
NASA Astrophysics Data System (ADS)
O'Donovan, John
The Social Web constitutes a shift in information flow from the traditional Web. Previously, content was provided by the owners of a website, for consumption by the end-user. Nowadays, these websites are being replaced by Social Web applications, which are frameworks for the publication of user-provided content. Traditionally, Web content could be 'trusted' to some extent based on the site it originated from. Algorithms such as Google's PageRank were (and still are) used to compute the importance of a website, based on analysis of the underlying link topology. In the Social Web, analysis of link topology merely tells us about the importance of the information framework which hosts the content. Consumers of information still need to know about the importance/reliability of the content they are reading, and therefore about the reliability of the producers of that content. Research into trust and reputation of the producers of information in the Social Web is still very much in its infancy. Every day, people are forced to make trusting decisions about strangers on the Web based on a very limited amount of information. For example, purchasing a product from an eBay seller with a 'reputation' of 99%, downloading a file from a peer-to-peer application such as BitTorrent, or allowing Amazon.com to tell you what products you will like. Even something as simple as reading comments on a weblog requires the consumer to make a trusting decision about the quality of that information. In all of these example cases, and indeed throughout the Social Web, there is a pressing demand for increased information upon which we can make trusting decisions. This chapter examines the diversity of sources from which trust information can be harnessed within Social Web applications and discusses a high-level classification of those sources. Three different techniques for harnessing and using trust from a range of sources are presented. These techniques are deployed in two sample Social Web applications: a recommender system and an online auction. In all cases, it is shown that harnessing an increased amount of information upon which to make trust decisions greatly enhances the user experience with the Social Web application.
WEBnm@ v2.0: Web server and services for comparing protein flexibility.
Tiwari, Sandhya P; Fuglebakk, Edvin; Hollup, Siv M; Skjærven, Lars; Cragnolini, Tristan; Grindhaug, Svenn H; Tekle, Kidane M; Reuter, Nathalie
2014-12-30
Normal mode analysis (NMA) using elastic network models is a reliable and cost-effective computational method to characterise protein flexibility and, by extension, protein dynamics. Further insight into the dynamics-function relationship can be gained by comparing protein motions between protein homologs and functional classifications. This can be achieved by comparing normal modes obtained from sets of evolutionarily related proteins. We have developed an automated tool for comparative NMA of a set of pre-aligned protein structures. The user can submit a sequence alignment in the FASTA format and the corresponding coordinate files in the Protein Data Bank (PDB) format. The computed normalised squared atomic fluctuations and atomic deformation energies of the submitted structures can be easily compared on graphs provided by the web user interface. The web server provides pairwise comparison of the dynamics of all proteins included in the submitted set using two measures: the Root Mean Squared Inner Product and the Bhattacharyya Coefficient. The Comparative Analysis has been implemented on our web server for NMA, WEBnm@, which also provides recently upgraded functionality for NMA of single protein structures. This includes new visualisations of protein motion, visualisation of inter-residue correlations and the analysis of conformational change using the overlap analysis. In addition, programmatic access to WEBnm@ is now available through a SOAP-based web service. WEBnm@ is available at http://apps.cbu.uib.no/webnma. WEBnm@ v2.0 is an online tool offering unique capability for comparative NMA on multiple protein structures. Along with a convenient web interface, powerful computing resources, and several methods for mode analyses, WEBnm@ facilitates the assessment of protein flexibility within protein families and superfamilies. These analyses can give a good view of how the structures move and how the flexibility is conserved over the different structures.
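Of the two comparison measures the server reports, the Root Mean Squared Inner Product (RMSIP) is simple enough to sketch. In the Python/numpy snippet below, random orthonormal matrices stand in for the 3N x k normal-mode eigenvectors of two proteins; the Bhattacharyya coefficient comparison is omitted.

    # Hedged sketch of RMSIP between two normal-mode sets (stand-in data).
    import numpy as np

    def rmsip(modes_a, modes_b):
        """Columns of modes_a and modes_b are orthonormal mode vectors (3N x k)."""
        overlaps = modes_a.T @ modes_b            # k x k matrix of inner products
        return float(np.sqrt((overlaps ** 2).sum() / modes_a.shape[1]))

    rng = np.random.default_rng(0)
    q, _ = np.linalg.qr(rng.normal(size=(300, 10)))  # stand-in mode set A
    r, _ = np.linalg.qr(rng.normal(size=(300, 10)))  # stand-in mode set B
    print(rmsip(q, q))  # identical sets -> 1.0
    print(rmsip(q, r))  # unrelated sets -> well below 1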
Thurzo, A; Stanko, P; Urbanova, W; Lysy, J; Suchancova, B; Makovnik, M; Javorka, V
2010-01-01
The authors evaluated the effect of the Web 2.0 environment on dental education and estimated the difference in retention of knowledge of cephalometric analysis in orthodontics between conventional education and off-line e-learning. Five years of experience with a complex web-based e-learning system allowed evaluation by retrospective analysis and an on-line questionnaire. The results revealed current trends in the on-line behavior of students based on innovative Web 2.0 technologies such as Ajax, and confirmed an increasing number of resources with a rising frequency of e-learning materials. The study confirmed that e-learning of the same subject is more efficient in immediate examination after the lecture, with even better results after 12 and 24 months against the control group (Tab. 3, Fig. 1, Ref. 26).
NASA Astrophysics Data System (ADS)
Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.
2014-02-01
As an important sub-topic of the construction of a public information platform for carbon emission data from natural processes, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. In view of the data features of coalfield spontaneous combustion carbon emissions (a wide range of rich and complex data) and their geospatial characteristics, the data were divided into attribute data and spatial data. Based on a full analysis of the data, a detailed design of an Oracle database was completed and the data were stored in it. Through Silverlight rich-client technology and the extension of WCF services, web-based dynamic query, retrieval, statistics, and analysis of the attribute data were achieved. For spatial data, ArcGIS Server and the Silverlight-based API were used to invoke map services, GP services, image services, and other services published by the GIS server in the background, implementing the display and analysis of coalfield spontaneous combustion remote sensing imagery and web map data as well as thematic map production. The study found that Silverlight rich-client technology, together with an object-oriented framework of WCF services, can be used to construct a WebGIS system efficiently. Combined with the ArcGIS Silverlight API to achieve interactive queries of attribute and spatial data on coalfield spontaneous combustion, this can greatly improve the performance of the WebGIS system. At the same time, it provides a strong guarantee for the construction of a public information platform for China's carbon emission data.
A dynamical classification of the cosmic web
NASA Astrophysics Data System (ADS)
Forero-Romero, J. E.; Hoffman, Y.; Gottlöber, S.; Klypin, A.; Yepes, G.
2009-07-01
In this paper, we propose a new dynamical classification of the cosmic web. Each point in space is classified in one of four possible web types: voids, sheets, filaments and knots. The classification is based on the evaluation of the deformation tensor (i.e. the Hessian of the gravitational potential) on a grid: each grid point is classified by counting the number of eigenvalues above a certain threshold, λth, where zero, one, two or three such eigenvalues correspond to a void, sheet, filament or knot grid point. The collection of neighbouring grid points, friends of friends, of the same web type constitutes voids, sheets, filaments and knots as extended web objects. A simple dynamical consideration of the emergence of the web suggests that the threshold should not be null, as in previous implementations of the algorithm. A detailed dynamical analysis would have found different threshold values for the collapse of sheets, filaments and knots. Short of such an analysis, a phenomenological approach has been opted for, looking for a single threshold to be determined by analysing numerical simulations. Our cosmic web classification has been applied and tested against a suite of large (dark matter only) cosmological N-body simulations. In particular, the dependence of the volume and mass filling fractions on λth and on the resolution has been calculated for the four web types. We also study the percolation properties of voids and filaments. Our main findings are as follows. (i) Already at λth = 0.1 the resulting web classification reproduces the visual impression of the cosmic web. (ii) Between 0.2 ≲ λth ≲ 0.4, a system of percolated voids coexists with a net of interconnected filaments. This suggests a reasonable choice for λth as the parameter that defines the cosmic web. (iii) The dynamical nature of the suggested classification provides a robust framework for incorporating environmental information into galaxy formation models, and in particular to semi-analytical models.
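The grid-point classification rule described above is compact enough to sketch. The following Python/numpy snippet counts, at every grid point, the eigenvalues of the deformation tensor that exceed λth and maps the count 0-3 to void, sheet, filament, or knot; the random symmetric tensors are stand-ins for a field computed from an N-body simulation, and the friends-of-friends grouping into extended objects is not shown.

    # Minimal sketch of the eigenvalue-counting web classification (stand-in data).
    import numpy as np

    WEB_TYPES = {0: "void", 1: "sheet", 2: "filament", 3: "knot"}

    def classify(tensor_field, lambda_th=0.1):
        """tensor_field: (nx, ny, nz, 3, 3) symmetric tensors -> integer labels."""
        eigvals = np.linalg.eigvalsh(tensor_field)   # (nx, ny, nz, 3), ascending
        return np.sum(eigvals > lambda_th, axis=-1)  # count eigenvalues above threshold

    rng = np.random.default_rng(1)
    a = rng.normal(size=(8, 8, 8, 3, 3))
    field = 0.5 * (a + np.swapaxes(a, -1, -2))       # symmetrize the stand-in field
    labels = classify(field, lambda_th=0.1)
    for t, name in WEB_TYPES.items():
        print(name, float(np.mean(labels == t)))     # volume filling fractions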
Terluin, Berend; Brouwers, Evelien P M; Marchand, Miquelle A G; de Vet, Henrica C W
2018-05-01
Many paper-and-pencil (P&P) questionnaires have been migrated to electronic platforms. Differential item and test functioning (DIF and DTF) analysis constitutes a superior research design to assess measurement equivalence across modes of administration. The purpose of this study was to demonstrate an item response theory (IRT)-based DIF and DTF analysis to assess the measurement equivalence of a Web-based version and the original P&P format of the Four-Dimensional Symptom Questionnaire (4DSQ), measuring distress, depression, anxiety, and somatization. The P&P group (n = 2031) and the Web group (n = 958) consisted of primary care psychology clients. Unidimensionality and local independence of the 4DSQ scales were examined using IRT and Yen's Q3. Bifactor modeling was used to assess the scales' essential unidimensionality. Measurement equivalence was assessed using IRT-based DIF analysis using a 3-stage approach: linking on the latent mean and variance, selection of anchor items, and DIF testing using the Wald test. DTF was evaluated by comparing expected scale scores as a function of the latent trait. The 4DSQ scales proved to be essentially unidimensional in both modalities. Five items, belonging to the distress and somatization scales, displayed small amounts of DIF. DTF analysis revealed that the impact of DIF on the scale level was negligible. IRT-based DIF and DTF analysis is demonstrated as a way to assess the equivalence of Web-based and P&P questionnaire modalities. Data obtained with the Web-based 4DSQ are equivalent to data obtained with the P&P version.
Bayesian analysis of the dynamic cosmic web in the SDSS galaxy survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, Florent; Wandelt, Benjamin; Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: jasche@iap.fr, E-mail: wandelt@iap.fr
Recent application of the Bayesian algorithm BORG to the Sloan Digital Sky Survey (SDSS) main sample galaxies resulted in the physical inference of the formation history of the observed large-scale structure from its origin to the present epoch. In this work, we use these inferences as inputs for a detailed probabilistic cosmic web-type analysis. To do so, we generate a large set of data-constrained realizations of the large-scale structure using a fast, fully non-linear gravitational model. We then perform a dynamic classification of the cosmic web into four distinct components (voids, sheets, filaments, and clusters) on the basis of the tidal field. Our inference framework automatically and self-consistently propagates typical observational uncertainties to web-type classification. As a result, this study produces accurate cosmographic classification of large-scale structure elements in the SDSS volume. By also providing the history of these structure maps, the approach allows an analysis of the origin and growth of the early traces of the cosmic web present in the initial density field and of the evolution of global quantities such as the volume and mass filling fractions of different structures. For the problem of web-type classification, the results described in this work constitute the first connection between theory and observations at non-linear scales including a physical model of structure formation and the demonstrated capability of uncertainty quantification. A connection between cosmology and information theory using real data also naturally emerges from our probabilistic approach. Our results constitute quantitative chrono-cosmography of the complex web-like patterns underlying the observed galaxy distribution.
Mining Social Media and Web Searches For Disease Detection
Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole
2013-01-01
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate. PMID:25170475
Web-Based Virtual Laboratory for Food Analysis Course
NASA Astrophysics Data System (ADS)
Handayani, M. N.; Khoerunnisa, I.; Sugiarti, Y.
2018-02-01
Implementation of learning in the food analysis course of the Program Study of Agro-industrial Technology Education faced problems. These problems include the availability of space and tools in the laboratory, which is not comparable with the number of students, as well as a lack of interactive learning tools. On the other hand, the information technology literacy of students is quite high and the internet network is easily accessible on campus. This is a challenge as well as an opportunity for the development of learning media that can help optimize learning in the laboratory. This study aims to develop a web-based virtual laboratory as an alternative learning medium for the food analysis course. This research is R & D (research and development) referring to the Borg & Gall model. Expert assessment of the developed web-based virtual laboratory, in terms of software engineering, visual communication, material relevance, usefulness, and language, showed it to be feasible as a learning medium. The results of the limited-scale and wide-scale tests show that students strongly agree with the development of the web-based virtual laboratory, and their response to it was positive. Suggestions from students provide further opportunities for improvement of the web-based virtual laboratory and should be considered in further research.
Tell, Johanna; Olander, Ewy; Anderberg, Peter; Berglund, Johan Sanmartin
2018-02-01
The aim of this study was to investigate child health-care coordinators' experiences of being a facilitator for the implementation of a new national child health-care programme in the form of a web-based national guide. The study was based on eight remote, online focus groups, using Skype for Business. A qualitative content analysis was performed. The analysis generated three categories: adapt to a local context, transition challenges and led by strong incentives. There were eight subcategories. In the latent analysis, the theme 'Being a facilitator: a complex role' was formed to express the child health-care coordinators' experiences. Facilitating a national guideline or decision support in a local context is a complex task that requires an advocating and mediating role. For successful implementation, guidelines and decision support, such as a web-based guide and the new child health-care programme, must match professional consensus and needs and be seen as relevant by all. Participation in the development and a strong bottom-up approach were important for making the web-based guide and the programme relevant to those they are intended to serve, and for successful implementation. The study contributes valuable knowledge for planning the implementation of a national web-based decision support and policy programme in a local health-care context.
Mining social media and web searches for disease detection.
Yang, Y Tony; Horneffer, Michael; DiLisio, Nicole
2013-04-28
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.
MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.
Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G
2012-12-07
MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads from various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation reads (including SOLiD and Illumina). To the authors' knowledge, it is the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using coverage depth resulting from the alignment of the reads to the catalogue of reference sequences which are built into the pipeline and contain prevalent microbial genomes and genes of human gut microbiota. The obtained metagenomic composition vectors are processed by the statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, allowing their comparative analysis together with user samples, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL, Python, with all major browsers supported.
ERIC Educational Resources Information Center
Joo, Soohyung; Kipp, Margaret E. I.
2015-01-01
Introduction: This study examines the structure of Web space in the field of library and information science using multivariate analysis of social tags from the Website, Delicious.com. A few studies have examined mathematical modelling of tags, mainly examining tagging in terms of tripartite graphs, pattern tracing and descriptive statistics. This…
ERIC Educational Resources Information Center
Polat, Elif; Adiguzel, Tufan; Akgun, Ozcan Erkan
2012-01-01
Because there is, currently, no education system for primary school students in grades 1-3 who have specific learning disabilities in Turkey and because such students do not receive sufficient support from face-to-face counseling, a needs analysis was conducted in order to prepare an adaptive, web-assisted learning system according to variables…
Development of a Web-Enabled Informatics Platform for Manipulation of Gene Expression Data
2004-12-01
genomic platforms such as metabolomics and proteomics, and to federated databases for knowledge management. A successful SBIR Phase I completed ... measurements that require sophisticated bioinformatic platforms for data archival, management, integration, and analysis if researchers are to derive ... web-enabled bioinformatic platform consisting of a Laboratory Information Management System (LIMS), an Analysis Information Management System (AIMS) ...
The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data
NASA Technical Reports Server (NTRS)
Tesoriero, Roseanne; Zelkowitz, Marvin
1997-01-01
Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.
Quality evaluation on an e-learning system in continuing professional education of nurses.
Lin, I-Chun; Chien, Yu-Mei; Chang, I-Chiu
2006-01-01
Maintaining high quality in Web-based learning is a powerful means of increasing the overall efficiency and effectiveness of distance learning. Many studies have evaluated Web-based learning but seldom evaluate from the information systems (IS) perspective. This study applied the famous IS Success model in measuring the quality of a Web-based learning system using a Web-based questionnaire for data collection. One hundred and fifty four nurses participated in the survey. Based on confirmatory factor analysis, the variables of the research model fit for measuring the quality of a Web-based learning system. As Web-based education continues to grow worldwide, the results of this study may assist the system adopter (hospital executives), the learner (nurses), and the system designers in making reasonable and informed judgments with regard to the quality of Web-based learning system in continuing professional education.
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
Presence of pro-tobacco messages on the Web.
Hong, Traci; Cody, Michael J
2002-01-01
Ignored in the finalized Master Settlement Agreement (National Association of Attorneys General, 1998), the unmonitored, unregulated World Wide Web (Web) can operate as a major vehicle for delivering pro-tobacco messages, images, and products to millions of young consumers. A content analysis of 318 randomly sampled pro-tobacco Web sites revealed that tobacco has a pervasive presence on the Web, especially on e-commerce sites and sites featuring hobbies, recreation, and "fetishes." Products can be ordered online on nearly 50% of the sites, but only 23% of the sites included underage verification. Further, only 11% of these sites contain health warnings. Instead, pro-tobacco sites frequently associate smoking with "glamorous" and "alternative" lifestyles, and with images of young males and young (thin, attractive) females. Finally, many of the Web sites offered interactive site features that are potentially appealing to young Web users. Recommendations for future research and counterstrategies are discussed.
Web-based GIS for spatial pattern detection: application to malaria incidence in Vietnam.
Bui, Thanh Quang; Pham, Hai Minh
2016-01-01
There is great concern about how to build an interoperable health information system linking public health and health information technology within the development of a public information and health surveillance programme. Technically, some major issues remain regarding health data visualization, spatial processing of health data, health information dissemination, data sharing and the access of local communities to health information. In combination with GIS, we propose a technical framework for web-based health data visualization and spatial analysis. Data were collected from open map-servers and geocoded with the Open Data Kit package and data geocoding tools. The web-based system is designed on open-source frameworks and libraries. The system provides a web-based analyst tool for pattern detection through three spatial tests: nearest neighbour, K function, and spatial autocorrelation. The result is a web-based GIS through which end users can detect disease patterns by selecting the area and spatial test parameters, and contribute to managers and decision makers. The end users can be health practitioners, educators, local communities, health sector authorities and decision makers. This web-based system allows for the improvement of health-related services to public sector users as well as citizens in a secure manner. The combination of spatial statistics and web-based GIS can be a solution that helps empower health practitioners in direct and specific intersectional actions, thus providing for better analysis, control and decision-making.
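Of the three spatial tests, global spatial autocorrelation is the most compact to illustrate. The Python/numpy sketch below computes Moran's I with inverse-distance weights; the district coordinates and incidence counts are simulated stand-ins, not the Vietnamese malaria data.

    # Hedged sketch of global Moran's I on simulated incidence data.
    import numpy as np

    def morans_i(values, weights):
        """values: (n,) observations; weights: (n, n) spatial weights, zero diagonal."""
        n = values.size
        z = values - values.mean()
        return (n / weights.sum()) * np.sum(weights * np.outer(z, z)) / np.sum(z ** 2)

    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 100, size=(30, 2))           # district centroids (km)
    incidence = rng.poisson(5, size=30).astype(float)    # cases per district
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    w = np.where(d > 0, 1.0 / d, 0.0)                    # inverse-distance weights
    print(morans_i(incidence, w))                        # near 0 for random data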
AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.
Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A
2017-07-03
AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics
Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza
2017-01-01
AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703
Rational analyses of information foraging on the web.
Pirolli, Peter
2005-05-06
This article describes rational analyses and cognitive models of Web users developed within information foraging theory. This is done by following the rational analysis methodology of (a) characterizing the problems posed by the environment, (b) developing rational analyses of behavioral solutions to those problems, and (c) developing cognitive models that approach the realization of those solutions. Navigation choice is modeled as a random utility model that uses spreading activation mechanisms that link proximal cues (information scent) that occur in Web browsers to internal user goals. Web-site leaving is modeled as an ongoing assessment by the Web user of the expected benefits of continuing at a Web site as opposed to going elsewhere. These cost-benefit assessments are also based on spreading activation models of information scent. Evaluations include a computational model of Web user behavior called Scent-Based Navigation and Information Foraging in the ACT Architecture, and the Law of Surfing, which characterizes the empirical distribution of the length of paths of visitors at a Web site. 2005 Lawrence Erlbaum Associates, Inc.
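The navigation-choice component lends itself to a compact illustration. The sketch below implements a random utility (softmax) choice rule over per-link information scent values; the link labels and activation numbers are invented, and the spreading-activation computation that would produce them is not shown.

    # Hedged sketch of a random utility model of link choice from scent values.
    import numpy as np

    def choice_probabilities(scent, temperature=1.0):
        u = np.asarray(scent, dtype=float) / temperature
        e = np.exp(u - u.max())          # subtract max for numerical stability
        return e / e.sum()

    links = ["search results", "site map", "product page", "help"]
    scent = [2.3, 0.1, 1.8, 0.2]         # goal-to-cue activation per link (invented)
    for link, p in zip(links, choice_probabilities(scent)):
        print(f"{p:.2f}  {link}")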
WebGL and web audio software lightweight components for multimedia education
NASA Astrophysics Data System (ADS)
Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław
2017-08-01
The paper presents the results of our recent work on the development of the contemporary computing platform DC2 for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define source audio nodes (including synthetic sources), destination audio nodes, and nodes for audio processing such as sound wave shaping, spectral band filtering, and convolution-based modification. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing and analysis by shaders is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.
Web Content Management Systems: An Analysis of Forensic Investigatory Challenges.
Horsman, Graeme
2018-02-26
With an increase in the creation and maintenance of personal websites, web content management systems are now frequently utilized. Such systems offer a low-cost and simple solution for those seeking to develop an online presence, and subsequently, a platform from which reported defamatory content, abuse, and copyright infringement have been witnessed. This article provides an introductory forensic analysis of the three currently most popular web content management systems: WordPress, Drupal, and Joomla!. Test platforms have been created, and their site structures have been examined to provide guidance for forensic practitioners facing investigations of this type. Results document the metadata available for establishing site ownership, user interactions, and stored content following analysis of artifacts including WordPress's wp_users and wp_comments tables, Drupal's "watchdog" records, and Joomla!'s _users and _content tables. Finally, investigatory limitations documenting the difficulties of investigating WCMS usage are noted, and analysis recommendations are offered. © 2018 American Academy of Forensic Sciences.
Using Web Maps to Analyze the Construction of Global Scale Cognitive Maps
ERIC Educational Resources Information Center
Pingel, Thomas J.
2018-01-01
Game-based Web sites and applications are changing the ways in which students learn the world map. In this study, a Web map-based digital learning tool was used as a study aid for a university-level geography course in order to examine the way in which global scale cognitive maps are constructed. A network analysis revealed that clicks were…
2017-02-01
… algorithms are made into client applications that can be accessed from an image processing web service developed following Representational State Transfer (REST) standards by a mobile app, laptop PC, and other devices. Similarly, weather tweets can be accessed via the Weather Digest Web Service …
ERIC Educational Resources Information Center
Wang, Tzu-Hua
2011-01-01
This research refers to the self-regulated learning strategies proposed by Pintrich (1999) in developing a multiple-choice Web-based assessment system, the Peer-Driven Assessment Module of the Web-based Assessment and Test Analysis system (PDA-WATA). The major purpose of PDA-WATA is to facilitate learner use of self-regulatory learning behaviors…
ERIC Educational Resources Information Center
DeSchryver, Michael
2012-01-01
This dissertation utilized a multiple case study design to explore how advanced learners synthesize information about ill-structured topics when reading-to-learn and reading-to-do on the Web. Eight graduate students provided data in the form of think-alouds, interviews, screen video, digital trails, and task artifacts. Data analysis was based on…
Research on I-beams with different web shapes
NASA Astrophysics Data System (ADS)
Shuang, Chao; Zhou, Dong Hua
2018-05-01
When the ratio of the height to the thickness of a girder web is relatively high, the local stability of the web is generally enhanced by adding stiffeners. But stiffeners not only increase the use of material, they also increase the welding work. The web can therefore be shaped into trapezoidal, curved, triangular, or rectangular forms to improve its stability. In order to study the mechanical behavior of webs of different shapes and their local stable bearing capacity, the finite element analysis software ANSYS was used to analyze six I-beams, and the stress characteristics under the different web forms were obtained. The results show that the local stability bearing capacity of the I-beam is improved; in particular, the trapezoidal and curved web shapes have a significant effect on the local stability of the I-beam. Finally, based on the study of the local stability of the trapezoidal and curved webs, the influence of their geometrical dimensions on the local stable bearing capacity was also studied.
NASA Astrophysics Data System (ADS)
Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong
2017-09-01
To address the “last kilometer” problem in agricultural science and technology information services, we analyzed the feasibility, necessity, and advantages of WebApps for agricultural information services and discussed modes of using WebApps in such services based on a requirements analysis and WebApp functionality. To overcome existing native apps' drawbacks of difficult installation and weak compatibility across mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on HTML and JAVA technology. The WebApp has greater compatibility and simpler operation than a native app; moreover, it can be linked to the WeChat public platform, making it easy to disseminate and able to run directly without an installation process. The WebApp was used to provide agricultural expert consulting services and to push agricultural information, and achieved good preliminary results. Finally, we summarize the creative application of WebApps in agricultural consulting services and discuss prospects for the development of WebApps in agricultural information services.
SCit: web tools for protein side chain conformation analysis
Gautier, R.; Camproux, A.-C.; Tufféry, P.
2004-01-01
SCit is a web server providing services for protein side chain conformation analysis and side chain positioning. Specific services exploit the dependence of side chain conformations on the local backbone conformation, which is described using a structural alphabet that encodes the conformation of four-residue fragments in a limited library of structural prototypes. Based on this concept, SCit uses sets of rotameric conformations dependent on the local backbone conformation of each protein for side chain positioning and for the identification of side chains with unlikely conformations. The SCit web server is accessible at http://bioserv.rpbs.jussieu.fr/SCit. PMID:15215438
Web-Based Instruction and Learning: Analysis and Needs Assessment
NASA Technical Reports Server (NTRS)
Grabowski, Barbara; McCarthy, Marianne; Koszalka, Tiffany
1998-01-01
An analysis and needs assessment was conducted to identify kindergarten through grade 14 (K-14) customer needs with regard to using the World Wide Web (WWW) for instruction and to identify obstacles K-14 teachers face in utilizing NASA Learning Technologies products in the classroom. The needs assessment was conducted as part of the Dryden Learning Technologies Project, which is a collaboration between Dryden Flight Research Center (DFRC), Edwards, California and The Pennsylvania State University (PSU), University Park, Pennsylvania. The overall project is a multiyear effort to conduct research in the development of teacher training and tools for Web-based science, mathematics and technology instruction and learning.
NASA Astrophysics Data System (ADS)
Saint-Béat, Blanche; Maps, Frédéric; Babin, Marcel
2018-01-01
The extreme and variable environment shapes the functioning of Arctic ecosystems and the life cycles of its species. This delicate balance is now threatened by the unprecedented pace and magnitude of global climate change and anthropogenic pressure. Understanding the long-term consequences of these changes remains an elusive, yet pressing, goal. Our work was specifically aimed at identifying which biological processes impact Arctic planktonic ecosystem functioning, and how. Ecological Network Analysis (ENA) indices reveal emergent ecosystem properties that are not accessible through simple in situ observation. These indices are based on the architecture of carbon flows within food webs. But, despite the recent increase in in situ measurements from Arctic seas, many flow values remain unknown. Linear inverse modeling (LIM) allows missing flow values to be estimated from existing flow observations, enabling the subsequent reconstruction of ecosystem food webs. Through a sensitivity analysis of a LIM model of the Amundsen Gulf in the Canadian Arctic, we were able to determine which processes affected the emergent properties of the planktonic ecosystem. The analysis highlighted the importance of accurate knowledge of the various processes controlling bacterial production (e.g. bacterial growth efficiency and viral lysis). More importantly, a change in the fate of microzooplankton within the food web can be monitored through the trophic level of mesozooplankton. It can be used as a "canary in the coal mine" signal, a forewarner of larger ecosystem change.
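To make the LIM step concrete, here is a minimal Python sketch assuming a toy two-compartment web with made-up balances, one measured flow and hypothetical bounds; the Amundsen Gulf model itself is far larger and is explored over its full solution space rather than at a single point.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Unknown carbon flows x = [GPP->P, P->Z grazing, P export, Z respiration]
# must satisfy mass balance at each compartment plus one measured flow.
A = np.array([
    [1.0, -1.0, -1.0,  0.0],   # phytoplankton P: inputs - outputs = 0
    [0.0,  1.0,  0.0, -1.0],   # zooplankton Z: grazing - respiration = 0
    [1.0,  0.0,  0.0,  0.0],   # measured primary production
])
b = np.array([0.0, 0.0, 10.0])

# The system is underdetermined (4 unknowns, 3 equations); a real LIM
# study samples the whole feasible solution space, while this sketch
# just returns one bounded least-squares solution.
res = lsq_linear(A, b, bounds=(0.0, 15.0))
print("one feasible flow set:", res.x)
```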
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001; 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated such methods. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
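The regularization idea is compact enough to sketch. The Python fragment below assumes the commonly cited Baldi-Long parameterization (prior weight nu0, background variance sigma0_sq); the exact defaults and degrees-of-freedom convention used by the Cyber-T server may differ.

```python
import numpy as np
from scipy import stats

def regularized_ttest(x, y, sigma0_sq, nu0=10):
    # Shrink each group's empirical variance toward a background
    # variance sigma0_sq pooled from probes in the same intensity
    # neighborhood; nu0 sets the weight of that prior.
    nx, ny = len(x), len(y)
    vx = (nu0 * sigma0_sq + (nx - 1) * np.var(x, ddof=1)) / (nu0 + nx - 2)
    vy = (nu0 * sigma0_sq + (ny - 1) * np.var(y, ddof=1)) / (nu0 + ny - 2)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)
    df = nx + ny - 2 + 2 * nu0   # effective df; conventions vary
    return t, 2 * stats.t.sf(abs(t), df)
```

With few replicates the shrunken variances dominate, which is exactly what protects against the spuriously small variances that plague ordinary t-tests on low-replicate arrays.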
Web accessibility support for visually impaired users using link content analysis.
Iwata, Hajime; Kobayashi, Naofumi; Tachibana, Kenji; Shirogane, Junko; Fukazawa, Yoshiaki
2013-12-01
Web pages are used for a variety of purposes. End users must understand dynamically changing content and sequentially follow page links to find desired material, requiring significant time and effort. For visually impaired users relying on screen readers, however, it can be difficult to find links to web pages when link text and alternative text descriptions are inappropriate. Our method supports the discovery of content by analyzing 8 categories of link types, allowing visually impaired users to be aware of the content represented by links in advance. This facilitates end users' access to necessary information on web pages. Our method of classifying web page links is therefore effective as a means of evaluating accessibility.
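As an illustration of link-type analysis, here is a hedged Python sketch; the eight categories used by the authors are not listed in the abstract, so the labels below (in-page anchor, image link, external site, and so on) are hypothetical stand-ins.

```python
from urllib.parse import urlparse
from bs4 import BeautifulSoup

def classify_links(html, base_host):
    # Heuristically label links so a screen reader can announce what
    # each one leads to; the categories are illustrative only.
    labels = []
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        href, text = a["href"], a.get_text(strip=True)
        img = a.find("img")
        if href.startswith("#"):
            kind = "in-page anchor"
        elif img is not None:
            kind = "image link" + ("" if img.get("alt") else " (no alt text)")
        elif urlparse(href).netloc not in ("", base_host):
            kind = "external site"
        elif not text:
            kind = "unlabeled link"
        else:
            kind = "internal page"
        labels.append((text or href, kind))
    return labels
```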
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license) © The Author 2014. Published by Oxford University Press.
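A local scaled sensitivity coefficient of the kind reported by metabolic control analysis can be sketched in a few lines of Python; this is a generic finite-difference illustration, not SensA's SBML-based implementation, which also computes global and time-dependent variants.

```python
import numpy as np

def scaled_sensitivity(f, p, i, rel_step=1e-4):
    # Local scaled sensitivity coefficient by central differences:
    # C = (p_i / f(p)) * d f / d p_i.
    h = rel_step * p[i]
    up, down = p.copy(), p.copy()
    up[i] += h
    down[i] -= h
    return p[i] * (f(up) - f(down)) / (2 * h * f(p))

# Example: steady-state flux of a toy Michaelis-Menten step.
s = 1.0                                    # fixed substrate level
flux = lambda p: p[0] * s / (p[1] + s)     # p = [vmax, km]
print(scaled_sensitivity(flux, np.array([2.0, 0.5]), 0))  # ~1 for vmax
```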
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems
NASA Astrophysics Data System (ADS)
Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn
The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.
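The failure-feedback idea can be illustrated with a toy sketch: treat each component service as a tiny labeled transition system and report the first goal event no service can take. This is an illustration only, not the MoSCoE algorithm.

```python
def compose(goal, services):
    # Each service is a transition table {state: {event: next_state}}.
    # The composition fails at the first goal event no service accepts,
    # and that event is reported back as the cause of failure.
    states = [0] * len(services)
    for step, event in enumerate(goal):
        for i, svc in enumerate(services):
            nxt = svc.get(states[i], {}).get(event)
            if nxt is not None:
                states[i] = nxt
                break
        else:
            return f"failure at step {step}: no service accepts {event!r}"
    return "composition succeeded"

# Two toy services: one handles 'search'; the other needs 'reserve'
# before 'checkout', so the goal below fails with useful feedback.
svc_a = {0: {"search": 0}}
svc_b = {0: {"reserve": 1}, 1: {"checkout": 0}}
print(compose(["search", "checkout"], [svc_a, svc_b]))
```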
AAVSO Target Tool: A Web-Based Service for Tracking Variable Star Observations (Abstract)
NASA Astrophysics Data System (ADS)
Burger, D.; Stassun, K. G.; Barnes, C.; Kafka, S.; Beck, S.; Li, K.
2018-06-01
(Abstract only) The AAVSO Target Tool is a web-based interface for bringing stars in need of observation to the attention of AAVSO's network of amateur and professional astronomers. The site currently tracks over 700 targets of interest, collecting data on them at regular intervals from AAVSO's servers and sorting them based on priority. While the target tool does not require a login, users can obtain visibility times for each target by signing up and entering a telescope location. Other key features of the site include filtering by AAVSO observing section, sorting by different variable types, formatting the data for printing, and exporting the data to a CSV file. The AAVSO Target Tool builds upon seven years of experience developing web applications for astronomical data analysis, most notably on Filtergraph (Burger, D., et al. 2013, Astronomical Data Analysis Software and Systems XXII, Astronomical Society of the Pacific, San Francisco, 399), and is built using the web2py web framework based on the Python programming language. The target tool is available at http://filtergraph.com/aavso.
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background: Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results: Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion: Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using the simulation algorithm that is also described in this paper. PMID:19796403
VAAPA: a web platform for visualization and analysis of alternative polyadenylation.
Guan, Jinting; Fu, Jingyi; Wu, Mingcheng; Chen, Longteng; Ji, Guoli; Quinn Li, Qingshun; Wu, Xiaohui
2015-02-01
Polyadenylation [poly(A)] is an essential process during the maturation of most mRNAs in eukaryotes. Alternative polyadenylation (APA) as an important layer of gene expression regulation has been increasingly recognized in various species. Here, a web platform for visualization and analysis of alternative polyadenylation (VAAPA) was developed. This platform can visualize the distribution of poly(A) sites and poly(A) clusters of a gene or a section of a chromosome. It can also highlight genes with switched APA sites among different conditions. VAAPA is an easy-to-use web-based tool that provides functions of poly(A) site query, data uploading, downloading, and APA sites visualization. It was designed in a multi-tier architecture and developed based on Smart GWT (Google Web Toolkit) using Java as the development language. VAAPA will be a valuable addition to the community for the comprehensive study of APA, not only by making the high quality poly(A) site data more accessible, but also by providing users with numerous valuable functions for poly(A) site analysis and visualization. Copyright © 2014 Elsevier Ltd. All rights reserved.
Provost, Mélanie; Koompalum, Dayin; Dong, Diane; Martin, Bradley C
2006-01-01
The objective was to develop a comprehensive instrument for assessing the quality of health-related web sites. Phase I consisted of a literature review to identify constructs thought to indicate web site quality and to identify items. During content analysis, duplicate items were eliminated, and items that were not clear, meaningful, or measurable were reworded or removed. Some items were generated by the authors. In Phase II, a panel of six healthcare and MIS reviewers was convened to assess each item for its relevance and importance to the construct and to assess item clarity and measurement feasibility. Three hundred and eighty-four items were generated from 26 sources. The initial content analysis reduced the scale to 104 items. Four of the six expert reviewers responded; high concordance on the relevance, importance and measurement feasibility of each item was observed: three or all four raters agreed on 76-85% of items. Based on the panel ratings, 9 items were removed, 3 added, and 10 revised. The WebMedQual consists of 8 categories, 8 sub-categories, 95 items and 3 supplemental items to assess web site quality. The constructs are: content (19 items), authority of source (18 items), design (19 items), accessibility and availability (6 items), links (4 items), user support (9 items), confidentiality and privacy (17 items), and e-commerce (6 items). The WebMedQual represents a first step toward a comprehensive and standard quality assessment of health web sites. This scale will allow relatively easy assessment of quality with possible numeric scoring.
A Web-Based Video Digitizing System for the Study of Projectile Motion.
ERIC Educational Resources Information Center
Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.
2000-01-01
Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)
76 FR 44048 - Agency Information Collection Activities: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... Collection: The GSS is a census of all eligible academic institutions and all departments in science and... Policy Analysis and Research (WebCASPAR) database system. The URL for WebCASPAR is http://caspar.nsf.gov...
Marine biogeochemistry: Methylmercury manufacture
NASA Astrophysics Data System (ADS)
Cossa, Daniel
2013-10-01
The neurotoxin methylmercury can accumulate in marine food webs, contaminating seafood. An analysis of the isotopic composition of fish in the North Pacific suggests that much of the mercury that enters the marine food web originates from low-oxygen subsurface waters.
76 FR 26729 - Ceridian Corporation; Analysis of Proposed Consent Order to Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-09
... result of these failures, hackers executed an SQL injection attack on the Powerpay Web site and Web application. Through this attack, the hackers found personal information stored in Powerpay on Ceridian's...
Web-based learning in professional development: experiences of Finnish nurse managers.
Korhonen, Teija; Lammintakanen, Johanna
2005-11-01
The aim of this article is to describe the nurse managers' expectations, attitudes and experiences on web-based learning before and after participation in a web-based course. Information technology has rapidly become more common in health care settings. However, little is known about nurse managers' experiences on web-based learning, although they have a crucial role in promoting the professional development of their staff. Diagnostic assignments (n = 18) written before and interviews (n = 8) taken after the web-based education. The data were analysed by inductive content analysis. Nurse managers found web-based education to be a suitable and modern method of learning. On the basis of their experience they found multiple ways to utilize web-based learning environments in health care. Information technology skills, equipment, support and time were considered essential in web-based learning. Additionally, they found that their own experience might lead to more widespread implementation of web-based learning in health care settings. Information technology skills of nurse managers and staff need to be developed in order to use information technology effectively. In order to learn in a web-based environment, everyone needs the opportunity and access to required resources. Additionally, nurse managers' own experiences are important to promote wider utilization of web-based learning.
Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.
Aranguren, Mikel Egaña; Wilkinson, Mark D
2015-01-01
Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
Interactive Visualization of Computational Fluid Dynamics using Mosaic
NASA Technical Reports Server (NTRS)
Clucas, Jean; Watson, Velvin; Chancellor, Marisa K. (Technical Monitor)
1994-01-01
The Web provides new methods for accessing information world-wide, but the current text-and-pictures approach neither utilizes all the Web's possibilities nor provides for its limitations. While the inclusion of pictures and animations in a paper communicates more effectively than text alone, it is essentially an extension of the concept of "publication." Also, as use of the Web increases, putting images and animations online will quickly load even the "Information Superhighway." We need to find forms of communication that take advantage of the special nature of the Web. This paper presents one approach: the use of the Internet and the Mosaic interface for data sharing and collaborative analysis. We will describe (and in the presentation, demonstrate) our approach: using FAST (Flow Analysis Software Toolkit), a scientific visualization package, as a data viewer and interactive tool called from Mosaic. Our intent is to stimulate the development of other tools that utilize the unique nature of electronic communication.
Rains, Stephen A; Bosch, Leslie A
2009-07-01
This article reports a content analysis of the privacy policy statements (PPSs) from 97 general reference health Web sites that was conducted to examine the ways in which visitors' privacy is constructed by health organizations. PPSs are formal documents created by the Web site owner to describe how information regarding site visitors and their behavior is collected and used. The results show that over 80% of the PPSs in the sample indicated automatically collecting or requesting that visitors voluntarily provide information about themselves, and only 3% met all five of the Federal Trade Commission's Fair Information Practices guidelines. Additionally, the results suggest that the manner in which PPSs are framed and the use of justifications for collecting information are tropes used by health organizations to foster a secondary exchange of visitors' personal information for access to Web site content.
No complexity–stability relationship in empirical ecosystems
Jacquet, Claire; Moritz, Charlotte; Morissette, Lyne; Legagneux, Pierre; Massol, François; Archambault, Philippe; Gravel, Dominique
2016-01-01
Understanding the mechanisms responsible for stability and persistence of ecosystems is one of the greatest challenges in ecology. Robert May showed that, contrary to intuition, complex randomly built ecosystems are less likely to be stable than simpler ones. Few attempts have been made to test May's prediction empirically, and the actual complexity–stability relationship in natural ecosystems remains unknown. Here we perform a stability analysis of 116 quantitative food webs sampled worldwide. We find that classic descriptors of complexity (species richness, connectance and interaction strength) are not associated with stability in empirical food webs. Further analysis reveals that a correlation between the effects of predators on prey and those of prey on predators, combined with a high frequency of weak interactions, stabilizes food web dynamics relative to the random expectation. We conclude that empirical food webs have several non-random properties contributing to the absence of a complexity–stability relationship. PMID:27553393
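May's random-matrix argument, against which the empirical webs are implicitly compared, is easy to reproduce in a short Python sketch (parameter values here are arbitrary, not taken from the 116 sampled webs).

```python
import numpy as np

rng = np.random.default_rng(0)

def may_stable(S=100, C=0.2, sigma=0.1, d=1.0):
    # Random community matrix (May 1972): off-diagonal interaction
    # strengths are N(0, sigma^2) with probability C (connectance),
    # and diagonal self-regulation is -d. The system is locally stable
    # when every eigenvalue has negative real part; May's criterion
    # predicts instability once sigma * sqrt(S * C) > d.
    A = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
    np.fill_diagonal(A, -d)
    return np.max(np.linalg.eigvals(A).real) < 0

print(may_stable())  # sigma*sqrt(S*C) ~ 0.45 < 1, so usually stable
```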
ERIC Educational Resources Information Center
Murry, G. Brandon; Murry, Francie R.
This study compared the use of two developmental alternatives: a Web Editor (WE) in combination with a customized template/shell (Teaching Not Teaching, T-N-T) and a WE only, for development of a Web-based lesson by pre-service teachers. Six hypotheses were tested to find whether the WE and T-N-T alternative was more efficient, effective, and…
WebScope: A New Tool for Fusion Data Analysis and Visualization
NASA Astrophysics Data System (ADS)
Yang, Fei; Dang, Ningning; Xiao, Bingjia
2010-04-01
A visualization tool was developed through a web browser based on Java applets embedded into HTML pages, in order to provide a world access to the EAST experimental data. It can display data from various trees in different servers in a single panel. With WebScope, it is easier to make a comparison between different data sources and perform a simple calculation over different data sources.
Analysis and Development of a Web-Enabled Planning and Scheduling Database Application
2013-09-01
Establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of the database. Keywords: process design and re-engineering, MySQL, structured query language (SQL), myPHPadmin.
NASA Astrophysics Data System (ADS)
Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.
2017-11-01
The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of the general architecture of a virtual research environment (VRE) for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality, thus providing analysis of large datasets, with visualization of results and export to files in standard formats (XML, binary, etc.). Several cartographical web services have been developed in the system prototype to provide capabilities for working with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows new nodes, computing and data storage systems to be added easily, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
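For readers unfamiliar with the OGC interfaces mentioned above, a WMS GetMap call is just an HTTP request. The sketch below uses Python's requests library; the endpoint and layer name are hypothetical, and the parameter names follow the WMS 1.1.1 standard.

```python
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "air_temperature_mean",   # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "60,50,120,80",             # lon/lat box (minx,miny,maxx,maxy)
    "WIDTH": "800", "HEIGHT": "400",
    "FORMAT": "image/png",
}
r = requests.get("https://example.org/geoserver/wms", params=params)
with open("map.png", "wb") as f:
    f.write(r.content)                  # rendered map image
```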
Do infertile women and government staff differ in the evaluation of infertility-related Web sites?
Takabayashi, Chikako; Shimada, Keiko
2011-01-01
To investigate the evaluation of local government Web sites carrying information on infertility by infertile women and by government staff. In particular, the study investigated whether the women and staff differed with respect to the information they rate as important and their self-reported satisfaction with the Web sites. Cross-sectional descriptive study. Sixty-two local government staff members, of whom 46 were public health nurses managing subsidy programs for infertility treatment in the Hokuriku region of Japan, and 84 infertile women attending local clinics. We measured the level of satisfaction with the local government Web sites and perceptions about the importance of each type of content. Data were descriptively analyzed, as well as by factor analysis and multiple regression analysis. Local government Web sites were analyzed with respect to information about the treatment, details of the subsidy program, psychological support, and procedures for making a subsidy application. The women rated information on the treatment and details of the subsidy programs as important. There was no difference of satisfaction with the Web sites between the infertile women and the staff. Local government staff need to provide reliable data for women who are seeking information on infertility treatment. © 2011 Wiley Periodicals, Inc.
A quality evaluation methodology of health web-pages for non-professionals.
Currò, Vincenzo; Buonuomo, Paola Sabrina; Onesimo, Roberta; de Rose, Paola; Vituzzi, Andrea; di Tanna, Gian Luca; D'Atri, Alessandro
2004-06-01
We propose an evaluation methodology for determining the quality of healthcare web sites aimed at disseminating medical information to non-professionals. Three (macro) factors are considered in the quality evaluation: medical content, accountability of the authors, and usability of the web site. Starting from two results in the literature, the problem of whether or not to introduce a weighting function was investigated. The methodology was validated on a specialized information content, i.e., sore throats, owing to the large interest this topic enjoys among target users. The World Wide Web was accessed using a meta-search system merging several search engines. A statistical analysis was made to compare the proposed methodology with the obtained ranks of the sample web pages. The statistical analysis confirms that the variables examined (per item and sub-factor) show substantially similar ranks and are capable of contributing to the evaluation of the main quality macro factors. A comparison between the aggregation functions in the proposed methodology (non-weighted averages) and the weighting functions derived from the literature allowed us to verify the suitability of the method. The proposed methodology suggests a simple approach which can quickly award an overall quality score to medical web sites oriented to non-professionals.
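The weighting question at the heart of the methodology reduces to the choice of aggregation function. A minimal Python sketch, with hypothetical ratings and weights, shows the two alternatives being compared.

```python
def site_score(item_scores, weights=None):
    # Aggregate per-item ratings into one site-quality score. The study
    # compared plain (unweighted) means against weighting functions from
    # the literature; 'weights' here is a hypothetical per-item weighting.
    if weights is None:
        return sum(item_scores) / len(item_scores)
    return sum(w * s for w, s in zip(weights, item_scores)) / sum(weights)

ratings = [3, 4, 5, 2]                     # hypothetical item ratings
print(site_score(ratings))                 # unweighted mean: 3.5
print(site_score(ratings, [2, 1, 1, 1]))   # weighted mean: 3.4
```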
Reporting on post-menopausal hormone therapy: an analysis of gynaecologists' web pages.
Bucksch, Jens; Kolip, Petra; Deitermann, Bernhilde
2004-01-01
The present study was designed to analyse Web pages of German gynaecologists with regard to postmenopausal hormone therapy (HT). There is a growing body of evidence, that the overall health risks of HT exceed the benefits. Making one's own informed choice has become a central concern for menopausal women. The Internet is an important source of health information, but the quality is often dubious. The study focused on the analysis of basic criteria such as last modification date and quality of the HT information content. The results of the Women's Health Initiative Study (WHI) were used as a benchmark. We searched for relevant Web pages by entering a combination of key words (9 x 13 = 117) into the search engine www.google.de. Each Web page was analysed using a standardized questionnaire. The basic criteria and the quality of content on each Web page were separately categorized by two evaluators. Disagreements were resolved by discussion. Of the 97 websites identified, basic criteria were not met by the majority. For example, the modification date was displayed by only 23 (23.7%) Web pages. The quality of content of most Web pages regarding HT was inaccurate and incomplete. Whilst only nine (9.3%) took up a balanced position, 66 (68%) recommended HT without any restrictions. In 22 cases the recommendation was indistinct and none of the sites refused HT. With regard to basic criteria, there was no difference between HT-recommending Web pages and sites with balanced position. Evidence-based information resulting from the WHI trial was insufficiently represented on gynaecologists' Web pages. Because of the growing number of consumers looking online for health information, the danger of obtaining harmful information has to be minimized. Web pages of gynaecologists do not appear to be recommendable for women because they do not provide recent evidence-based findings about HT.
CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.
Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J
2015-01-01
CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Technical Reports Server (NTRS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-01-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available, they will be automatically added to the queue. XAssist is being expanded for use as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control.
NASA Technical Reports Server (NTRS)
Falke, Stefan; Husar, Rudolf
2011-01-01
The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US states and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and a service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions in the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities during the project that successfully advanced DataFed, enabled air quality applications and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.
Web-GIS platform for monitoring and forecasting of regional climate and ecological changes
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.
2012-12-01
The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, to support integrated research in the Earth sciences an urgent and important task (Gordov et al., 2012; van der Wel, 2005). The inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, decreasing the reliability of analysis results. Modern geophysical data processing techniques, however, allow different technological solutions to be combined when organizing such information resources. It has become generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al., 2009; Gordov et al., 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches for the development of internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a very promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. We present an experimental software and hardware platform supporting the operation of a web-oriented production and research center for regional climate change investigations, which combines a modern Web 2.0 approach, GIS functionality and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of undergraduate and postgraduate students. The platform software (Shulgina et al., 2012; Okladnikov et al., 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, model runs and visualization of results for the WRF and Planet Simulator models, which are integrated into the platform, are also provided. All functions of the center are accessible to users through a web portal from a common graphical web browser via an interactive graphical user interface which provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom) and manipulation of data layers (ordering, enabling/disabling, feature extraction). The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary studies (Shulgina et al., 2011). With it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified graphical web interface.
Evans-White, Michelle A.; Halvorson, Halvor M.
2017-01-01
The framework of ecological stoichiometry was developed primarily within the context of “green” autotroph-based food webs. While stoichiometric principles also apply in “brown” detritus-based systems, these systems have been historically understudied and differ from green ones in several important aspects including carbon (C) quality and the nutrient [nitrogen (N) and phosphorus (P)] contents of food resources for consumers. In this paper, we review work over the last decade that has advanced the application of ecological stoichiometry from green to brown food webs, focusing on freshwater ecosystems. We first review three focal areas where green and brown food webs differ: (1) bottom–up controls by light and nutrient availability, (2) stoichiometric constraints on consumer growth and nutritional regulation, and (3) patterns in consumer-driven nutrient dynamics. Our review highlights the need for further study of how light and nutrient availability affect autotroph–heterotroph interactions on detritus and the subsequent effects on consumer feeding and growth. To complement this conceptual review, we formally quantified differences in stoichiometric principles between green and brown food webs using a meta-analysis across feeding studies of freshwater benthic invertebrates. From 257 datasets collated across 46 publications and several unpublished studies, we compared effect sizes (Pearson’s r) of resource N:C and P:C on growth, consumption, excretion, and egestion between herbivorous and detritivorous consumers. The meta-analysis revealed that both herbivore and detritivore growth are limited by resource N:C and P:C contents, but effect sizes only among detritivores were significantly above zero. Consumption effect sizes were negative among herbivores but positive for detritivores in the case of both N:C and P:C, indicating distinct compensatory feeding responses across resource stoichiometry gradients. Herbivore P excretion rates responded significantly positively to resource P:C, whereas detritivore N and P excretion did not respond; detritivore N and P egestion responded positively to resource N:C and P:C, respectively. Our meta-analysis highlights resource N and P contents as broadly limiting in brown and green benthic food webs, but indicates contrasting mechanisms of limitation owing to differing consumer regulation. We suggest that green and brown food webs share fundamental stoichiometric principles, while identifying specific differences toward applying ecological stoichiometry across ecosystems. PMID:28706509
Lawrence; Giles
1998-04-03
The coverage and recency of the major World Wide Web search engines were analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: no single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as the results from a single engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.
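The abstract does not spell out the estimator behind the overlap-based lower bound, but analyses of this kind are commonly based on capture-recapture reasoning; here is a sketch with made-up counts.

```python
def indexable_web_lower_bound(n_a, n_b, n_ab):
    # Lincoln-Petersen capture-recapture estimate: if engine A indexes
    # n_a pages, engine B indexes n_b, and n_ab pages appear in both,
    # then assuming independent samples N ~= n_a * n_b / n_ab. This is
    # a standard sketch of overlap-based estimation, not necessarily
    # the paper's exact statistical treatment.
    return n_a * n_b / n_ab

# Hypothetical counts, made up purely for illustration:
print(f"{indexable_web_lower_bound(100e6, 80e6, 25e6):.3g} pages")
```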
NASA Astrophysics Data System (ADS)
Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.
2010-12-01
Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data and limit data users' abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as the Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools publicizes the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called the Spatial Data Access Tool (SDAT) that utilizes OGC Web service standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows users to visualize a data set prior to download; Google Earth visualizations of the data sets are provided through SDAT as well. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set showed a roughly 10-fold increase in downloads through OGC Web services in comparison to the conventional FTP and WWW methods of access. The increase in downloads suggests that users are not only finding the data sets they need but are also able to consume them readily in the format they need.
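As a usage sketch, consuming data through the WCS interface might look like the following OWSLib snippet; the endpoint URL and coverage identifier are hypothetical, and the ORNL DAAC's actual service names differ.

```python
from owslib.wcs import WebCoverageService

# Unlike WMS, which returns rendered map images, WCS returns the
# underlying gridded data itself (here as GeoTIFF).
wcs = WebCoverageService("https://example.org/wcs", version="1.0.0")
print(list(wcs.contents))                 # coverages offered by the server

resp = wcs.getCoverage(identifier="npp_annual",     # hypothetical name
                       bbox=(-90, 30, -80, 40),     # lon/lat subset
                       crs="EPSG:4326",
                       format="GeoTIFF",
                       resx=0.05, resy=0.05)
with open("npp.tif", "wb") as f:
    f.write(resp.read())
```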
Oasis: online analysis of small RNA deep sequencing data.
Capece, Vincenzo; Garcia Vizcaino, Julio C; Vidal, Ramon; Rahman, Raza-Ur; Pena Centeno, Tonatiuh; Shomroni, Orr; Suberviola, Irantzu; Fischer, Andre; Bonn, Stefan
2015-07-01
Oasis is a web application that allows for the fast and flexible online analysis of small-RNA-seq (sRNA-seq) data. It was designed for the end user in the lab, providing an easy-to-use web frontend including video tutorials, demo data and best practice step-by-step guidelines on how to analyze sRNA-seq data. Oasis' exclusive selling points are a differential expression module that allows for the multivariate analysis of samples, a classification module for robust biomarker detection and an advanced programming interface that supports the batch submission of jobs. Both modules include the analysis of novel miRNAs, miRNA targets and functional analyses including GO and pathway enrichment. Oasis generates downloadable interactive web reports for easy visualization, exploration and analysis of data on a local system. Finally, Oasis' modular workflow enables for the rapid (re-) analysis of data. Oasis is implemented in Python, R, Java, PHP, C++ and JavaScript. It is freely available at http://oasis.dzne.de. stefan.bonn@dzne.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.
Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K
2018-06-05
Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.
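The simplest null model behind such colocalization tests can be sketched directly; the Monte Carlo permutation below (uniform random repositioning of intervals on a single sequence) is only one of the several null models a tool like Coloc-stats exposes, and real analyses must respect chromosome structure and other constraints.

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap_bp(a, b):
    # Total base pairs of overlap between two interval sets.
    return sum(max(0, min(e1, e2) - max(s1, s2))
               for s1, e1 in a for s2, e2 in b)

def colocalization_pvalue(a, b, genome_len, n_perm=1000):
    # Null model: shift every interval of set B to a uniformly random
    # position (lengths preserved) and compare the observed overlap
    # against the permutation distribution.
    obs = overlap_bp(a, b)
    hits = 0
    for _ in range(n_perm):
        shuffled = []
        for s, e in b:
            start = rng.integers(0, genome_len - (e - s))
            shuffled.append((start, start + (e - s)))
        hits += overlap_bp(a, shuffled) >= obs
    return (hits + 1) / (n_perm + 1)

a = [(100, 200), (500, 650)]   # toy region-sets
b = [(150, 260), (700, 800)]
print(colocalization_pvalue(a, b, genome_len=10_000))
```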
RSAT 2015: Regulatory Sequence Analysis Tools.
Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques
2015-07-01
RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Wolff, Joachim; Bhardwaj, Vivek; Nothjunge, Stephan; Richard, Gautier; Renschler, Gina; Gilsbach, Ralf; Manke, Thomas; Backofen, Rolf; Ramírez, Fidel; Grüning, Björn A
2018-06-13
Galaxy HiCExplorer is a web server that facilitates the study of the 3D conformation of chromatin by allowing Hi-C data processing, analysis and visualization. With the Galaxy HiCExplorer web server, users with little bioinformatic background can perform every step of the analysis in one workflow: mapping of the raw sequence data, creation of Hi-C contact matrices, quality assessment, correction of contact matrices and identification of topological associated domains (TADs) and A/B compartments. Users can create publication ready plots of the contact matrix, A/B compartments, and TADs on a selected genomic locus, along with additional information like gene tracks or ChIP-seq signals. Galaxy HiCExplorer is freely usable at: https://hicexplorer.usegalaxy.eu and is available as a Docker container: https://github.com/deeptools/docker-galaxy-hicexplorer.
Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz
2009-08-25
Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database, which consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms.
Uncovering changes in spider orb-web topology owing to aerodynamic effects
Zaera, Ramón; Soler, Alejandro; Teus, Jaime
2014-01-01
An orb-weaving spider's likelihood of survival is influenced by its ability to retain prey with minimum damage to its web and at the lowest manufacturing cost. This set of requirements has forced the spider silk to evolve towards extreme strength and ductility to a degree that is rare among materials. Previous studies reveal that the performance of the web upon impact may not be based on the mechanical properties of silk alone, aerodynamic drag could play a role in the dissipation of the prey's energy. Here, we present a thorough analysis of the effect of the aerodynamic drag on wind load and prey impact. The hypothesis considered by previous authors for the evaluation of the drag force per unit length of thread has been revisited according to well-established principles of fluid mechanics, highlighting the functional dependence on thread diameter that was formerly ignored. Theoretical analysis and finite-element simulations permitted us to identify air drag as a relevant factor in reducing deterioration of the orb web, and to reveal how the spider can take greater—and not negligible—advantage of drag dissipation. The study shows the beneficial air drag effects of building smaller and less dense webs under wind load, and larger and denser webs under prey impact loads. In essence, it points out why the aerodynamics need to be considered as an additional driving force in the evolution of silk threads and orb webs. PMID:24966235
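The diameter dependence the authors highlight follows from standard cylinder-drag relations; a hedged textbook form (not necessarily the paper's exact expression) is:

```latex
% Cross-flow drag per unit length of a cylindrical thread (a textbook
% form; the paper's exact expression may differ):
\[
  \frac{F}{L} = \tfrac{1}{2}\,\rho\,v^{2}\,d\,C_D(\mathrm{Re}),
  \qquad
  \mathrm{Re} = \frac{\rho\,v\,d}{\mu},
\]
% where d is the thread diameter and rho, mu are the air density and
% viscosity. Because C_D grows steeply as Re -> 0, micrometre-scale
% threads experience proportionally large drag, which is why the
% diameter dependence cannot be ignored.
```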
NASA Astrophysics Data System (ADS)
Lopez-Duarte, P. C.; Able, K.; Fodrie, J.; McCann, M. J.; Melara, S.; Noji, C.; Olin, J.; Pincin, J.; Plank, K.; Polito, M. J.; Jensen, O.
2016-02-01
Multiple studies conducted over five years since the 2010 Macondo oil spill in the Gulf of Mexico indicate that oil impacts vary widely among taxonomic groups. For instance, fishes inhabiting the marsh surface show no clear differences in either community composition or population characteristics between oiled and unoiled sites, despite clear evidence of physiological impacts on individual fish. In contrast, marsh insects and spiders are sensitive to the effects of hydrocarbons. Both insects and spiders are components of the marsh food web and represent an important trophic link between marsh plants and higher trophic levels. Because differences in oil impacts throughout the marsh food web have the potential to significantly alter food webs and energy flow pathways and reduce food web resilience, our goal is to quantify differences in marsh food webs between oiled and unoiled sites to test the hypothesis that oiling has resulted in simpler and less resilient food webs. Diets and food web connections were quantified through a combination of stomach content, stable isotope, and fatty acid analysis. The combination of these three techniques provides a more robust approach to quantifying trophic relationships than any of these methods alone. Stomach content analysis provides a detailed snapshot of diets, while fatty acid and stable isotopes reflect diets averaged over weeks to months. Initial results focus on samples collected in May 2015 from a range of terrestrial and aquatic consumer species, including insects, mollusks, crustaceans, and piscivorous fishes.
Web-based health services and clinical decision support.
Jegelevicius, Darius; Marozas, Vaidotas; Lukosevicius, Arunas; Patasius, Martynas
2004-01-01
The purpose of this study was the development of a Web-based e-health service for comprehensive assistance and clinical decision support. The service structure consists of a Web server, a PHP-based Web interface linked to a clinical SQL database, Java applets for interactive manipulation and visualization of signals, and a Matlab server linked with signal and data processing algorithms implemented as Matlab programs. The service provides clinical decision support based on diagnostic signal and image analysis. Using this methodology, a pilot service for pathology specialists for automatic calculation of the proliferation index has been developed. Physicians use a simple Web interface to upload the images under investigation to the server; a Java applet interface is then used to outline the region of interest and, after processing on the server, the requested proliferation index value is calculated. There is also an "expert corner", where experts can submit their index estimates and comments on particular images, which is especially important for system developers. These expert evaluations are used for optimization and verification of the automatic analysis algorithms. Decision support trials have been conducted for ECG and for ophthalmological ultrasonic investigations of intraocular tumor differentiation. Data mining algorithms have been applied and decision support trees constructed. These services are also being implemented as Web-based systems. The study has shown that the Web-based structure ensures more effective, flexible and accessible services compared with standalone programs and is very convenient for biomedical engineers and physicians, especially in the development phase.
Zaidman-Zait, Anat; Jamieson, Janet R
2004-01-01
The present study has three purposes: (a) to determine who disseminates information on cochlear implants on the Web; (b) to describe a representative sample of Web sites that disseminate information on cochlear implants, with a focus on the content topics and their relevance to parents of deaf children; and (c) to discuss the practical issues of Web-based information and its implications for professionals working with parents of deaf children. Using the terms "cochlear implants" and "children," the first 10 sites generated by the four most popular search engines (Google, Yahoo, Microsoft's MSN, and America Online) at two points in time were selected for analysis, resulting in a sample of 31 Web sites. The majority of Web sites represented medically oriented academic departments and government organizations, although a wide variety of other sources containing information about cochlear implants were also located. Qualitative analysis revealed that the content tended to fall into eight categories; however, the important issues of educational concerns, habilitation following surgery, and communication methods were either addressed minimally or neglected completely. Using analytical tools that had been developed to evaluate "user friendliness" in other domains, each Web site was assessed for its stability, service/design features and ease of use. In general, wide variability was noted across the Web sites for each of these factors. The strong recommendation is made that professionals understand and enhance their knowledge of both the advantages and limitations of incorporating the new technology into their work with parents.
Saito, L.; Johnson, B.M.; Bartholow, J.; Hanna, R.B.
2001-01-01
We investigated the effects on the reservoir food web of a new temperature control device (TCD) on the dam at Shasta Lake, California. We followed a linked modeling approach that used a specialized reservoir water quality model to forecast operation-induced changes in phytoplankton production. A food web–energy transfer model was also applied to propagate predicted changes in phytoplankton up through the food web to the predators and sport fishes of interest. The food web–energy transfer model employed a 10% trophic transfer efficiency through a food web that was mapped using carbon and nitrogen stable isotope analysis. Stable isotope analysis provided an efficient and comprehensive means of estimating the structure of the reservoir's food web with minimal sampling and background data. We used an optimization procedure to estimate the diet proportions of all food web components simultaneously from their isotopic signatures. Some consumers were estimated to be much more sensitive than others to perturbations to phytoplankton supply. The linked modeling approach demonstrated that interdisciplinary efforts enhance the value of information obtained from studies of managed ecosystems. The approach exploited the strengths of engineering and ecological modeling methods to address concerns that neither of the models could have addressed alone: (a) the water quality model could not have addressed quantitatively the possible impacts to fish, and (b) the food web model could not have examined how phytoplankton availability might change due to reservoir operations.
Users' information-seeking behavior on a medical library Website
Rozic-Hristovski, Anamarija; Hristovski, Dimitar; Todorovski, Ljupco
2002-01-01
The Central Medical Library (CMK) at the Faculty of Medicine, University of Ljubljana, Slovenia, started to build a library Website that included a guide to library services and resources in 1997. The evaluation of Website usage plays an important role in its maintenance and development. Analyzing and exploring regularities in the visitors' behavior can be used to enhance the quality and facilitate delivery of information services, identify visitors' interests, and improve the server's performance. The analysis of the CMK Website users' navigational behavior was carried out by analyzing the Web server log files. These files contained information on all user accesses to the Website and provided a great opportunity to learn more about the behavior of visitors to the Website. The majority of the available tools for Web log file analysis provide a predefined set of reports showing the access count and the transferred bytes grouped along several dimensions. In addition to the reports mentioned above, the authors wanted to be able to perform interactive exploration and ad hoc analysis and discover trends in a user-friendly way. Because of that, we developed our own solution for exploring and analyzing the Web logs based on data warehousing and online analytical processing technologies. The analytical solution we developed proved successful, so it may find further application in the field of Web log file analysis. We will apply the findings of the analysis to restructuring the CMK Website. PMID:11999179
Marsh, Terence L.; Saxman, Paul; Cole, James; Tiedje, James
2000-01-01
Rapid analysis of microbial communities has proven to be a difficult task. This is due, in part, to both the tremendous diversity of the microbial world and the high complexity of many microbial communities. Several techniques for community analysis have emerged over the past decade, and most take advantage of the molecular phylogeny derived from 16S rRNA comparative sequence analysis. We describe a web-based research tool located at the Ribosomal Database Project web site (http://www.cme.msu.edu/RDP/html/analyses.html) that facilitates microbial community analysis using terminal restriction fragment length polymorphism of 16S ribosomal DNA. The analysis function (designated TAP T-RFLP) permits the user to perform in silico restriction digestions of the entire 16S sequence database and derive terminal restriction fragment sizes, measured in base pairs, from the 5′ terminus of the user-specified primer to the 3′ terminus of the restriction endonuclease target site. The output can be sorted and viewed either phylogenetically or by size. It is anticipated that the site will guide experimental design as well as provide insight into interpreting results of community analysis with terminal restriction fragment length polymorphisms. PMID:10919828
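The in silico digestion described above reduces to a string computation: locate the primer-binding site, find the first downstream restriction site, and report the distance in base pairs. Below is a minimal sketch of that idea; it is not the TAP T-RFLP implementation, and the sequence, primer and enzyme recognition site are invented placeholders.

```python
# Sketch of an in silico terminal restriction fragment (T-RF) size
# calculation in the spirit of TAP T-RFLP (not its actual code).

def trf_size(sequence, primer, cut_site):
    """Distance in bp from the 5' end of the primer-binding site to the
    3' end of the first downstream restriction-endonuclease target site."""
    start = sequence.find(primer)
    if start == -1:
        return None                      # primer does not bind
    cut = sequence.find(cut_site, start + len(primer))
    if cut == -1:
        return None                      # no restriction site downstream
    return cut + len(cut_site) - start   # fragment length in base pairs

# Toy 16S-like sequence with a HhaI site (GCGC); values are placeholders.
seq = "AGAGTTTGATCCTGGCTCAGAACGAACGCGCGGCGTGCTTAACACATGCAAGTCG"
print(trf_size(seq, primer="AGAGTTTGATCCTGGCTCAG", cut_site="GCGC"))
```

Applying this over every database sequence and sorting the resulting sizes phylogenetically or numerically is essentially what the described analysis function automates.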
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways of parameterizing the rate constants in the model, together with global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method and results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
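To make the sensitivity-analysis step concrete, here is a minimal sketch of FAST-style first-order indices for a toy one-compartment, rate-constant bioaccumulation model. It uses the FAST implementation in the third-party SALib package; the parameter names, bounds and the use of SALib (rather than the authors' own Extended FAST code) are all assumptions for illustration.

```python
# First-order sensitivity indices for a toy steady-state bioaccumulation
# model, C_body = k_uptake * C_diss / k_elim, using SALib's FAST routines.
# Parameters and bounds are invented; the paper's model is far richer.
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["k_uptake", "k_elim", "C_diss"],   # hypothetical parameters
    "bounds": [[0.1, 10.0], [0.01, 1.0], [0.1, 5.0]],
}

X = fast_sampler.sample(problem, 1000)
Y = X[:, 0] * X[:, 2] / X[:, 1]    # steady-state body concentration

Si = fast.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
```

A large index for C_diss would mirror the abstract's conclusion that the truly dissolved concentration dominates model success.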
AstrodyToolsWeb an e-Science project in Astrodynamics and Celestial Mechanics fields
NASA Astrophysics Data System (ADS)
López, R.; San-Juan, J. F.
2013-05-01
Astrodynamics Web Tools, AstrodyToolsWeb (http://tastrody.unirioja.es), is an ongoing collaborative Web Tools computing infrastructure project which has been specially designed to support scientific computation. AstrodyToolsWeb provides project collaborators with all the technical and human facilities needed to wrap, manage, and use specialized noncommercial software tools in the Astrodynamics and Celestial Mechanics fields, with the aim of optimizing the use of resources, both human and material. The project is also open to collaboration from the whole scientific community, in order to create a library of useful tools and their corresponding theoretical backgrounds. AstrodyToolsWeb offers a user-friendly web interface for choosing applications, entering data, and selecting appropriate constraints in an intuitive and easy way. The application is then executed, in real time whenever possible; the critical information about program behavior (errors and logs) and the output, including post-processing and interpretation of results (graphical representation of data, statistical analysis, and other manipulations), are shown via the same web interface or can be downloaded to the user's computer.
iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence
Turner, Brian; Razick, Sabry; Turinsky, Andrei L.; Vlasblom, James; Crowdy, Edgard K.; Cho, Emerson; Morrison, Kyle; Wodak, Shoshana J.
2010-01-01
We present iRefWeb, a web interface to protein interaction data consolidated from 10 public databases: BIND, BioGRID, CORUM, DIP, IntAct, HPRD, MINT, MPact, MPPI and OPHID. iRefWeb enables users to examine aggregated interactions for a protein of interest, and presents various statistical summaries of the data across databases, such as the number of organism-specific interactions, proteins and cited publications. Through links to source databases and supporting evidence, researchers may gauge the reliability of an interaction using simple criteria, such as the detection methods, the scale of the study (high- or low-throughput) or the number of cited publications. Furthermore, iRefWeb compares the information extracted from the same publication by different databases, and offers means to follow up possible inconsistencies. We provide an overview of the consolidated protein–protein interaction landscape and show how it can be automatically cropped to aid the generation of meaningful organism-specific interactomes. iRefWeb can be accessed at: http://wodaklab.org/iRefWeb. Database URL: http://wodaklab.org/iRefWeb/ PMID:20940177
Enhancing UCSF Chimera through web services
Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.
2014-01-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
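The paper ships its own example Python program; the sketch below is not that program, but it illustrates the general shape of programmatic access to an Opal-wrapped service via SOAP using the third-party zeep client. The WSDL URL, the operation names (launchJob, queryStatus), the response fields and the status codes are assumptions based on Opal's documented interface and should be checked against the Opal documentation before use.

```python
# Hedged sketch of calling an Opal-wrapped application over SOAP with zeep.
# URL, operation names, fields and status codes are assumptions, not
# verified against a live Opal deployment.
import time
from zeep import Client

client = Client("http://ws.nbcr.net/opal2/services/SomeApp?wsdl")  # hypothetical

job = client.service.launchJob(argList="-in input.fasta -out result.txt")
job_id = job.jobID

# Poll until the service reports a terminal state.
while True:
    status = client.service.queryStatus(job_id)
    if status.code in (4, 8):        # assumed codes for failed / done
        break
    time.sleep(10)

print(status.message, status.baseURL)  # output files live under baseURL
```

The submit/poll/fetch pattern shown here is exactly the job life cycle that Chimera hides from the user behind its graphical interface.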
Remote monitoring of vibrational information in spider webs.
Mortimer, B; Soler, A; Siviour, C R; Vollrath, F
2018-05-22
Spiders are fascinating model species to study information-acquisition strategies, with the web acting as an extension of the animal's body. Here, we compare the strategies of two orb-weaving spiders that acquire information through vibrations transmitted and filtered in the web. Whereas Araneus diadematus monitors web vibration directly on the web, Zygiella x-notata uses a signal thread to remotely monitor web vibration from a retreat, which gives added protection. We assess the implications of these two information-acquisition strategies on the quality of vibration information transfer, using laser Doppler vibrometry to measure vibrations of real webs and finite element analysis in computer models of webs. We observed that the signal thread imposed no biologically relevant time penalty for vibration propagation. However, loss of energy (attenuation) was a cost associated with remote monitoring via a signal thread. The findings have implications for the biological use of vibrations by spiders, including the mechanisms to locate and discriminate between vibration sources. We show that orb-weaver spiders are fascinating examples of organisms that modify their physical environment to shape their information-acquisition strategy.
Kim, Chun-Ja; Kang, Duck-Hee
2006-01-01
Despite the numerous benefits of physical activity for patients with diabetes, most healthcare providers in busy clinical settings rarely find time to counsel their patients about it. A Web-based program for healthcare providers can be used as an effective counseling tool when strategies are outlined for specific stages of readiness for physical activity. Seventy-three adults with type 2 diabetes were randomly assigned to a Web-based intervention, a printed-material intervention, or usual care. After 12 weeks, the effects of the interventions on physical activity, fasting blood sugar, and glycosylated hemoglobin were evaluated. Both the Web-based and printed-material interventions, compared with usual care, were effective in increasing physical activity (P < .001) and decreasing fasting blood sugar (P < .01) and glycosylated hemoglobin (P < .01). Post hoc analysis of change scores indicated significant differences between the Web-based intervention and usual care and between the printed-material intervention and usual care, but not between the Web-based and printed-material interventions. The findings of this study support the value of Web-based and printed-material interventions in healthcare counseling. With increasing Web access, the effectiveness of Web-based programs offered directly to patients needs to be tested.
ERIC Educational Resources Information Center
Viola, Michael Joseph
2016-01-01
The article highlights the ongoing relevance of W.E.B. Du Bois for the global analysis of race and class. Engaging scholarly debates that have ensued within the educational subfields of critical race theory (CRT) and (revolutionary) critical pedagogy, the article explores how a deeper engagement with Du Bois's ideas contributes theoretically and…
ERIC Educational Resources Information Center
Liu, Leping; Maddux, Cleborne D.
2008-01-01
This article presents a study of Web 2.0 articles intended to (a) analyze the content of what is written and (b) develop a statistical model to predict whether authors' write about the need for new instructional design strategies and models. Eighty-eight technology articles were subjected to lexical analysis and a logistic regression model was…
OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.
Zhou, Guangyan; Xia, Jianguo
2018-06-07
Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
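OmicsNet's 3D viewer supports force-directed layout; the sketch below shows one Fruchterman-Reingold-style iteration in three dimensions with NumPy, purely to illustrate the layout idea. OmicsNet itself is a WebGL application, so this Python code is an illustration of the algorithm, not the tool's implementation.

```python
# One Fruchterman-Reingold-style iteration of a 3D force-directed layout,
# illustrating the idea behind 3D network viewers (not OmicsNet's code).
import numpy as np

def fr_step(pos, edges, k=1.0, step=0.05):
    """pos: (n, 3) node coordinates; edges: list of (i, j) index pairs."""
    n = pos.shape[0]
    disp = np.zeros_like(pos)
    # Repulsion between every node pair: f_rep = k^2 / d.
    for i in range(n):
        d = pos[i] - pos
        dist = np.linalg.norm(d, axis=1) + 1e-9
        dist[i] = np.inf                      # no self-repulsion
        disp[i] += (d / dist[:, None] * (k**2 / dist)[:, None]).sum(axis=0)
    # Attraction along edges: f_att = d^2 / k.
    for i, j in edges:
        d = pos[i] - pos[j]
        dist = np.linalg.norm(d) + 1e-9
        f = (d / dist) * (dist**2 / k)
        disp[i] -= f
        disp[j] += f
    # Move each node a bounded step along its net displacement.
    return pos + step * disp / (np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9)

pos = np.random.rand(5, 3)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
for _ in range(100):
    pos = fr_step(pos, edges)
```

Running such iterations until the positions stabilize spreads a network out in 3D, which is one way viewers mitigate the 'hairball' effect mentioned above.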
Hogsden, Kristy L; Harding, Jon S
2012-03-01
We compared food web structure in 20 streams with either anthropogenic or natural sources of acidity and metals or circumneutral water chemistry in New Zealand. Community and diet analysis indicated that mining streams receiving anthropogenic inputs of acidic and metal-rich drainage had much simpler food webs (fewer species, shorter food chains, fewer links) than those in naturally acidic, naturally high-metal, and circumneutral streams. Food webs of naturally high-metal streams were structurally similar to those in mining streams, lacking fish predators and having few species. In contrast, webs in naturally acidic streams differed very little from those in circumneutral streams, owing to strong similarities in community composition and in the diets of secondary and top consumers. The combined negative effects of acidity and metals on stream food webs are clear. However, elevated metal concentrations, regardless of source, appear to play a more important role than acidity in driving food web structure. Copyright © 2011 Elsevier Ltd. All rights reserved.
Human exposure assessment resources on the World Wide Web.
Schwela, Dieter; Hakkinen, Pertti J
2004-05-20
Human exposure assessment is frequently noted as a weak link and bottleneck in the risk assessment process. Fortunately, the World Wide Web and Internet are providing access to numerous valuable sources of human exposure assessment-related information, along with opportunities for information exchange. Internet mailing lists are available as potential online help for exposure assessment questions; for example, RISKANAL has several hundred members from numerous countries. Various Web sites provide opportunities for training: Web sites offering general human exposure assessment training include two from the US Environmental Protection Agency (EPA) and four from the US National Library of Medicine. Numerous other Web sites offer access to a wide range of exposure assessment information. For example, the (US) Alliance for Chemical Awareness Web site addresses direct and indirect human exposures, occupational exposures and ecological exposure assessments. The US EPA's Exposure Factors Program Web site provides a focal point for current information and data on exposure factors relevant to the United States. In addition, the International Society of Exposure Analysis Web site provides information about how this society seeks to foster and advance the science of exposure analysis. A major opportunity exists for risk assessors and others to broaden the level of exposure assessment information available via Web sites. Broadening the Web's exposure information could include human exposure factors such as country- or region-specific ranges in body weights and drinking water consumption, along with residential factors such as air changes per hour in various types of residences. Further, country- or region-specific data on how various tasks are performed by various types of consumers could be collected and provided. It is noteworthy that efforts are underway in Europe to develop a multi-country collection of exposure factors, and that the European Commission is in the early stages of planning and developing a Web-accessible information system (EIS-ChemRisks) to serve as a single gateway to all major European initiatives on human exposure to chemicals contained in and released from cleaning products, textiles, toys, etc.
A web server for analysis, comparison and prediction of protein ligand binding sites.
Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S
2016-03-25
One of the major challenges in the field of systems biology is to understand the interactions between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named LPIcom to facilitate users in understanding protein-ligand interaction. Analysis, comparison and prediction modules are available in the LPIcom server to predict protein-ligand interacting residues for 824 ligands; each ligand has at least 30 protein binding sites in the PDB. The analysis module of the server can identify residues preferred in interaction and the binding motif for a given ligand; for example, the residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module of the server allows comparing the protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that the ATP, ADP and GTP ligands fall in the same cluster, and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos comparing ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for the analysis of ligand-interacting residues. The server is available for public use at http://crdd.osdd.net/raghava/lpicom.
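A propensity-based module of this kind presumably scores each residue type by how over-represented it is among a ligand's binding-site residues relative to all residues. Here is a minimal sketch of that ratio with invented counts; LPIcom's actual model is trained on PDB-derived data and is certainly more sophisticated.

```python
# Sketch of a residue propensity score: frequency of a residue type among
# binding-site residues divided by its frequency among all residues.
# Counts below are invented, not LPIcom's training data.
from collections import Counter

binding = Counter({"GLY": 120, "LYS": 95, "ARG": 90, "LEU": 40})
overall = Counter({"GLY": 700, "LYS": 500, "ARG": 450, "LEU": 900})

n_bind = sum(binding.values())
n_all = sum(overall.values())

propensity = {
    res: (binding[res] / n_bind) / (overall[res] / n_all)
    for res in overall
}
# Values > 1 mean over-representation in (e.g.) ATP sites, matching the
# abstract's note that Gly/Lys/Arg are preferred there.
print(sorted(propensity.items(), key=lambda kv: -kv[1]))
```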
Network Analysis of Reconnaissance and Intrusion of an Industrial Control System
2016-09-01
simulated a plant engineer using the engineering workstation web browser to authenticate to the vegetable cooker HMI. While the engineer established the...observed the vegetable cooker HMI web display, the attacker stopped capturing network traffic. Acting as the attacker, we searched the attacker’s pcap...manually controlled by human activity. In this testbed network, only web browser traffic (HTTP) is created by an operator to view an HMI status
An Analysis of the Elements of Collaboration Associated with Top Collaborative Tools
2010-03-01
lets you access your e-mail, calendar, and files from any web browser anywhere in the world. Web based www.hotoffice.com Noodle Vialect’s (parent...www.taroby.org Yuuguu Yuuguu is an instant screen sharing, web conferencing, remote support, desktop remote control and messaging tool. Client...Office, Noodle , Novlet, Revizr, Taroby, and Yuuguu) received all seven NS ratings (see Table 20 below). The overall ratings for the major elements
Analysis of plastic deformation in silicon web crystals
NASA Technical Reports Server (NTRS)
Spitznagel, J. A.; Seidensticker, R. G.; Lien, S. Y.; Mchugh, J. P.; Hopkins, R. H.
1987-01-01
Numerical calculation of {111}-plane, ⟨110⟩-direction slip activity in silicon web crystals generated by thermal stresses is in good agreement with etch pit patterns and X-ray topographic data. The data suggest that stress redistribution effects are small and that a model, similar to that proposed by Penning (1958) and Jordan (1981) but modified to account for dislocation annihilation and egress, can be used to describe plastic flow effects during silicon web growth.
Carroll, Adam J; Badger, Murray R; Harvey Millar, A
2010-07-14
Standardization of analytical approaches and reporting methods via community-wide collaboration can work synergistically with web-tool development to result in rapid community-driven expansion of online data repositories suitable for data mining and meta-analysis. In metabolomics, the inter-laboratory reproducibility of gas chromatography/mass spectrometry (GC/MS) makes it an obvious target for such development. While a number of web tools offer access to datasets and/or tools for raw data processing and statistical analysis, none of these systems are currently set up to act as a public repository by easily accepting, processing and presenting publicly submitted GC/MS metabolomics datasets for public re-analysis. Here, we present MetabolomeExpress, a new File Transfer Protocol (FTP) server and web tool for the online storage, processing, visualisation and statistical re-analysis of publicly submitted GC/MS metabolomics datasets. Users may search a quality-controlled database of metabolite response statistics from publicly submitted datasets by a number of parameters (e.g. metabolite, species, organ/biofluid). Users may also perform meta-analysis comparisons of multiple independent experiments or re-analyse public primary datasets via user-friendly tools for t-test, principal components analysis, hierarchical cluster analysis and correlation analysis. They may interact with chromatograms, mass spectra and peak detection results via an integrated raw data viewer. Researchers who register for a free account may upload (via FTP) their own data to the server for online processing via a novel raw data processing pipeline. MetabolomeExpress (https://www.metabolome-express.org) provides a new opportunity for the general metabolomics community to transparently present online the raw and processed GC/MS data underlying their metabolomics publications. Transparent sharing of these data will allow researchers to assess data quality and draw their own insights from published metabolomics datasets.
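The re-analysis tools named above (t-test, PCA, clustering, correlation) are all standard methods; the sketch below runs two of them on a toy metabolite response matrix with SciPy and scikit-learn. It illustrates the kind of computation offered, not MetabolomeExpress's actual pipeline, and the data are randomly generated.

```python
# Toy re-analysis of a metabolite matrix (samples x metabolites) with the
# kinds of tests MetabolomeExpress offers; not its actual pipeline.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.2, size=(6, 50))    # 6 samples, 50 metabolites
treated = rng.normal(1.3, 0.2, size=(6, 50))

# Per-metabolite Welch t-test between groups.
t, p = stats.ttest_ind(control, treated, axis=0, equal_var=False)
print("metabolites with p < 0.05:", int((p < 0.05).sum()))

# PCA on the pooled, mean-centred matrix.
X = np.vstack([control, treated])
scores = PCA(n_components=2).fit_transform(X - X.mean(axis=0))
print(scores.shape)   # (12, 2): one point per sample
```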
iMetaLab 1.0: A web platform for metaproteomics data analysis.
Liao, Bo; Ning, Zhibin; Cheng, Kai; Zhang, Xu; Li, Leyuan; Mayne, Janice; Figeys, Daniel
2018-06-15
The human gut microbiota, a complex, dynamic and biodiverse community, has been increasingly shown to influence many aspects of health and disease. Metaproteomic analysis has proven to be a powerful approach to study the functionality of the microbiota. However, the processing and analysis of metaproteomic mass spectrometry (MS) data remains a daunting task. We developed iMetaLab, a web-based platform that provides a user-friendly and comprehensive data analysis pipeline, with a focus on lowering the technical barrier to metaproteomics data analysis. iMetaLab is freely available at http://imetalab.ca. Supplementary data are available at Bioinformatics online.
Chen, Jian; Shi, Fang; Chen, Min; Yang, Yue; Cheng, Lei; Wu, Haitao
2017-10-01
This work is a retrospective analysis to investigate the critical risk factor for the therapeutic effect of endoscopic keel placement on anterior glottic web. Altogether, 36 patients with anterior glottic web undergoing endoscopic lysis and silicone keel placement were enrolled. Their voice quality was evaluated using the voice handicap index-10 (VHI-10) questionnaire and improved significantly 3 months after surgery (21.53 ± 3.89 vs 9.81 ± 6.68, P < 0.0001). However, 10 (27.8%) cases had web recurrence during the at least 1-year follow-up. Patients were therefore classified according to the Cohen classification or web thickness, and the recurrence rates were compared. The recurrence rates for Cohen types 1-4 were 28.6%, 16.7%, 33.3%, and 40%, respectively; the difference was not statistically significant (P = 0.461). When classified by web thickness, only 2 of 27 (7.41%) thin-type cases relapsed, whereas 8 of 9 (88.9%) cases in the thick group re-formed webs (P < 0.001). These results suggest that the therapeutic outcome of endoscopic keel placement depends mostly on web thickness rather than on Cohen grade. Endoscopic lysis and keel placement is only effective for cases with thin glottic webs; patients with thick webs should be treated by other means.
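The abstract does not name the statistical test behind the thin- vs thick-web comparison; on the reported counts (2 of 27 vs 8 of 9), a Fisher's exact test, sketched below, indeed yields P far below 0.001, consistent with the reported result.

```python
# Checking the reported thin- vs thick-web recurrence comparison with a
# Fisher's exact test (the abstract does not say which test was used).
from scipy.stats import fisher_exact

#                recurred  did not recur
table = [[2, 25],   # thin webs  (2 of 27)
         [8, 1]]    # thick webs (8 of 9)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.4f}, P = {p_value:.2e}")  # P << 0.001
```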
Lamprey: tracking users on the World Wide Web.
Felciano, R M; Altman, R B
1996-01-01
Tracking individual web sessions provides valuable information about user behavior. This information can be used for general purpose evaluation of web-based user interfaces to biomedical information systems. To this end, we have developed Lamprey, a tool for doing quantitative and qualitative analysis of Web-based user interfaces. Lamprey can be used from any conforming browser, and does not require modification of server or client software. By rerouting WWW navigation through a centralized filter, Lamprey collects the sequence and timing of hyperlinks used by individual users to move through the web. Instead of providing marginal statistics, it retains the full information required to recreate a user session. We have built Lamprey as a standard Common Gateway Interface (CGI) that works with all standard WWW browsers and servers. In this paper, we describe Lamprey and provide a short demonstration of this approach for evaluating web usage patterns.
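Lamprey itself was a CGI-based rerouting filter; a comparable modern sketch is a WSGI middleware that timestamps each request per session, preserving the full click sequence rather than marginal counts. Everything below is illustrative (a sketch of the same idea in today's terms), not Lamprey's code.

```python
# A WSGI middleware sketch in the spirit of Lamprey: log the full
# timestamped request sequence per session instead of marginal statistics.
import time

class SessionLogger:
    def __init__(self, app, log_path="clickstream.log"):
        self.app = app
        self.log_path = log_path

    def __call__(self, environ, start_response):
        # Crude session key; a real system would parse a session cookie.
        session = environ.get("HTTP_COOKIE", "anonymous")
        with open(self.log_path, "a") as log:
            log.write(f"{time.time()}\t{session}\t{environ['PATH_INFO']}\n")
        return self.app(environ, start_response)

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

application = SessionLogger(demo_app)   # wrap any WSGI app
```

Because each log line keeps the timestamp, session key and path, the full sequence and timing of a user session can be reconstructed afterwards, which is the property the abstract emphasizes.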
Opal web services for biomedical applications.
Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W
2010-07-01
Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. An effective way for cloud users to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open-source distributed file system; massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker containers, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users write scripts in IPython Notebook pages through the web browser to process data, and the scripts are submitted to an IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatio-temporal computing tasks. Docker provides resource isolation for IO, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code on the web directly, so they can design their own data processing algorithms.
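As a concrete illustration of the workflow described, here is the kind of short script a user might run in the platform's notebook: reading a raster band into a NumPy array through the GDAL Python bindings and computing a statistic. The file path is a placeholder for a dataset on the cloud storage mount; this is a sketch, not code from the platform itself.

```python
# The kind of snippet a user might run in the platform's notebook: read a
# remote sensing raster with GDAL and compute a simple statistic.
# "scene.tif" is a placeholder path on the cloud storage mount.
import numpy as np
from osgeo import gdal

ds = gdal.Open("scene.tif")
band = ds.GetRasterBand(1)
arr = band.ReadAsArray().astype(np.float32)

nodata = band.GetNoDataValue()
if nodata is not None:
    arr[arr == nodata] = np.nan    # mask fill values before statistics

print("mean pixel value:", np.nanmean(arr))
```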
Effect of Web-based lifestyle modification on weight control: a meta-analysis.
Kodama, S; Saito, K; Tanaka, S; Horikawa, C; Fujiwara, K; Hirasawa, R; Yachi, Y; Iida, K T; Shimano, H; Ohashi, Y; Yamada, N; Sone, H
2012-05-01
Web-based treatment programs are attractive in primary care because of their ability to reach numerous individuals at low cost. The aim of this meta-analysis was to systematically review the weight loss or maintenance effect of the Internet component in obesity treatment programs. MEDLINE and EMBASE literature searches were conducted to identify studies investigating the effect of Web-based individualized advice on lifestyle modification on weight loss. Randomized controlled trials that consisted of a Web-user experimental group and a non-Web-user control group were included. Weight changes in the experimental group in comparison with the control group were pooled with a random-effects model. A total of 23 studies comprising 8697 participants were included. Overall, using the Internet had a modest but significant additional weight-loss effect compared with non-Web-user control groups (-0.68 kg, P=0.03). In comparison with the control group, stratified analysis indicated that using the Internet as an adjunct to obesity care was effective (-1.00 kg, P<0.001), but that using it as a substitute for face-to-face support was unfavorable (+1.27 kg, P=0.01). An additional effect on weight control was observed when the aim of using the Internet was initial weight loss (-1.01 kg; P=0.03), but not when the aim was weight maintenance (+0.68 kg; P=0.26). The relative effect diminished with longer educational periods (P-trend=0.04) and was insignificant (-0.20 kg; P=0.75) in studies with educational periods of 12 months or more. The current meta-analysis indicates that the Internet component in obesity treatment programs has a modest effect on weight control. However, the effect was inconsistent, depending largely on the type of Internet usage and the period of its use.
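Random-effects pooling of per-study mean differences, as described above, is commonly done with the DerSimonian-Laird estimator. The self-contained sketch below shows that computation on invented study values (not the 23 trials analyzed in the paper); it illustrates the method, not the paper's own software.

```python
# DerSimonian-Laird random-effects pooling of per-study mean weight
# changes. Effect sizes and variances are invented, not the paper's data.
import numpy as np

y = np.array([-1.2, -0.4, 0.1, -0.9, -0.6])   # kg, per-study mean differences
v = np.array([0.20, 0.10, 0.15, 0.30, 0.12])  # per-study variances

w = 1.0 / v                                   # fixed-effect weights
y_fe = (w * y).sum() / w.sum()
Q = (w * (y - y_fe) ** 2).sum()               # Cochran's Q heterogeneity
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1.0 / (v + tau2)                       # random-effects weights
pooled = (w_re * y).sum() / w_re.sum()
se = np.sqrt(1.0 / w_re.sum())
print(f"pooled effect = {pooled:.2f} kg "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}), "
      f"tau^2 = {tau2:.3f}")
```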
Annotating spatio-temporal datasets for meaningful analysis in the Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pebesma, Edzer; Scheider, Simon
2014-05-01
More and more environmental datasets that vary in space and time are available in the Web. This comes with the advantage that data can be used for purposes other than those originally foreseen, but also with the danger that users may apply inappropriate analysis procedures because they lack important assumptions made during the data collection process. In order to guide users towards a meaningful (statistical) analysis of spatio-temporal datasets available in the Web, we developed a Higher-Order-Logic formalism that captures the relevant assumptions in our previous work [1]; it allows proving, in a semi-automated fashion, whether spatial prediction and aggregation are meaningful. In this poster presentation, we present a concept for annotating spatio-temporal datasets available in the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It captures the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, which in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. To allow data producers to add annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes, guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
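The annotations described link a dataset URI to a variable-type concept so that tools can check which analyses are meaningful. Below is a minimal sketch of such a triple with Python's rdflib; the namespace, concept names and dataset URI are hypothetical stand-ins for the authors' OWL pattern (their actual tooling combines OWL with R, not this script).

```python
# Sketch of a Linked Data annotation of the kind described, written with
# rdflib. Namespace, concepts and dataset URI are hypothetical stand-ins.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

MEANING = Namespace("http://example.org/meaningful-st-analysis#")  # hypothetical

g = Graph()
dataset = URIRef("http://example.org/datasets/pm10-2008")          # hypothetical

g.add((dataset, RDF.type, MEANING.Field))   # a continuous field, so
g.add((dataset, MEANING.supports, MEANING.SpatialInterpolation))   # kriging OK
g.add((dataset, MEANING.observedProperty, Literal("PM10 concentration")))

print(g.serialize(format="turtle"))
```

A client (such as the R functions mentioned above) can then query the triple store for a dataset's type before offering, say, interpolation as an option.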
NASA Astrophysics Data System (ADS)
De'nan, Fatimah; Keong, Choong Kok; Hashim, Nor Salwani
2017-10-01
Due to the extensive use of corrugated webs in construction, this paper performs finite element analysis to investigate the effect of web thickness on the bending behaviour of Triangular Web Profile (TRIWP) steel sections. A TRIWP steel section consists of two flanges attached to a triangular-profile web plate. Two categories of TRIWP steel section are analyzed: D×100×6×3 mm and D×75×5×2 mm. It was observed that for the D×100×6×3 mm section (TRIWP1), the deflection about both the minor and major axes increased as the span length increased, while the deflection about the major axis decreased as the depth of the web increased. About the minor axis, the deflection increased for the 3 m and 4 m spans, while the deflection at 4.8 m decreased as the depth of the web increased; however, when the depth of the web exceeded 250 mm, the deflections at 3 m and 4 m increased. For the D×75×5×2 mm section (TRIWP2), the results differed from TRIWP1: the deflection in both the major and minor directions increased with span length and decreased as the depth of the web increased. Overall, the results show that deflection varies strongly with web depth; deeper webs merit consideration because they generally resulted in smaller deflections.
WebDISCO: a web service for distributed cox model learning without patient-level data sharing.
Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila
2015-11-01
The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, a large amount of data is usually required, and data sharing across institutions could be a potential workaround for providing this added power. The authors developed a web service for distributed Cox model learning (WebDISCO), which focuses on proof of concept and algorithm development for federated survival analysis. The sensitive patient-level data are processed locally, and only the less sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
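To make the "exchange only intermediate statistics" idea concrete, the sketch below implements a federated, site-stratified Cox fit: each site computes the gradient and Hessian of its own partial log-likelihood, and only those aggregates cross institutional boundaries. Note the hedge: WebDISCO's published algorithm exchanges risk-set summaries to reproduce the non-stratified model exactly, so this stratified variant is a simplification in the same spirit, not the paper's exact method.

```python
# Federated, site-stratified Cox regression sketch in the spirit of
# WebDISCO: sites share only gradients/Hessians, never patient rows.
# This is a simplification; WebDISCO's exact algorithm differs.
import numpy as np

def local_grad_hess(beta, X, time, event):
    """Gradient/Hessian of one site's Cox partial log-likelihood (Breslow)."""
    p = len(beta)
    g, H = np.zeros(p), np.zeros((p, p))
    eta = np.exp(X @ beta)
    for i in np.where(event == 1)[0]:
        risk = time >= time[i]                 # local risk set
        s0 = eta[risk].sum()
        s1 = (eta[risk, None] * X[risk]).sum(axis=0)
        s2 = (eta[risk, None, None]
              * X[risk, :, None] * X[risk, None, :]).sum(axis=0)
        g += X[i] - s1 / s0
        H -= s2 / s0 - np.outer(s1 / s0, s1 / s0)
    return g, H

def federated_newton(sites, p, iters=20):
    beta = np.zeros(p)
    for _ in range(iters):
        g, H = np.zeros(p), np.zeros((p, p))
        for X, time, event in sites:           # only aggregates leave a site
            gi, Hi = local_grad_hess(beta, X, time, event)
            g, H = g + gi, H + Hi
        beta = beta - np.linalg.solve(H, g)    # global Newton-Raphson update
    return beta

rng = np.random.default_rng(1)
sites = [(rng.normal(size=(80, 2)), rng.exponential(size=80),
          rng.integers(0, 2, size=80)) for _ in range(3)]
print(federated_newton(sites, p=2))
```

Because the stratified partial likelihood is a sum over sites, the summed gradients and Hessians make the global Newton step exact for that variant, mirroring the paper's claim that the distributed and centralized fits coincide.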
NASA Astrophysics Data System (ADS)
Radtke, Jonas; Kebschull, Christopher; Stoll, Enrico
2017-02-01
Recently, several announcements have been published about plans to deploy satellite constellations of several hundred to several thousand rather small objects into Low Earth Orbit (LEO). The purpose of these constellations is to provide worldwide internet coverage, even to the remotest areas. Examples of these mega-constellations are one from SpaceX, announced to comprise about 4000 satellites, the Norwegian STEAM network, reported to contain 4257 satellites, and the OneWeb constellation, one of the smaller constellations with 720 satellites. OneWeb was chosen as the example constellation: of all the announced constellations, OneWeb has by far provided the most information, both regarding constellation design and its plans to counter space debris issues. In this paper, an overview of the planned OneWeb constellation setup is given first. From this description, a mission life-cycle is deduced, splitting the complete orbital lifetime of the satellites into four phases. Then, using ESA-MASTER, flux analyses for each of the mission phases are performed for both single constellation satellites and the complete constellation, and collision probabilities are derived; the focus of this analysis is on catastrophic collisions. The analysis is then varied parametrically for different operational altitudes of the constellation as well as different lifetimes, with different assumptions about the success of post-mission disposal (PMD). Next, the expected mean number of collision avoidance manoeuvres during all active mission phases is estimated using ARES from ESA's DRAMA tool suite, with the same variations as in the flux analysis. Lastly, the characteristics of hypothetical OneWeb satellite fragmentation clouds, calculated using the NASA breakup model, are described, and the impact of collision clouds from OneWeb satellites on the constellation itself is analysed.
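Deriving collision probabilities from a flux follows the standard Poisson relation: with flux F (impacts per m² per year) on cross-section A over time t, the expected number of impacts is FAt and P = 1 − exp(−FAt). The worked sketch below uses invented flux and cross-section values, not ESA-MASTER results for OneWeb; only the 720-satellite count comes from the abstract.

```python
# Collision probability from a debris flux: the standard Poisson relation
# behind flux-based analyses. Flux and area values are invented, not
# ESA-MASTER output for OneWeb.
import math

flux = 1.0e-5      # impacts per m^2 per year from objects > 10 cm (assumed)
area = 4.0         # satellite collision cross-section in m^2 (assumed)
years = 5.0        # assumed operational lifetime of one satellite
n_sats = 720       # OneWeb constellation size (from the abstract)

expected_hits = flux * area * years                 # mean impacts, one sat
p_single = 1.0 - math.exp(-expected_hits)           # P(at least one impact)
p_constellation = 1.0 - math.exp(-expected_hits * n_sats)

print(f"single satellite: {p_single:.2e}; "
      f"whole constellation: {p_constellation:.2%}")
```

The constellation-level figure grows quickly with satellite count, which is why mega-constellation studies of this kind emphasize post-mission disposal success rates.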
WormQTL—public archive and analysis web portal for natural variation data in Caenorhabditis spp
Snoek, L. Basten; Van der Velde, K. Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O.; Poulin, Gino B.; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C.; Kammenga, Jan E.; Swertz, Morris A.
2013-01-01
Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a valuable amount of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype–phenotype linkage and association mapping based on but not limited to R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and R statistic and web service interfaces for bioinformaticians, based on open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers. PMID:23180786
The Web as a Delivery Medium To Enhance Instruction.
ERIC Educational Resources Information Center
Gillani, Bijan
1998-01-01
Discusses how to design and develop an effective Web site to enhance instruction based on a graduate course at California State University at Hayward. Topics include the analysis phase, content organization, site architecture, interface design, testing, and the evaluation process. (LRW)
COMPUTER-AIDED SCIENCE POLICY ANALYSIS AND RESEARCH (WEBCASPAR)
WebCASPAR is a database system containing information about academic science and engineering resources and is available on the World Wide Web. Included in the database is information from several of SRS's academic surveys, plus information from a variety of other sources.
Hydrogen Financial Analysis Scenario Tool (H2FAST). Web Tool User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, B.; Penev, M.; Melaina, M.
The Hydrogen Financial Analysis Scenario Tool (H2FAST) provides a quick and convenient in-depth financial analysis for hydrogen fueling stations. This manual describes how to use the H2FAST web tool, which is one of three H2FAST formats developed by the National Renewable Energy Laboratory (NREL). Although all of the formats are based on the same financial computations and conform to generally accepted accounting principles (FASAB 2014, Investopedia 2014), each format provides a different level of complexity and user interactivity.
dada - a web-based 2D detector analysis tool
NASA Astrophysics Data System (ADS)
Osterhoff, Markus
2017-06-01
The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored by different detectors, in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, from pixel binning and azimuthal integration to raster scan processing. Users commonly interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.
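Because every analysis is addressed by a URI, batch processing reduces to generating query strings. The sketch below does this with Python's urllib; the endpoint and parameter names are invented placeholders, since dada's real parameter scheme is not documented here.

```python
# Scripted batch access in the style described above: encode analysis
# parameters into URIs. Endpoint and parameter names are invented
# placeholders, not dada's real scheme.
from urllib.parse import urlencode

BASE = "https://dada.example.org/analysis"   # hypothetical endpoint

def analysis_uri(scan, detector, binning, mode):
    query = urlencode({"scan": scan, "det": detector,
                       "bin": binning, "mode": mode})
    return f"{BASE}?{query}"

# Generate one URI per raster scan for batch processing.
for scan_id in range(101, 104):
    print(analysis_uri(scan_id, detector="pilatus", binning=4,
                       mode="azimuthal"))
```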
PSAT: A web tool to compare genomic neighborhoods of multiple prokaryotic genomes
Fong, Christine; Rohmer, Laurence; Radey, Matthew; Wasnick, Michael; Brittnacher, Mitchell J
2008-01-01
Background The conservation of gene order among prokaryotic genomes can provide valuable insight into gene function, protein interactions, or events by which genomes have evolved. Although some tools are available for visualizing and comparing the order of genes between genomes of study, few support an efficient and organized analysis between large numbers of genomes. The Prokaryotic Sequence homology Analysis Tool (PSAT) is a web tool for comparing gene neighborhoods among multiple prokaryotic genomes. Results PSAT utilizes a database that is preloaded with gene annotation, BLAST hit results, and gene-clustering scores designed to help identify regions of conserved gene order. Researchers use the PSAT web interface to find a gene of interest in a reference genome and efficiently retrieve the sequence homologs found in other bacterial genomes. The tool generates a graphic of the genomic neighborhood surrounding the selected gene and the corresponding regions for its homologs in each comparison genome. Homologs in each region are color coded to assist users with analyzing gene order among various genomes. In contrast to common comparative analysis methods that filter sequence homolog data based on alignment score cutoffs, PSAT leverages gene context information for homologs, including those with weak alignment scores, enabling a more sensitive analysis. Features for constraining or ordering results are designed to help researchers browse results from large numbers of comparison genomes in an organized manner. PSAT has been demonstrated to be useful for helping to identify gene orthologs and potential functional gene clusters, and detecting genome modifications that may result in loss of function. Conclusion PSAT allows researchers to investigate the order of genes within local genomic neighborhoods of multiple genomes. A PSAT web server for public use is available for performing analyses on a growing set of reference genomes through any web browser with no client side software setup or installation required. Source code is freely available to researchers interested in setting up a local version of PSAT for analysis of genomes not available through the public server. Access to the public web server and instructions for obtaining source code can be found at . PMID:18366802
Hapgood, Jenny; Smucker Barnwell, Sara; McAfee, Tim
2008-01-01
Background Phone-based tobacco cessation programs have been proven effective and widely adopted. Web-based solutions exist; however, the evidence base is not yet well established. Many cessation treatments are commercially available, but few integrate the phone and Web for delivery and no published studies exist for integrated programs. Objective This paper describes a comprehensive integrated phone/Web tobacco cessation program and the characteristics, experience, and outcomes of smokers enrolled in this program from a real-world evaluation. Methods We tracked program utilization (calls completed, Web log-ins), quit status, satisfaction, and demographics of 11,143 participants who enrolled in the Free & Clear Quit For Life Program between May 2006 and October 2007. All participants received up to five proactive phone counseling sessions with Quit Coaches, unlimited access to an interactive website, up to 20 tailored emails, printed Quit Guides, and cessation medication information. The program was designed to encourage use of all program components rather than asking participants to choose which components they wanted to use while quitting. Results We found that participants tended to use phone services more than Web services. On average, participants completed 2-2.5 counseling calls and logged in to the online program 1-2 times. Women were more adherent to the overall program; women utilized Web and phone services significantly (P = .003) more than men. Older smokers (> 26 years) and moderate smokers (15-20 cigarettes/day) utilized services more (P < .001) than younger (< 26 years) and light or heavy smokers. Satisfaction with services was high (92% to 95%) and varied somewhat with Web utilization. Thirty-day quit rates at the 6-month follow-up were 41% using responder analysis and 21% using intent-to-treat analysis. Web utilization was significantly associated with increased call completion and tobacco abstinence rates at the 6-month follow-up evaluation. Conclusions This paper expands our understanding of a real-world treatment program combining two mediums, phone and Web. Greater adherence to the program, as defined by using both the phone and Web components, is associated with higher quit rates. This study has implications for reaching and treating tobacco users with an integrated phone/Web program and offers evidence regarding the effectiveness of integrated cessation programs. PMID:19017583
Ambrosini, Giovanna; Dreos, René; Kumar, Sunil; Bucher, Philipp
2016-11-18
ChIP-seq and related high-throughput chromatin profiling assays generate ever-increasing volumes of highly valuable biological data. To make sense of these data, biologists need versatile, efficient and user-friendly tools for access, visualization and integrative analysis. Here we present the ChIP-Seq command line tools and web server, implementing basic algorithms for ChIP-seq data analysis starting with a read alignment file. The tools are optimized for memory efficiency and speed, thus allowing large data volumes to be processed on inexpensive hardware. The web interface provides access to a large database of public data. The ChIP-Seq tools have a modular and interoperable design in that the output from one application can serve as input to another; complex and innovative tasks can thus be achieved by running several tools in a cascade. The various ChIP-Seq command line tools and web services either complement or compare favorably to related bioinformatics resources in terms of computational efficiency, ease of access to public data and interoperability with other web-based tools. The ChIP-Seq server is accessible at http://ccg.vital-it.ch/chipseq/.
Web-based Factors Affecting Online Purchasing Behaviour
NASA Astrophysics Data System (ADS)
Ariff, Mohd Shoki Md; Sze Yan, Ng; Zakuan, Norhayati; Zaidi Bahari, Ahamad; Jusoh, Ahmad
2013-06-01
The growing use of the internet and of online purchasing among young consumers in Malaysia provides a huge prospect in the e-commerce market, specifically for the B2C segment. In this market, if e-marketers know the web-based factors affecting online buyers' behaviour and the effect of these factors on the behaviour of online consumers, they can develop marketing strategies to convert potential customers into active ones while retaining existing online customers. A review of previous studies of online purchasing behaviour in the B2C market has pointed out that the conceptualization and empirical validation of the online purchasing behaviour of Information and Communication Technology (ICT) literate users, or ICT professionals, in Malaysia has not been clearly addressed. This paper focuses on (i) the web-based factors which online buyers (ICT professionals) keep in mind while shopping online; and (ii) the effect of these web-based factors on online purchasing behaviour. Based on an extensive literature review, a conceptual framework of 24 items in five factors was constructed to determine the web-based factors affecting online purchasing behaviour of ICT professionals. Data from 310 questionnaires, collected from ICT undergraduate students in a public university in Malaysia using a stratified random sampling method, were analysed. Exploratory factor analysis showed that the five factors affecting online purchase behaviour are Information Quality, Fulfilment/Reliability/Customer Service, Website Design, Quick and Details, and Privacy/Security. Multiple regression analysis indicated that Information Quality, Quick and Details, and Privacy/Security positively affect online purchase behaviour. The results provide a usable model for measuring web-based factors affecting buyers' online purchase behaviour in the B2C market, as well as guidance for online shopping companies on the factors that will increase customers' online purchases.
Cost Effectiveness of Interventions to Promote Screening for Colorectal Cancer: A Randomized Trial
Misra, Swati; Chan, Wenyaw; Chang, Yu-Chia; Bartholomew, L. Kay; Greisinger, Anthony; McQueen, Amy; Vernon, Sally W.
2011-01-01
Objectives Screening for colorectal cancer is considered cost effective but is underutilized in the U.S. Information on the efficiency of "tailored interventions" to promote colorectal cancer screening in primary care settings is limited. The paper reports the results of a cost-effectiveness analysis that compared a survey-only control group to a Centers for Disease Control and Prevention (CDC) web-based intervention (Screen for Life) and to a tailored interactive computer-based intervention. Methods A randomized controlled trial of people aged 50 and over was conducted to test the interventions. The sample was 1224 participants 50-70 years of age, recruited from Kelsey-Seybold Clinic, a large multi-specialty clinic in Houston, Texas. Screening status was obtained by medical chart review after a 12-month follow-up period. An "intention to treat" analysis and micro-costing from the patient and provider perspectives were used to estimate the costs and effects. Analysis of statistical uncertainty was conducted using nonparametric bootstrapping. Results The estimated cost of implementing the web-based intervention was $40 per person and the cost of the tailored intervention was $45 per person. The additional cost per person screened for the web-based intervention compared to no intervention was $2602, and the tailored intervention was no more effective than the web-based strategy. Conclusions The tailored intervention was less cost-effective than the web-based intervention for colorectal cancer screening promotion. The web-based intervention was less cost-effective than previous studies of in-reach colorectal cancer screening promotion. Researchers need to continue developing and evaluating the effectiveness and cost-effectiveness of interventions to increase colorectal cancer screening. PMID:21617335
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Astrophysics Data System (ADS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-07-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality, such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available, they will be automatically added to the queue. XAssist is being expanded for use as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide-Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control. This work was funded by AISRP grant NNG06GE59G.
Lynx web services for annotations and systems analysis of multi-gene disorders.
Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia
2014-07-01
Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
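As a sketch of how such REST services are typically consumed, the following queries a Lynx-style annotation service from Python; the endpoint path and parameter names are illustrative assumptions, not the documented Lynx routes:

```python
# Query a Lynx-style REST service for gene annotations; the
# /annotations route and its parameters are illustrative guesses.
import requests

BASE = "http://lynx.ci.uchicago.edu/webservices"  # base URL from the paper

def fetch_annotations(genes, source="GO"):
    """Retrieve functional annotations for a list of gene symbols."""
    resp = requests.get(f"{BASE}/annotations",
                        params={"genes": ",".join(genes), "source": source},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

print(fetch_annotations(["BRCA1", "TP53"]))
```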
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
KAGLVis - On-line 3D Visualisation of Earth-observing-satellite Data
NASA Astrophysics Data System (ADS)
Szuba, Marek; Ameri, Parinaz; Grabowski, Udo; Maatouki, Ahmad; Meyer, Jörg
2015-04-01
One of the goals of the Large-Scale Data Management and Analysis project is to provide a high-performance framework facilitating management of data acquired by Earth-observing satellites such as Envisat. On the client-facing facet of this framework, we strive to provide a visualisation and basic-analysis tool which can be used by scientists with minimal to no knowledge of the underlying infrastructure. Our tool, KAGLVis, is a JavaScript client-server Web application which leverages modern Web technologies to provide three-dimensional visualisation of satellite observables on a wide range of client systems. It takes advantage of the WebGL API to employ locally available GPU power for 3D rendering; this approach has been demonstrated to perform well even on relatively weak hardware such as the integrated graphics chipsets found in modern laptop computers, and with some user-interface tuning could even be usable on embedded devices such as smartphones or tablets. Data are fetched from the database back-end using a ReST API and cached locally, both in memory and using HTML5 Web Storage, to minimise network use. Computations, for instance the calculation of cloud altitude from cloud-index measurements, can be performed on either the client or the server side, depending on configuration. Keywords: satellite data, Envisat, visualisation, 3D graphics, Web application, WebGL, MEAN stack.
Toward Exposing Timing-Based Probing Attacks in Web Applications
Mao, Jian; Chen, Yue; Shi, Futian; Jia, Yaoqi; Liang, Zhenkai
2017-01-01
Web applications have become the foundation of many types of systems, ranging from cloud services to Internet of Things (IoT) systems. Due to the large amount of sensitive data processed by web applications, user privacy emerges as a major concern in web security. Existing protection mechanisms in modern browsers, e.g., the same origin policy, prevent the users’ browsing information on one website from being directly accessed by another website. However, web applications executed in the same browser share the same runtime environment. Such shared states provide side channels for malicious websites to indirectly figure out the information of other origins. Timing is a classic side channel and the root cause of many recent attacks, which rely on the variations in the time taken by the systems to process different inputs. In this paper, we propose an approach to expose the timing-based probing attacks in web applications. It monitors the browser behaviors and identifies anomalous timing behaviors to detect browser probing attacks. We have prototyped our system in the Google Chrome browser and evaluated the effectiveness of our approach by using known probing techniques. We have applied our approach on a large number of top Alexa sites and reported the suspicious behavior patterns with corresponding analysis results. Our theoretical analysis illustrates that the effectiveness of the timing-based probing attacks is dramatically limited by our approach. PMID:28245610
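The core of the proposed defence is recognizing anomalous timing behaviour. As a toy illustration of that idea (not the paper's actual detector), the following flags a burst of measurements that is both unusually frequent and far from a baseline timing distribution; all thresholds and sample values are invented:

```python
# Flag a burst of timing measurements as suspicious when it is both
# repeated many times and statistically far from the baseline.
from statistics import mean, stdev

def is_probing(samples, baseline, z_threshold=3.0, min_repeats=20):
    """Simple z-score test over the mean of a burst of timings (ms)."""
    if len(samples) < min_repeats:
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(samples) - mu) / (sigma or 1e-9)
    return z > z_threshold

baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7]
burst = [3.1] * 25  # e.g., rapid cache-timing probes
print(is_probing(burst, baseline))  # True
```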
Bae, Jeongyee
2013-04-01
The purpose of this project was to develop an international web-based expert system using principles of artificial intelligence and user-centered design for the management of mental health by Korean emigrants. Anyone with a computer connected to the web can access the system. Our design process applied principles of user-centered design across 4 phases: needs assessment, analysis, design/development/testing, and application release. A survey was conducted with 3,235 Korean emigrants, and focus group interviews were also held. The survey and analysis results guided the design of the web-based expert system. With this system, anyone can check their mental health status by themselves using a personal computer. The system analyzes facts based on answers to automated questions and suggests solutions accordingly. A history-tracking mechanism enables monitoring and future analysis. In addition, the system will include intervention programs to promote mental health. The system is interactive and accessible to anyone in the world. It is expected that this management system will contribute to Korean emigrants' mental health promotion and allow researchers and professionals to share information on mental health.
d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna
2009-06-01
We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effects of the mutations. The web-based interface allows querying of the database, and several links are provided in order to guarantee a high degree of integration with other resources already present on the web. Moreover, the architecture of the database and web application is flexible and can easily be adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
Food-web structure of seagrass communities across different spatial scales and human impacts.
Coll, Marta; Schmidt, Allison; Romanuk, Tamara; Lotze, Heike K
2011-01-01
Seagrass beds provide important habitat for a wide range of marine species but are threatened by multiple human impacts in coastal waters. Although seagrass communities have been well-studied in the field, a quantification of their food-web structure and functioning, and how these change across space and human impacts, has been lacking. Motivated by extensive field surveys and literature information, we analyzed the structural features of food webs associated with Zostera marina across 16 study sites in 3 provinces in Atlantic Canada. Our goals were to (i) quantify differences in food-web structure across local and regional scales and human impacts, (ii) assess the robustness of seagrass webs to simulated species loss, and (iii) compare food-web structure in temperate Atlantic seagrass beds with those of other aquatic ecosystems. We constructed individual food webs for each study site and cumulative webs for each province and the entire region based on presence/absence of species, and calculated 16 structural properties for each web. Our results indicate that food-web structure was similar among low impact sites across regions. With increasing human impacts associated with eutrophication, however, food-web structure shows evidence of degradation, as indicated by fewer trophic groups, lower maximum trophic level of the highest top predator, fewer trophic links connecting top to basal species, higher fractions of herbivores and intermediate consumers, and higher number of prey per species. These structural changes translate into functional changes, with impacted sites being less robust to simulated species loss. Temperate Atlantic seagrass webs are similar to a tropical seagrass web, yet differ from other aquatic webs, suggesting consistent food-web characteristics across seagrass ecosystems in different regions. Our study illustrates that food-web structure and functioning of seagrass habitats change with human impacts and that the spatial scale of food-web analysis is critical for determining results. PMID:21811637
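Two of the structural analyses described, connectance and robustness to simulated species loss, can be sketched with a standard graph library. The miniature web and removal scheme below are invented for illustration and are not data from the study:

```python
# Compute directed connectance (L / S^2) and estimate secondary
# extinctions after random species removal in a toy food web.
import random
import networkx as nx

web = nx.DiGraph()  # edge = prey -> predator
web.add_edges_from([("eelgrass", "amphipod"), ("epiphytes", "amphipod"),
                    ("amphipod", "pipefish"), ("amphipod", "crab"),
                    ("pipefish", "cormorant"), ("crab", "cormorant")])

S, L = web.number_of_nodes(), web.number_of_edges()
print("connectance:", L / S**2)

def secondary_extinctions(g, n_removed, trials=1000):
    """Mean number of consumers left with no prey after removing
    n_removed random species (basal species count as self-supported)."""
    basal = {n for n in g if g.in_degree(n) == 0}
    total = 0
    for _ in range(trials):
        h = g.copy()
        h.remove_nodes_from(random.sample(list(g.nodes), n_removed))
        total += sum(1 for n in h if n not in basal and h.in_degree(n) == 0)
    return total / trials

print("mean secondary extinctions:", secondary_extinctions(web, 2))
```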
Carpenter, Suzanne H
2016-01-01
A graduate degree is required of nursing faculty in America. Because of the nursing faculty shortage, web-based graduate nursing programs are being offered to encourage nurses to return to school. The identification of deterrents to participating in these programs is an important step in increasing enrollment; the objective of this study was therefore to identify deterrents to participation in web-based graduate nursing programs. The study used descriptive survey research in Louisiana; participants were two hundred and eighty-one registered nurse members of the Louisiana Nurses' Association. The 54-item, four-point Likert-type interval scale Deterrents to Participation in Web-Based Graduate Nursing Programs Survey Instrument was used. Data were collected over 8 weeks using SurveyMonkey.com to administer the web survey tool to all members of the Louisiana State Nurses' Association. A factor analysis revealed a three-factor solution that explained 55.436% of the total variance in deterrents to participation in web-based graduate nursing programs. The factors were labeled "concerns about quality, cost, and time," "concerns about access to resources: technological and personal," and "concerns about electronic mediated communication." Multiple regression analysis revealed an overall model of three predictors of deterrents to participation in web-based graduate nursing programs: no computer literacy, annual household income between 20,000 and 50,000 dollars, and having graduated from a diploma RN program. This model accounted for 21% of the variance in the deterrents-to-participation scores. Since these three significant predictors were identified, web-based nursing graduate program administrators might consider an outreach to RN diploma graduates to make them aware of available technology support programs that foster participation. Scholarships for lower-income nursing students are recommended, and programs to support computer literacy within the nursing community should be considered. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi
2014-03-01
Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false-positive calls that can be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run on a GS Junior System. The web resource is linked to our own mutation database to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.
PaintOmics 3: a web resource for the pathway analysis and visualization of multi-omics data.
Hernández-de-Diego, Rafael; Tarazona, Sonia; Martínez-Mira, Carlos; Balzano-Nogueira, Leandro; Furió-Tarí, Pedro; Pappas, Georgios J; Conesa, Ana
2018-05-25
The increasing availability of multi-omic platforms poses new challenges to data analysis. Joint visualization of multi-omics data is instrumental in better understanding interconnections across molecular layers and in fully utilizing the multi-omic resources available to make biological discoveries. We present here PaintOmics 3, a web-based resource for the integrated visualization of multiple omic data types onto KEGG pathway diagrams. PaintOmics 3 combines server-end capabilities for data analysis with the potential of modern web resources for data visualization, providing researchers with a powerful framework for interactive exploration of their multi-omics information. Unlike other visualization tools, PaintOmics 3 covers a comprehensive pathway analysis workflow, including automatic feature name/identifier conversion, multi-layered feature matching, pathway enrichment, network analysis, interactive heatmaps, trend charts, and more. It accepts a wide variety of omic types, including transcriptomics, proteomics and metabolomics, as well as region-based approaches such as ATAC-seq or ChIP-seq data. The tool is freely available at www.paintomics.org.
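The abstract does not detail PaintOmics's pathway enrichment step; the standard statistic for this kind of analysis (not necessarily PaintOmics's exact implementation) is the hypergeometric test, sketched below with made-up counts:

```python
# Hypergeometric pathway enrichment: how surprising is it that k of
# the n significant genes fall in a pathway of K genes, given a
# background of N genes? All counts here are invented.
from scipy.stats import hypergeom

N = 20000   # genes in the background
K = 150     # genes annotated to the pathway
n = 500     # significant genes in the experiment
k = 12      # significant genes that fall in the pathway

# P(X >= k): probability of seeing at least k pathway genes by chance
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```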
Low cost silicon solar array project large area silicon sheet task: Silicon web process development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.
1977-01-01
Growth configurations were developed which produced crystals having low residual stress levels. The properties of a 106 mm diameter round crucible were evaluated and it was found that this design had greatly enhanced temperature fluctuations arising from convection in the melt. Thermal modeling efforts were directed to developing finite element models of the 106 mm round crucible and an elongated susceptor/crucible configuration. Also, the thermal model for the heat loss modes from the dendritic web was examined for guidance in reducing the thermal stress in the web. An economic analysis was prepared to evaluate the silicon web process in relation to price goals.
Academic medical center libraries on the Web.
Tannery, N H; Wessel, C B
1998-01-01
Academic medical center libraries are moving towards publishing electronically, utilizing networked technologies, and creating digital libraries. The catalyst for this movement has been the Web. An analysis of academic medical center library Web pages was undertaken to assess the information created and communicated in early 1997. A summary of present uses and suggestions for future applications is provided. A method for evaluating and describing the content of library Web sites was designed. The evaluation included categorizing basic information such as description and access to library services, access to commercial databases, and use of interactive forms. The main goal of the evaluation was to assess original resources produced by these libraries. PMID:9803298
SOAP based web services and their future role in VO projects
NASA Astrophysics Data System (ADS)
Topf, F.; Jacquey, C.; Génot, V.; Cecconi, B.; André, N.; Zhang, T. L.; Kallio, E.; Lammer, H.; Facsko, G.; Stöckler, R.; Khodachenko, M.
2011-10-01
Modern state-of-the-art web services are of crucial importance for the interoperability of the different VO tools existing in the planetary community. SOAP-based web services assure interconnection between different data sources and tools by providing a common protocol for communication. This paper points out a best-practice approach with the Automated Multi-Dataset Analysis Tool (AMDA) developed by CDPP, Toulouse, and the provision of VEX/MAG data from a remote database located at IWF, Graz. Furthermore, the new FP7 project IMPEx will be introduced, with a potential usage example of AMDA web services in conjunction with simulation models.
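For readers unfamiliar with SOAP clients, a minimal sketch in the spirit of the AMDA services described above, using the zeep library; the WSDL location, operation name and parameter IDs are hypothetical placeholders, not the documented AMDA interface:

```python
# Call a SOAP operation through its WSDL description; SOAP hides
# the transport behind an ordinary method call.
from zeep import Client

client = Client("https://amda.example.org/services?wsdl")  # hypothetical WSDL

# Request a time series for one parameter over a time interval.
result = client.service.getParameter(
    parameterID="vex_mag_b",            # hypothetical VEX/MAG parameter
    startTime="2011-01-01T00:00:00",
    stopTime="2011-01-02T00:00:00",
)
print(result)
```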
Choi, Okkyung; Han, SangYong
2007-01-01
Ubiquitous computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and on rule-based services using semantics has not yet been conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based as well as semantics-based search, so that the electronic space and the physical space can be combined into one, making real-time search for web services and the construction of efficient web services possible.
NASA Astrophysics Data System (ADS)
Muehlbauer, J. D.; Doyle, M. W.; Tockner, K.
2011-12-01
This presentation will present the results of a meta-analysis on river-floodplain carbon/energy subsidies. This analysis combines data from the existing body of literature (ca. 100 studies) to determine a "stream signature:" a regression equation that fits the decline in aquatic-derived energy in terrestrial predator food webs as a function of distance from the river. The nature of this decay curve and its implications for river/riparian ecological dynamics will be desrcibed. Variation in this metric due to the influence of stream order, river bank characteristics, and channel geomorphology will be assessed. In addition, the implications of variation in the stream signature for terrestrial aquatic food webs under different geomorphic and anthropogenic scenarios will be discussed.
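One plausible functional form for such a decay regression is an exponential, fitted as below; the data points are synthetic and the exponential form is an assumption, since the abstract does not specify the equation:

```python
# Fit an assumed exponential "stream signature" to synthetic data:
# fraction of aquatic-derived energy vs. distance from the river.
import numpy as np
from scipy.optimize import curve_fit

def signature(d, a, b):
    """Aquatic-derived energy fraction at distance d (m)."""
    return a * np.exp(-b * d)

distance = np.array([0, 5, 10, 25, 50, 100], dtype=float)
aquatic = np.array([0.9, 0.6, 0.45, 0.2, 0.08, 0.03])

(a, b), _ = curve_fit(signature, distance, aquatic, p0=(1.0, 0.05))
print(f"signature: {a:.2f} * exp(-{b:.3f} * distance)")
```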
Web-based applications for building, managing and analysing kinetic models of biological systems.
Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A
2009-01-01
Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.
G2S: a web-service for annotating genomic variants on 3D protein structures.
Wang, Juexin; Sheridan, Robert; Sumer, S Onur; Schultz, Nikolaus; Xu, Dong; Gao, Jianjiong
2018-06-01
Accurately mapping and annotating genomic locations on 3D protein structures is a key step in structure-based analysis of genomic variants detected by recent large-scale sequencing efforts. There are several mapping resources currently available, but none of them provides a web API (Application Programming Interface) that supports programmatic access. We present G2S, a real-time web API that provides automated mapping of genomic variants on 3D protein structures. G2S can align genomic locations of variants, protein locations, or protein sequences to protein structures and retrieve the mapped residues from structures. The G2S API uses a REST-inspired design and can be used by various clients such as web browsers, command terminals, programming languages and other bioinformatics tools for bringing 3D structures into genomic variant analysis. The web server and source code are freely available at https://g2s.genomenexus.org. Contact: g2s@genomenexus.org. Supplementary data are available at Bioinformatics online.
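A sketch of calling a G2S-style API to map one genomic variant onto structures; the endpoint path, parameters and example variant are illustrative guesses rather than the documented G2S routes:

```python
# Map a genomic variant onto 3D protein structures via a
# G2S-style REST call; the /api/alignments route is hypothetical.
import requests

BASE = "https://g2s.genomenexus.org"  # base URL from the paper

def map_variant(chromosome, position, ref, alt):
    resp = requests.get(
        f"{BASE}/api/alignments",  # hypothetical route
        params={"chromosome": chromosome, "position": position,
                "ref": ref, "alt": alt},
        timeout=30)
    resp.raise_for_status()
    return resp.json()  # mapped PDB residues

for hit in map_variant("17", 7577121, "G", "A"):  # illustrative variant
    print(hit)
```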
Flood-Grady, Elizabeth; Paige, Samantha R; Karimipour, Nicki; Harris, Paul A; Cottler, Linda B; Krieger, Janice L
2017-12-01
There is a dearth of literature providing guidance on how to effectively communicate about clinical research (CR). Using the transactional model of communication, a content analysis of the investigator (n=62) and participant (n=18) Web sites of institutions funded through the National Institutes of Health Clinical and Translational Science Award (CTSA) was conducted to identify their strategies (e.g., messages) for communicating about CR participation. CTSAs targeted investigators with CR participation content across the main Web sites, although most CTSAs (n=55; 88.7%) also included CR participation content for participants. In total, 18 CTSAs (29%) hosted participant Web sites. Participant sites included 13 message types about CR participation (e.g., registry enrollment) and 5 additional channels (e.g., email, phone number) to communicate about CR. However, many CTSA participant Web sites excluded information explaining the CR process and offered CR content exclusively in English. CTSAs should identify their target audience and design strategies (e.g., messages, channels) accordingly.
Unpicking the signal thread of the sector web spider Zygiella x-notata
Mortimer, Beth; Holland, Chris; Windmill, James F. C.; Vollrath, Fritz
2015-01-01
Remote sensing allows an animal to extend its morphology with appropriate conductive materials and sensors providing environmental feedback from spatially removed locations. For example, the sector web spider Zygiella x-notata uses a specialized thread as both a structural bridge and signal transmitter to monitor web vibrations from its retreat at the web perimeter. To unravel this model multifunctional system, we investigated Zygiella's signal thread structure with a range of techniques, including tensile testing, laser vibrometry, electron microscopy and behavioural analysis. We found that signal threads varied significantly in the number of filaments; a result of the spider adding a lifeline each time it runs along the bridge. Our mechanical property analysis suggests that while the structure varies, its normalized load does not. We propose that the signal thread represents a complex and fully integrated multifunctional structure where filaments can be added, thus increasing absolute load-bearing capacity while maintaining signal fidelity. We conclude that such structures may serve as inspiration for remote sensing design strategies. PMID:26674191
MAGI: a Node.js web service for fast microRNA-Seq analysis in a GPU infrastructure.
Kim, Jihoon; Levy, Eric; Ferbrache, Alex; Stepanowsky, Petra; Farcas, Claudiu; Wang, Shuang; Brunner, Stefan; Bath, Tyler; Wu, Yuan; Ohno-Machado, Lucila
2014-10-01
MAGI is a web service for fast microRNA-Seq data analysis in a graphics processing unit (GPU) infrastructure. Using just a browser, users have access to results as web reports in just a few hours, a >600% end-to-end performance improvement over the state of the art. MAGI's salient features are (i) transfer of large input files in native FASTA with Qualities (FASTQ) format through drag-and-drop operations, (ii) rapid prediction of microRNA target genes leveraging parallel computing with GPU devices, (iii) all-in-one analytics with novel feature extraction, statistical tests for differential expression and diagnostic plot generation for quality control and (iv) interactive visualization and exploration of results in web reports that are readily available for publication. MAGI relies on the Node.js JavaScript framework, along with NVIDIA CUDA C, PHP: Hypertext Preprocessor (PHP), Perl and R. It is freely available at http://magi.ucsd.edu. © The Author 2014. Published by Oxford University Press.
TOKEN: Trustable Keystroke-Based Authentication for Web-Based Applications on Smartphones
NASA Astrophysics Data System (ADS)
Nauman, Mohammad; Ali, Tamleek
Smartphones are increasingly being used to store personal information as well as to access sensitive data from the Internet and the cloud. Establishment of the identity of a user requesting information from smartphones is a prerequisite for secure systems in such scenarios. In the past, keystroke-based user identification has been successfully deployed on production-level mobile devices to mitigate the risks associated with naïve username/password based authentication. However, these approaches have two major limitations: they are not applicable to services where authentication occurs outside the domain of the mobile device - such as web-based services; and they often overly tax the limited computational capabilities of mobile devices. In this paper, we propose a protocol for keystroke dynamics analysis which allows web-based applications to make use of remote attestation and delegated keystroke analysis. The end result is an efficient keystroke-based user identification mechanism that strengthens traditional password protected services while mitigating the risks of user profiling by collaborating malicious web services.
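Keystroke-dynamics identification generally rests on per-key dwell times (press to release) and flight times (release to next press). The toy extractor below illustrates those two features; the event format and sample values are invented, and the paper's actual feature set and remote-attestation protocol are considerably richer:

```python
# Extract dwell and flight times from a sequence of key events,
# the basic features used to build a typing profile of a user.
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms), in typing order.
    Returns dwell and flight time sequences used for user profiling."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight

sample = [("p", 0, 95), ("a", 160, 240), ("s", 310, 400), ("s", 470, 555)]
print(keystroke_features(sample))
```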
Myrent, Noah; Adams, Douglas E; Griffith, D Todd
2015-02-28
A wind turbine blade's structural dynamic response is simulated and analysed with the goal of characterizing the presence and severity of a shear web disbond. Computer models of a 5 MW offshore utility-scale wind turbine were created to develop effective algorithms for detecting such damage. Through data analysis and with the use of blade measurements, a shear web disbond was quantified according to its length. An aerodynamic sensitivity study was conducted to ensure robustness of the detection algorithms. In all analyses, the blade's flap-wise acceleration and root-pitching moment were the clearest indicators of the presence and severity of a shear web disbond. A combination of blade and non-blade measurements was formulated into a final algorithm for the detection and quantification of the disbond. The probability of detection was 100% for the optimized wind speed ranges in laminar, 30% horizontal shear and 60% horizontal shear conditions. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
[Analysis of the web pages of the intensive care units of Spain].
Navarro-Arnedo, J M
2009-01-01
In order to determine which Intensive Care Units (ICUs) of Spanish hospitals had a web site, to analyze the information they offered, and to establish what information they should offer according to a sample of ICU nurses, a cross-sectional, observational, descriptive study was carried out between January and September 2008. Each ICU website was analyzed for the information available on the unit, its care, teaching and research activity, and nursing. Simultaneously, based on a sample of intensive care nurses, the information that should be contained on an ICU website was determined. The results, expressed in absolute numbers and percentages, showed that 66 of the 292 hospitals with an ICU (22.6%) had a web site; 50.7% of the sites showed the number of beds, 19.7% the activity report, 11.3% the published articles/studies and active research lines, and 9.9% the training courses organized. Fourteen webs (19.7%) displayed images of nurses, but only 1 (1.4%) offered guides to the procedures followed. No web site offered a navigation section for nursing, the e-mail address of the head nurse, the nursing documentation used, or whether a nursing model of its own was followed. It is concluded that only one-fourth of Spanish hospitals with an ICU have a web site; the number of beds was the item offered by most sites, whereas information on care, educational and research activities was very limited, and nursing was practically absent from the web pages of intensive care units.
The effect of top-level domains and advertisements on health web-site credibility.
Walther, Joseph B; Wang, Zuoming; Loh, Tracy
2004-09-03
Concerns over health information on the Internet have generated efforts to enhance credibility markers; yet how users actually assess the credibility of online health information is largely unknown. This study set out to (1) establish a parsimonious and valid questionnaire instrument to measure the credibility of Internet health information by drawing on various previous measures of source, news, and other credibility scales; and (2) identify the effects of Web-site domains and advertising on credibility perceptions. Respondents (N = 156) examined one of 12 Web-site mock-ups and completed credibility scales in a 3 x 2 x 2 between-subjects experimental design. Factor analysis and validity checks were used for item reduction, and analysis of variance was employed for hypothesis testing of the effects of Web-site features. In constructing the credibility instrument, three dimensions of credibility (safety, trustworthiness, and dynamism) were retained, reflecting traditional credibility sub-themes but composed of items from disparate sources. When testing the effect of the presence or absence of advertising on a Web site's credibility, we found that this depends on the site's domain, with a trend for advertisements to have deleterious effects on the credibility of sites with the .org domain but positive effects on sites with .com or .edu domains. Health-information Web-site providers should select domains purposefully when they can, especially if they must accept on-site advertising. Credibility perceptions may not be invariant or stable, but rather are sensitive to topic and context. Future research may employ these findings to compare other forms of health-information delivery with optimal Web-site features.
Mazzocut, Mauro; Truccolo, Ivana; Antonini, Marialuisa; Rinaldi, Fabio; Omero, Paolo; Ferrarin, Emanuela; De Paoli, Paolo; Tasso, Carlo
2016-06-16
The use of complementary and alternative medicine (CAM) among cancer patients is widespread and mostly self-administered. Today, one of the most relevant topics is the nondisclosure of CAM use to doctors. This general lack of communication exposes patients to dangerous behaviors and to less reliable information channels, such as the Web. The Italian context scarcely differs from this trend. Today, we are able to mine and systematically analyze the unstructured information available on the Web, to gain insight into people's opinions, beliefs, and rumors concerning health topics. Our aim was to analyze Italian Web conversations about CAM, identify the most relevant Web sources, therapies, and diseases, and measure the related sentiment. Data were collected using the Web Intelligence tool ifMONITOR. The workflow consisted of 6 phases: (1) definition of eligibility criteria for the ifMONITOR search profile; (2) creation of a CAM terminology database; (3) generic Web search and automatic filtering, whose results were manually revised to refine the search profile and stored in the ifMONITOR database; (4) automatic classification using the CAM database terms; (5) selection of the final sample and manual sentiment analysis using a 1-5 score range; (6) manual indexing of the Web sources and CAM therapy types retrieved. Descriptive univariate statistics were computed for each item: absolute frequency, percentage, central tendency (mean sentiment score [MSS]), and variability (standard deviation, σ). Overall, 212 Web sources, 423 Web documents, and 868 opinions were retrieved. The overall sentiment measured tends toward a good score (3.6 of 5). The standard deviation analysis (σ≥1) revealed quite high polarization in the opinions expressed in the conversations. In total, 126 of 212 (59.4%) Web sources retrieved were not health related; Facebook (89; 21%) and Yahoo Answers (41; 9.7%) were the most relevant. In total, 94 CAM therapies were retrieved. Most belong to the "biologically based therapies or nutrition" category: 339 of 868 opinions (39.1%), showing an MSS of 3.9 (σ=0.83). Within nutrition, "diets" collected 154 opinions (18.4%) with an MSS of 3.8 (σ=0.87); "food as CAM" collected 112 opinions (12.8%) with an MSS of 4 (σ=0.68). Excluding diets and food, the most discussed CAM therapy was the controversial Italian "Di Bella multitherapy", with 102 opinions (11.8%) and an MSS of 3.4 (σ=1.21). Breast cancer was the most mentioned disease: 81 of 868 opinions. Conversations about CAM and cancer are ubiquitous. Biologically based therapies attract great attention and are perceived as harmless and useful, underrating all the risks related to dangerous interactions or malnutrition. Our results can help doctors become aware of the implications of these beliefs for clinical practice. Exploiting Web conversations could be a strategy for gaining insight into people's perspectives on other controversial topics.
Berg, Marie; Linden, Karolina; Adolfsson, Annsofie; Sparud Lundin, Carina; Ranerup, Agneta
2018-05-02
Numerous Web-based interventions have been implemented to promote health and health-related behaviors in persons with chronic conditions. Using randomized controlled trials to evaluate such interventions creates a range of challenges, which in turn can influence the study outcome, so applying a critical perspective when evaluating Web-based health interventions is important. The objective of this study was to critically analyze and discuss the challenges of conducting a Web-based health intervention as a randomized controlled trial. The MODIAB-Web study was critically examined using an exploratory case study methodology and the framework for analysis offered by the Persuasive Systems Design model. Focus was on technology, study design, and usage of the Web-based support, with particular attention to the forum for peer support. Descriptive statistics and qualitative content analysis were used. The persuasive content and technological elements in the design of the randomized controlled trial included all four categories of the Persuasive Systems Design model, but not all design principles were implemented. The study duration was extended to a period of four and a half years. Of 81 active participants in the intervention group, a maximum of 36 women were simultaneously active. User adherence varied greatly, with a median of 91 individual log-ins. The forum for peer support was used by 63 participants. Although only about one-third of the participants interacted in the forum, there was a fairly rich exchange of experiences and advice between them. Thus, adherence in terms of social interactions was negatively affected by limited active participation due to a prolonged recruitment process and randomization effects. Lessons learned from this critical analysis are that technology and study design matter and may mutually influence each other. In Web-based interventions, the use of design theories enables utilization of the full potential of the technology and promotes adherence. The randomization element of a randomized controlled trial design can become a barrier to achieving a critical mass of user interactions in Web-based interventions, especially when social support is included. For extended study periods, the technology used may need to be adapted in line with newly available technical options to avoid the risk of becoming outdated in the user realm, which in turn might jeopardize study validity in terms of randomized controlled trial designs. On the basis of lessons learned in this randomized controlled trial, we give recommendations to consider when designing and evaluating Web-based health interventions. ©Marie Berg, Karolina Linden, Annsofie Adolfsson, Carina Sparud Lundin, Agneta Ranerup. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 02.05.2018.
Googling DNA sequences on the World Wide Web.
Hajibabaei, Mehrdad; Singer, Gregory A C
2009-11-10
New web-based technologies provide an excellent opportunity for sharing and accessing information, using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it for searching species-specific genomic sequences (DNA barcodes), using popular web-based methods such as Google. Our alignment-independent, character-based algorithm divides a sequence library (DNA barcodes) and the query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages, with pre- and post-processing software providing customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data and provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
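The word decomposition at the heart of the method can be sketched with a simple fixed-length (k-mer) split, here assumed to be non-overlapping; the paper's exact word scheme may differ:

```python
# Split a DNA sequence into fixed-length "words" that a
# conventional text search engine can index and query.
def to_words(sequence, k=10):
    """Divide a DNA sequence into non-overlapping k-letter words."""
    seq = sequence.upper().replace("\n", "")
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

barcode = "ACCTTGATGCTTAGCCGGATACCTGGTTCAAT"
print(" ".join(to_words(barcode)))
# The resulting word strings can then be indexed and searched with
# an ordinary engine such as Google Desktop Search.
```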
Prospective analysis of the quality of Spanish health information web sites after 3 years.
Conesa-Fuentes, Maria C; Hernandez-Morante, Juan J
2016-12-01
Although the Internet has become an essential source of health information, our study conducted 3 years ago provided evidence of the low quality of Spanish health web sites. The objective of the present study was to evaluate the quality of Spanish health information web sites now, and to compare these results with those obtained 3 years ago. For the original study, the most visited health information web sites were selected through the PageRank® (Google®) system. The present study evaluated the quality of the same web sites from February to May 2013, using the method developed by Bermúdez-Tamayo et al. and HONCode® criteria. The mean quality of the selected web sites was low and has deteriorated since the previous evaluation, especially in regional health services and institutions' web sites. The quality of private web sites remained broadly similar. Compliance with privacy and update criteria also improved in the intervening period. The results indicate that, even in the case of health web sites, design or appearance is more relevant to developers than quality of information. It is recommended that responsible institutions should increase their efforts to eliminate low-quality health information that may further contribute to health problems.
Customizable scientific web-portal for DIII-D nuclear fusion experiment
NASA Astrophysics Data System (ADS)
Abla, G.; Kim, E. N.; Schissel, D. P.
2010-04-01
Increasing utilization of the Internet and convenient web technologies has made the web portal a major application interface for remote participation in, and control of, scientific instruments. While web portals have provided a centralized gateway for multiple computational services, the amount of visual output is often overwhelming due to the high volume of data generated by complex scientific instruments and experiments. Since each scientist may have different priorities and areas of interest in an experiment, filtering and organizing information based on the individual user's needs can increase the usability and efficiency of a web portal. DIII-D is the largest magnetic nuclear fusion device in the US, and a web portal has been designed to support the experimental activities of DIII-D researchers worldwide. It offers a customizable interface with personalized page layouts and a list of services from which users can select, so that each user can create a unique working environment to fit his or her own needs and interests. Customizable services include real-time experiment status monitoring, diagnostic data access, and interactive data analysis and visualization. The web portal also supports interactive collaboration by providing a collaborative logbook and online instant announcement services. The DIII-D web portal is built on a multi-tier software architecture and employs Web 2.0 technologies and tools, such as AJAX and Django, to provide a highly interactive and customizable user interface.
Turk, Tarek; Elhady, Mohamed Tamer; Rashed, Sherwet; Abdelkhalek, Mariam; Nasef, Somia Ahmed; Khallaf, Ashraf Mohamed; Mohammed, Abdelrahman Tarek; Attia, Andrew Wassef; Adhikari, Purushottam; Amin, Mohamed Alsabbahi; Hirayama, Kenji; Huy, Nguyen Tien
2018-01-01
Several influential aspects of survey research have been under-investigated and there is a lack of guidance on reporting survey studies, especially web-based projects. In this review, we aim to investigate the reporting practices and quality of both web- and non-web-based survey studies to enhance the quality of reporting medical evidence that is derived from survey studies and to maximize the efficiency of its consumption. Reporting practices and quality of 100 random web- and 100 random non-web-based articles published from 2004 to 2016 were assessed using the SUrvey Reporting GuidelinE (SURGE). The CHERRIES guideline was also used to assess the reporting quality of Web-based studies. Our results revealed a potential gap in the reporting of many necessary checklist items in both web-based and non-web-based survey studies including development, description and testing of the questionnaire, the advertisement and administration of the questionnaire, sample representativeness and response rates, incentives, informed consent, and methods of statistical analysis. Our findings confirm the presence of major discrepancies in reporting results of survey-based studies. This can be attributed to the lack of availability of updated universal checklists for quality of reporting standards. We have summarized our findings in a table that may serve as a roadmap for future guidelines and checklists, which will hopefully include all types and all aspects of survey research.
Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.
Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus
2011-01-01
With large-scale genomic data becoming the norm in biological studies, storing, integrating, viewing and searching such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes, representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google Web Toolkit), the MVC (Model-View-Controller) web framework and an object-relational mapper, we have created a web-based application (web app) for the database that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community, and the whole framework serves as an example of how biological data, especially GWAS results, can be presented and accessed through the web. Finally, we illustrate the potential to gain new insights through the web app with two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/
Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat
2016-11-28
At present, coding sequences (CDS) continue to be discovered, and ever larger CDS data sets are being released. Approaches and related tools have been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated, automated Taverna workflow for phylogenetic tree inference that uses public web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format, and the workflow supports bootstrapping with 1,000 to 20,000 replicates. It performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2; both SOAP and Java Web Services (JWS) provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. Its execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. The proposed integrated, automated workflow will be beneficial to bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.
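For readers unfamiliar with the bootstrapping step, the following minimal sketch shows the column resampling that underlies each replicate; it is a generic illustration on toy data, not the workflow's own code.

```python
# Minimal sketch of bootstrap replication over alignment columns, the
# resampling step behind the 1,000-20,000 replicates mentioned above.
import random

def bootstrap_alignment(alignment: dict, rng: random.Random) -> dict:
    """Resample alignment columns with replacement to build one replicate."""
    n_cols = len(next(iter(alignment.values())))
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return {name: "".join(seq[c] for c in cols) for name, seq in alignment.items()}

alignment = {"taxon1": "ACGTAC", "taxon2": "ACGTTC", "taxon3": "ACCTTC"}
rng = random.Random(42)
replicates = [bootstrap_alignment(alignment, rng) for _ in range(1000)]
# Each replicate would then be fed to a tree-inference program (e.g., one
# of the PHYLIP family) and the resulting trees summarized into a
# consensus tree with support values.
print(replicates[0])
```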
Enhancing UCSF Chimera through web services.
Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E
2014-07-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
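The job-submission pattern described (submit, monitor, retrieve) generally reduces to a loop like the following sketch; the endpoint URLs and JSON fields are hypothetical placeholders, not the actual Opal interface.

```python
# Generic submit/poll/retrieve pattern for a remote job service, as the
# Chimera/Opal description above outlines. BASE and the JSON field names
# are hypothetical placeholders, not the real Opal API.
import time
import requests

BASE = "https://example.org/jobservice"  # hypothetical service root

def run_job(payload: dict, poll_seconds: float = 5.0) -> bytes:
    job = requests.post(f"{BASE}/jobs", json=payload, timeout=30).json()
    job_id = job["id"]
    while True:  # poll until the service reports completion
        status = requests.get(f"{BASE}/jobs/{job_id}", timeout=30).json()
        if status["state"] in ("FINISHED", "FAILED"):
            break
        time.sleep(poll_seconds)
    if status["state"] == "FAILED":
        raise RuntimeError(f"job {job_id} failed")
    return requests.get(f"{BASE}/jobs/{job_id}/output", timeout=30).content

# Example (hypothetical tool name and parameters):
# result = run_job({"tool": "blast", "sequence": "ACGT..."})
```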
NASA Astrophysics Data System (ADS)
Li, Yunkai; Zhang, Yuying; Xu, Jun; Zhang, Shuo
2018-03-01
Food web structures are well known to vary widely among ecosystems, yet many food web studies of lakes have attempted to characterize the overall food web structure while largely ignoring internal spatial and environmental variation. In this study, we hypothesize that there is a high degree of spatial heterogeneity within an ecosystem and that such heterogeneity may lead to strong variations in environmental conditions and resource availability, in turn resulting in different trophic pathways. Stable carbon and nitrogen isotopes were measured across the whole food web to describe its structure in different sub-basins of Taihu Lake, a large eutrophic freshwater lake that has been intensively managed and strongly influenced by human activities for more than 50 years. The results show significant isotopic differences between basins with different environmental characteristics. Such differences likely result from isotopic baseline differences combined with a shift in food web structure, both of which are related to local spatial heterogeneity in nutrient loading. Such variation should be explicitly considered in future food web studies and in ecosystem-based management of this lake.
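For context, a common way to compare consumers across basins with different isotopic baselines is the baseline-corrected trophic-position formula (after Post 2002); the sketch below uses illustrative values and is not necessarily the authors' exact procedure.

```python
# Baseline-corrected estimate of trophic position from nitrogen isotopes
# (after Post 2002). The delta-15N values below are illustrative only.

def trophic_position(d15n_consumer: float, d15n_baseline: float,
                     baseline_level: float = 2.0, enrichment: float = 3.4) -> float:
    """Trophic position relative to a baseline organism (e.g., a mussel)."""
    return baseline_level + (d15n_consumer - d15n_baseline) / enrichment

# The same consumer value maps to different trophic positions when the
# local isotopic baseline differs between sub-basins:
print(trophic_position(14.0, d15n_baseline=6.0))  # ~4.35
print(trophic_position(14.0, d15n_baseline=9.0))  # ~3.47
```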
Trophic groups and modules: two levels of group detection in food webs
Gauzens, Benoit; Thébault, Elisa; Lacroix, Gérard; Legendre, Stéphane
2015-01-01
Within food webs, species can be partitioned into groups according to various criteria. Two notions have received particular attention: trophic groups (TGs), which have been used for decades in the ecological literature, and more recently, modules. The relationship between these two group concepts remains unknown in empirical food webs. While recent developments in network theory have led to efficient methods for detecting modules in food webs, the determination of TGs (groups of species that are functionally similar) is largely based on subjective expert knowledge. We develop a novel algorithm for TG detection. We apply this method to empirical food webs and show that aggregation into TGs allows for the simplification of food webs while preserving their information content. Furthermore, we reveal a two-level hierarchical structure where modules partition food webs into large bottom–top trophic pathways, whereas TGs further partition these pathways into groups of species with similar trophic connections. This provides new perspectives for the study of dynamical and functional consequences of food-web structure, bridging topological and dynamical analysis. TGs have a clear ecological meaning and are found to provide a trade-off between network complexity and information loss. PMID:25878127
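As a rough illustration of module detection (though not of the authors' trophic-group algorithm), modularity maximization on a small made-up web can be run with standard network tools; note that collapsing a directed food web to an undirected graph is a simplification.

```python
# Sketch of module detection on a tiny invented food web using modularity
# maximization; species names and links are illustrative assumptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

web = nx.Graph()
web.add_edges_from([
    ("alga", "snail"), ("alga", "mayfly"), ("snail", "fish1"),
    ("mayfly", "fish1"), ("detritus", "worm"), ("worm", "fish2"),
    ("detritus", "shrimp"), ("shrimp", "fish2"),
])
# Partition the web into modules (groups with dense internal linkage).
modules = greedy_modularity_communities(web)
for i, module in enumerate(modules):
    print(i, sorted(module))
```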
Sauter, Lisa; Jablonski, Lisa; Sander, Uwe; Taheri-Zadeh, Fatemeh
2017-01-01
Background: Physician-rating websites (PRWs) may lead to quality improvements if they enable and establish peer-to-peer communication between patients and physicians. Yet we know little about whether and how physicians respond on the Web to patient ratings. Objective: The objective of this study was to describe trends in physicians' Web-based responses to patient ratings over time, to identify which physician characteristics influence Web-based responses, and to examine the topics physicians are likely to respond to. Methods: We analyzed physician responses to more than 1 million patient ratings displayed on the German PRW jameda from 2010 to 2015. Quantitative analysis comprised chi-square analyses and the Mann-Whitney U test. Quantitative content analysis techniques were applied to determine the topics physicians respond to, based on a randomly selected sample of 600 Web-based ratings and the corresponding physician responses. Results: Overall, physicians responded to 1.58% (16,640/1,052,347) of all Web-based ratings, with an increasing trend over time from 0.70% (157/22,355) in 2010 to 1.88% (6377/339,919) in 2015. Web-based ratings that were responded to had significantly worse rating results than ratings that were not responded to (2.15 vs 1.74, P<.001). Physicians who respond on the Web to patient ratings differ significantly from nonresponders regarding several characteristics, such as gender and patient recommendation results (P<.001 each). Regarding scaled-survey rating elements, physicians were most likely to respond to the waiting time within the practice (19.4%, 99/509) and the time spent with the patient (18.3%, 110/600). Almost one-third of the topics in narrative comments were answered by the physicians (30.66%, 382/1246). Conclusions: So far, only a minority of physicians have taken the chance to respond on the Web to patient ratings. This is likely because of (1) the low awareness of PRWs among physicians, (2) the fact that only a few PRWs enable physicians to respond on the Web to patient ratings, and (3) the lack of an active moderator to establish peer-to-peer communication. PRW providers should foster more frequent communication between patient and physician and encourage physicians to respond on the Web to patient ratings. Further research is needed to learn more about physicians' motivation to respond or not respond to Web-based patient ratings. PMID:28747292
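The two statistical tests named in the Methods can be reproduced on hypothetical data as follows; the counts and scores are invented stand-ins, not the study's data.

```python
# The two tests named above, run on made-up numbers for illustration.
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-square: responders vs nonresponders by gender (hypothetical counts)
table = [[120, 80],   # male: responded, did not respond
         [ 90, 110]]  # female: responded, did not respond
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")

# Mann-Whitney U: rating scores of answered vs unanswered ratings
answered   = [2.4, 2.0, 2.2, 1.9, 2.6]
unanswered = [1.6, 1.8, 1.7, 1.9, 1.5]
u, p = mannwhitneyu(answered, unanswered, alternative="two-sided")
print(f"U={u:.1f}, p={p:.4f}")
```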
Artieta-Pinedo, Isabel; Paz-Pascual, Carmen; Grandes, Gonzalo; Villanueva, Gemma
2018-03-01
The aim of this study was to evaluate the quality of web pages found by women when carrying out an exploratory search concerning pregnancy, childbirth, the postpartum period and breastfeeding. It is a descriptive study of the first 25 web pages returned by the search engines Google, Yahoo and Bing, in October 2014 in the Basque Country (Spain), when entering eight Spanish words and seven English words related to pregnancy, childbirth, the postpartum period, breastfeeding and newborns. Web pages aimed at healthcare professionals, and forums, were excluded. Reliability was evaluated using the LIDA questionnaire, and the contents of the highest-scoring web pages were then described. A total of 126 web pages were found using the key search words; of these, 14 scored in the top 30% for reliability. The content analysis found that the mean score for "references to the source of the information" was 3.4 (SD: 2.17), that for "up-to-date" was 4.30 (SD: 1.97), and that for "conflict of interest statement" was 5.90 (SD: 2.16). The mean score for web pages created by universities and official bodies was 13.64 (SD: 4.47), whereas the mean for those created by private bodies was 11.23 (SD: 4.51) (F(1,124) = 5.27, p = 0.02). The content analysis of these web pages found that the most commonly discussed topic was breastfeeding, followed by self-care during pregnancy and the onset of childbirth. In this study, web pages from established healthcare or academic institutions were found to contain the most reliable information. The significant number of web pages with poor-quality information indicates the need for healthcare professionals to guide women when sourcing information online. As the origin of a web page has a direct effect on its reliability, the involvement of healthcare professionals in the use, counselling and generation of new technologies as an intervention tool is increasingly essential. Copyright © 2017 Elsevier Ltd. All rights reserved.
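The reported institutional comparison, F(1,124) = 5.27, corresponds to a one-way ANOVA across two groups of LIDA scores; a sketch with fabricated scores:

```python
# One-way ANOVA across two groups of reliability scores, the form of the
# F-test reported above. The scores below are fabricated stand-ins.
from scipy.stats import f_oneway

institutional = [14.0, 12.5, 16.0, 13.2, 15.1]  # universities/official bodies
private       = [11.0, 10.2, 12.4,  9.8, 11.9]  # private bodies
f_stat, p_value = f_oneway(institutional, private)
print(f"F={f_stat:.2f}, p={p_value:.3f}")
```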
Where are the parasites in food webs?
2012-01-01
This review explores some of the reasons why food webs seem to contain relatively few parasite species when compared to the full diversity of free living species in the system. At present, there are few coherent food web theories to guide scientific studies on parasites, and this review posits that the methods, directions and questions in the field of food web ecology are not always congruent with parasitological inquiry. For example, topological analysis (the primary tool in food web studies) focuses on only one of six important steps in trematode life cycles, each of which requires a stable community dynamic to evolve. In addition, these transmission strategies may also utilize pathways within the food web that are not considered in traditional food web investigations. It is asserted that more effort must be focused on parasite-centric models, and a central theme is that many different approaches will be required. One promising approach is the old energetic perspective, which considers energy as the critical resource for all organisms, and the currency of all food web interactions. From the parasitological point of view, energy can be used to characterize the roles of parasites at all levels in the food web, from individuals to populations to community. The literature on parasite energetics in food webs is very sparse, but the evidence suggests that parasite species richness is low in food webs because parasites are limited by the quantity of energy available to their unique lifestyles. PMID:23092160
Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.; Sit, M. A.
2016-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. This growing collection of environmental data has increased demand for applications capable of managing and processing large-scale, high-resolution data sets. Given the amount and resolution of the data provided, one of the challenging tasks in organizing and customizing hydrological data sets is delineation of watersheds on demand. Watershed delineation is the process of creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analyzing portions of a study area. Although many GIS tools and software packages for watershed analysis are available on desktop systems, web-based and client-side techniques are needed to create a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphics Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. Web-based, real-time watershed segmentation can be helpful for decision-makers and interested stakeholders while eliminating the need to install complex software packages and deal with large-scale data sets. Utilizing client-side hardware resources also reduces the need for servers, owing to the approach's crowdsourced nature. Our goal for future work is to extend the presented approaches to other hydrologic analysis methods, such as rain flow tracking.
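The core of grid-based watershed delineation, independent of the client- or server-side implementations discussed, is an upstream traversal over a flow-direction grid; a plain-Python sketch on a toy D8 grid (not the project's GPU code):

```python
# Given a D8 flow-direction grid (each cell stores the neighbor it drains
# to), collect every cell that ultimately drains to a chosen outlet.
# The tiny 2x3 grid below is an illustrative assumption.
from collections import deque

# Flow direction encoded as (drow, dcol) offsets; None marks the outlet.
flow = {
    (0, 0): (0, 1),   # drains east to (0,1)
    (0, 1): (1, 0),   # drains south to (1,1)
    (0, 2): (1, -1),  # drains southwest to (1,1)
    (1, 0): (0, 1),   # drains east to (1,1)
    (1, 1): None,     # outlet
    (1, 2): (0, -1),  # drains west to (1,1)
}

def delineate(outlet):
    """Breadth-first walk upstream from the outlet over the flow grid."""
    upstream = {cell: [] for cell in flow}           # reverse the graph
    for cell, d in flow.items():
        if d is not None:
            downstream = (cell[0] + d[0], cell[1] + d[1])
            upstream[downstream].append(cell)
    basin, queue = {outlet}, deque([outlet])
    while queue:
        for contributor in upstream[queue.popleft()]:
            if contributor not in basin:
                basin.add(contributor)
                queue.append(contributor)
    return basin

print(sorted(delineate((1, 1))))  # every cell drains to (1,1) here
```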
msBiodat analysis tool, big data analysis for high-throughput experiments.
Muñoz-Torres, Pau M; Rokć, Filip; Belužic, Robert; Grbeša, Ivana; Vugrek, Oliver
2016-01-01
Mass spectrometry (MS) encompasses a group of high-throughput techniques used to increase knowledge about biomolecules. These techniques produce large amounts of data, typically presented as lists of hundreds or thousands of proteins. Filtering those data efficiently is the first step in extracting biologically relevant information, and the filtering can be enriched by merging previous data with data obtained from public databases, resulting in an accurate list of proteins that meet the predetermined conditions. In this article we present msBiodat Analysis Tool, a web-based application designed to bring big data analysis approaches to proteomics. With this tool, researchers can easily select the most relevant information from their MS experiments using an easy-to-use web interface. An interesting feature of the msBiodat Analysis Tool is the ability to select proteins by their Gene Ontology annotation using their Gene ID, Ensembl or UniProt codes. The msBiodat Analysis Tool is a web-based application that allows researchers with any level of programming experience to take advantage of efficient database querying. Its versatility and user-friendly interface make it easy to perform fast and accurate data screening using complex queries. Once the analysis is finished, the result is delivered by e-mail. The msBiodat Analysis Tool is freely available at http://msbiodata.irb.hr.
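The kind of merge-and-filter step the tool automates can be approximated as follows; all identifiers and annotations here are invented placeholders.

```python
# Sketch of merging an MS hit list with public annotations, then selecting
# proteins by GO term. Accessions, scores, and GO terms are placeholders.
import pandas as pd

hits = pd.DataFrame({
    "uniprot": ["P12345", "Q67890", "A11111"],
    "score":   [8.2, 3.1, 6.7],
})
annotations = pd.DataFrame({
    "uniprot": ["P12345", "A11111"],
    "go_term": ["GO:0006096", "GO:0005739"],  # glycolysis, mitochondrion
})
merged = hits.merge(annotations, on="uniprot", how="left")
selected = merged.query("score > 5 and go_term == 'GO:0005739'")
print(selected)
```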
CoP Sensing Framework on Web-Based Environment
NASA Astrophysics Data System (ADS)
Mustapha, S. M. F. D. Syed
Web technologies and Web applications have shown similarly high growth rates in terms of daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also reached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for people on the move, through PDAs or mobile phones connected via Wi-Fi, HSDPA, or other communication protocols. These two phenomena drive the need to build Web-based systems as supporting tools for many everyday activities. Accordingly, one focus of research has been the implementation challenges of building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in capturing the presence of learning activities. The chapter describes the characteristics of the CoP, to expose the inherent intricacies of modeling it in a Web-based environment; the evidence of CoP activity that needs to be traced automatically, in a manner that keeps the evidence-capturing process unobtrusive; and the technologies needed for a full adoption of a Web-based support system for the community learning framework.
Net Venn - An integrated network analysis web platform for gene lists
USDA-ARS?s Scientific Manuscript database
Many lists containing biological identifiers such as gene lists have been generated in various genomics projects. Identifying the overlap among gene lists can enable us to understand the similarities and differences between the datasets. Here, we present an interactome network-based web application...
SWMPrats.net: A Web-Based Resource for Exploring SWMP Data
SWMPrats.net is a web-based resource that provides accessible approaches to using SWMP data. The website includes a user forum with instructional ‘Plots of the Month’; links to workshop content; and a description of the SWMPr data analysis package for R. Interactive...
Hydrogen Financial Analysis Scenario Tool (H2FAST) Documentation
Documentation is available for both the web and spreadsheet versions of H2FAST: the H2FAST Web Tool User's Manual and the H2FAST Spreadsheet Tool User's Manual (draft). Questions or feedback about H2FAST may be sent to H2FAST@nrel.gov.
Effects of light reduction on food webs and associated ecosystem services of Yaquina Bay
Reduced water clarity can affect estuarine primary production but little is known of its subsequent effects to consumer guilds or ecosystem services. We investigated those effects using inverse analysis of modeled food webs of the lower (polyhaline) and upper (mesohaline) reache...
NASA Technical Reports Server (NTRS)
Peterson, James P.; Bruce, Walter E., Jr.
1959-01-01
The results of bending tests on six multiweb beams of optimum weight-strength design are presented. The internal structure of the beams consisted of various combinations of two types of full-depth solid webs and a post-stringer web. The observed structural behavior, buckling load, and failing load of the beams are compared with results obtained by the use of existing methods of analysis and found to be quite predictable.
2016-11-18
Researchers from the U.S. Army Research Institute of Environmental Medicine (USARIEM) designed and conducted a total of three web-administered job... USARIEM and Human Performance Systems, Inc. designed three web-administered job analysis questionnaires (JAQs) to be completed by Army cavalry scouts and... responses from Soldiers in many Army MOSs. This may have affected the quality of some item responses. 3) This survey was web-administered, and...
NASA Astrophysics Data System (ADS)
Cole, M.
2017-12-01
Advanced technology plays a key role in enabling the future Earth-observing missions needed for global monitoring and climate research. Rapid progress over the past decade, expected to continue in the coming decades, has reduced the size of some satellites while increasing the volume of data and the required pace of integration and analysis. Sensor web developments offer useful parallels to constellations of smallsats. Reviewing current advances in sensor webs alongside the requirements for constellations will improve planning, operations, and data management for future architectures of multiple satellites with a common mission goal.
Representation of the serial killer on the Italian Internet.
Villano, P; Bastianoni, P; Melotti, G
2001-10-01
The representation of serial killers was examined through an analysis of 317 Web pages in the Italian language, to study how the psychological profiles of serial killers are described on the Italian Internet. Correspondence analysis of the content of these Web pages shows that in Italy the serial killer is associated with words such as "monster" and "horror," which suggest and imply psychological perversion and aberrant acts. These traits are peculiar to the Italian scenario.
miRanalyzer: a microRNA detection and analysis tool for next-generation sequencing experiments.
Hackenberg, Michael; Sturm, Martin; Langenberger, David; Falcón-Pérez, Juan Manuel; Aransay, Ana M
2009-07-01
Next-generation sequencing now allows the sequencing of small RNA molecules and the estimation of their expression levels. Consequently, there will be high demand for bioinformatics tools to cope with the several gigabytes of sequence data generated in each deep-sequencing experiment. In this context, we developed miRanalyzer, a web server tool for the analysis of deep-sequencing experiments on small RNAs. The tool requires a simple input file containing a list of unique reads and their copy numbers (expression levels). Using these data, miRanalyzer (i) detects all known microRNA sequences annotated in miRBase, (ii) finds all perfect matches against other libraries of transcribed sequences, and (iii) predicts new microRNAs. The prediction of new microRNAs is an especially important point, as there are many species with very few known microRNAs; we therefore implemented a highly accurate machine learning algorithm for the prediction of new microRNAs that reaches AUC values of 97.9% and recall values of up to 75% on unseen data. The web tool summarizes all the described steps in a single output page, which provides a comprehensive overview of the analysis, with links to more detailed output pages for each analysis module. miRanalyzer is available at http://web.bioinformatics.cicbiogune.es/microRNA/.
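The required input (unique reads with copy numbers) can be produced from raw reads with a simple counting pass; a minimal sketch with made-up reads:

```python
# Collapse raw small-RNA reads into the "unique read + copy number" input
# format described above. The read sequences are illustrative only.
from collections import Counter

raw_reads = [
    "TAGCTTATCAGACTGATGTTGA",  # reads may appear many times
    "TAGCTTATCAGACTGATGTTGA",
    "AACCGGTTAACCGGTTAACCGG",
]
counts = Counter(raw_reads)
for read, copies in counts.most_common():
    print(f"{read}\t{copies}")  # one unique read per line with its count
```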
Lee, Hwan Young; Song, Injee; Ha, Eunho; Cho, Sung-Bae; Yang, Woo Ick; Shin, Kyoung-Jin
2008-01-01
Background For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results We developed a Web-based bioinformatics resource "mtDNAmanager" that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. The mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated haplogroups), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with two options of "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion The mtDNAmanager will provide systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at . PMID:19014619
Student participation in World Wide Web-based curriculum development of general chemistry
NASA Astrophysics Data System (ADS)
Hunter, William John Forbes
1998-12-01
This thesis describes an action research investigation of improvements to instruction in General Chemistry at Purdue University. Specifically, the study was conducted to guide continuous reform of curriculum materials delivered via the World Wide Web by involving students, instructors, and curriculum designers. The theoretical framework for this study was based upon constructivist learning theory, and knowledge claims were developed using an inductive analysis procedure. The results of this study are assertions made in three domains: learning chemistry content via the World Wide Web, learning about learning via the World Wide Web, and learning about participation in an action research project. In the chemistry content domain, students were able to learn chemical concepts that utilized 3-dimensional visualizations, but not textual and graphical information delivered via the Web. In the learning-via-the-Web domain, the use of feedback, the placement of supplementary aids, navigation, and the perception of conceptual novelty were all important to students' use of the Web. In the action research domain, students learned about the complexity of curriculum development and valued their empowerment as part of the process.
Using sentiment analysis to review patient satisfaction data located on the internet.
Hopper, Anthony M; Uriyo, Maria
2015-01-01
The purpose of this paper is to test the usefulness of sentiment analysis and time-to-next-complaint methods in quantifying text-based information located on the internet. As important, the authors demonstrate how managers can use time-to-next-complaint techniques to organize sentiment analysis derived data into useful information, which can be shared with doctors and other staff. The authors used sentiment analysis to review patient feedback for a select group of gynecologists in Virginia. The authors utilized time-to-next-complaint methods along with other techniques to organize this data into meaningful information. The authors demonstrated that sentiment analysis and time-to-next-complaint techniques might be useful tools for healthcare managers who are interested in transforming web-based text into meaningful, quantifiable information. This study has several limitations. For one thing, neither the data set nor the techniques the authors used to analyze it will account for biases that resulted from selection issues related to gender, income, and culture, as well as from other socio-demographic concerns. Additionally, the authors lacked key data concerning patient volumes for the targeted physicians. Finally, it may be difficult to convince doctors to consider web-based comments as truthful, thereby preventing healthcare managers from using data located on the internet. The report illustrates some of the ways in which healthcare administrators can utilize sentiment analysis, along with time-to-next-complaint techniques, to mine web-based, patient comments for meaningful information. The paper is one of the first to illustrate ways in which administrators at clinics and physicians' offices can utilize sentiment analysis and time-to-next-complaint methods to analyze web-based patient comments.
NASA Astrophysics Data System (ADS)
Dimopoulos, Kostas; Asimakopoulos, Apostolos
2010-06-01
This study aims to explore the navigation patterns and preferred-page characteristics of ten secondary school students searching the web for information about cloning. The students navigated the Web for as long as they wished, with minimal support from teaching staff. Their navigation patterns were analyzed using audit trail data software, and the characteristics of their preferred Web pages were analyzed using a scheme largely based on socio-linguistic and socio-semiotic approaches. Two distinct groups of students could be discerned. The first consisted of more competent students, who during their navigation visited fewer relevant pages, but ones of higher credibility and more specialized content. The second consisted of weaker students, who visited more pages, mainly of lower credibility and rather popularized content. Implications for designing educational web pages and for teaching are discussed.
MDWeb and MDMoby: an integrated web-based platform for molecular dynamics simulations.
Hospital, Adam; Andrio, Pau; Fenollosa, Carles; Cicin-Sain, Damjan; Orozco, Modesto; Gelpí, Josep Lluís
2012-05-01
MDWeb and MDMoby constitute a web-based platform that facilitates access to molecular dynamics (MD) in both the standard and the high-throughput regime. The platform provides tools to prepare systems from PDB structures, mimicking the procedures followed by human experts, and it generates inputs and can launch simulations for three of the most popular MD packages (Amber, NAMD and Gromacs). Tools for the analysis of trajectories, either provided by the user or retrieved from our MoDEL database (http://mmb.pcb.ub.es/MoDEL), are also incorporated. The platform offers two modes of access: a set of programmatically accessible web services based on the BioMoby framework (MDMoby), and a web portal (MDWeb) at http://mmb.irbbarcelona.org/MDWeb; additional information and methodology details can be found at http://mmb.irbbarcelona.org/MDWeb/help.php
3DNOW: Image-Based 3d Reconstruction and Modeling via Web
NASA Astrophysics Data System (ADS)
Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.
2018-05-01
This paper presents a web-based 3D imaging pipeline, 3Dnow, that can be used by anyone without installing any software other than a browser. By uploading a set of images through the web interface, users can generate sparse and dense point clouds as well as mesh models. The reconstructed 3D models can be downloaded in standard formats or previewed directly in the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density, can also be accessed. The paper describes all features available in the web service and provides an analysis of computational performance using servers with different GPU configurations.
Benard, Emmanuel; Michel, Christian J
2009-08-01
We present the SEGM web server (Stochastic Evolution of Genetic Motifs) for studying the evolution of genetic motifs both in the direct evolutionary sense (past to present) and in the inverse evolutionary sense (present to past). The genetic motifs studied can be nucleotides, dinucleotides and trinucleotides. As an example of an application of SEGM, and to illustrate its functionality, we give an analysis of inverse mutations of splice sites in human genome introns. SEGM is freely accessible at http://lsiit-bioinfo.u-strasbg.fr:8080/webMathematica/SEGM/SEGM.html directly, or via the web site http://dpt-info.u-strasbg.fr/~michel/. To our knowledge, SEGM is to date the only computational biology software implementing this evolutionary approach.
NASA Astrophysics Data System (ADS)
Suftin, I.; Read, J. S.; Walker, J.
2013-12-01
Scientists prefer not to be tied to a specific machine or operating system in order to analyze local and remote data sets or publish work. Analysis has increasingly been migrating to decentralized web services and data sets, with web clients providing the analysis interface. While simplifying workflow access, analysis, and data publishing, this move brings its own set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others, and workflow results often cannot easily be used as input for further analysis. Older browsers further complicate things by having no way to maintain larger chunks of information, often offloading storage to the back-end server or trying to squeeze information into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without a complex orchestration of the back end, depending on either a centralized file system or a database. With the advent of HTML5, browsers gained the ability to store more information through the Web Storage API (a browser cookie holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser, either with an expiration date or just for a session. This allows scientists to create, update, persist, and share their workflows without depending on the back end to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASweb portal (http://cida.usgs.gov/DSASweb/), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which we can generate as a text file and provide to the analyst as a download. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file was generated. A user may then view the results produced during that session, or go back and alter input parameters, creating new results and producing new, unique sessions which they can again share. This technique not only gives users independence to manage their sessions as they like, but also allows much greater freedom for the application provider to scale out without having to carry over user information or maintain it in a central location.
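Stripped of the browser specifics, the session mechanism amounts to serializing each workflow step to JSON and later restoring state from that text; a minimal sketch with illustrative field names (not DSASweb's actual schema):

```python
# Serialize a workflow as plain-text JSON, then restore it later. The
# "steps" structure and tool names below are illustrative assumptions.
import json

session = {
    "steps": [
        {"tool": "shoreline_extract", "params": {"threshold": 0.4}},
        {"tool": "rate_calc", "params": {"method": "linear_regression"}},
    ],
}
text = json.dumps(session, indent=2)     # the downloadable session file

restored = json.loads(text)              # uploaded later, state rebuilt
for step in restored["steps"]:
    print(step["tool"], step["params"])  # replay steps to recreate results
```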
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
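The "wrapping" approach described, exposing an unmodified legacy code through a web endpoint, can be sketched as a small HTTP handler that shells out to the executable; the binary name and arguments below are hypothetical.

```python
# Minimal sketch of wrapping an existing command-line science code as a
# web service: an HTTP endpoint runs the legacy executable and returns
# its output. The "geoconvert" binary and its arguments are hypothetical.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class WrapperHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        lat = query.get("lat", ["0"])[0]
        lon = query.get("lon", ["0"])[0]
        # Run the unmodified legacy code with user-supplied parameters.
        result = subprocess.run(["geoconvert", lat, lon],
                                capture_output=True, text=True)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    # e.g. GET http://localhost:8080/?lat=34.05&lon=-118.25
    HTTPServer(("localhost", 8080), WrapperHandler).serve_forever()
```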
Mining Longitudinal Web Queries: Trends and Patterns.
ERIC Educational Resources Information Center
Wang, Peiling; Berry, Michael W.; Yang, Yiheng
2003-01-01
Analyzed user queries submitted to an academic Web site during a four-year period, using a relational database, to examine users' query behavior, to identify problems they encounter, and to develop techniques for optimizing query analysis and mining. Linguistic analyses focus on query structures, lexicon, and word associations using statistical…
Demonstrating Success: Web Analytics and Continuous Improvement
ERIC Educational Resources Information Center
Loftus, Wayne
2012-01-01
As free and low-cost Web analytics tools become more sophisticated, libraries' approach to user analysis can become more nuanced and precise. Tracking appropriate metrics with a well-formulated analytics program can inform design decisions, demonstrate the degree to which those decisions have succeeded, and thereby inform the next iteration in the…
A Network of Automatic Control Web-Based Laboratories
ERIC Educational Resources Information Center
Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian
2011-01-01
This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…
Using Web-Based Foreign Advertisements in International Marketing Classes
ERIC Educational Resources Information Center
Ryan, Jason
2011-01-01
The author examines the use of the Web-based foreign advertisements for enhancing the international awareness of undergraduate marketing students. An analysis compares the adaptation of advertisements for identical products to the cultural perceptions and values of consumers in different countries. In a sample of 110 international marketing…
Architecture-Based Reliability Analysis of Web Services
ERIC Educational Resources Information Center
Rahmani, Cobra Mariam
2012-01-01
In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…