Thompson, Terrill; Primlani, Saroj; Fiedor, Lisa
The main goal of accessibility standards and guidelines is to design websites everyone can use. The "IT Accessibility Constituent Group" developed this set of draft guidelines to help EQ authors, reviewers, and staff, as well as the larger EDUCAUSE community, ensure that web content is accessible to all users, including those with disabilities. This…
Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin
We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.
Background The World Wide Web (WWW) has become an increasingly essential resource for health information consumers. The ability to obtain accurate medical information online quickly, conveniently and privately provides health consumers with the opportunity to make informed decisions and participate actively in their personal care. Little is known, however, about whether the content of this online health information is equally accessible to people with disabilities who must rely on special devices or technologies to process online information due to their visual, hearing, mobility, or cognitive limitations. Objective To construct a framework for an automated Web accessibility evaluation; to evaluate the state of accessibility of consumer health information Web sites; and to investigate the possible relationships between accessibility and other features of the Web sites, including function, popularity and importance. Methods We carried out a cross-sectional study of the state of accessibility of health information Web sites to people with disabilities. We selected 108 consumer health information Web sites from the directory service of a Web search engine. A measurement framework was constructed to automatically measure the level of Web Accessibility Barriers (WAB) of Web sites following Web accessibility specifications. We investigated whether there was a difference between WAB scores across various functional categories of the Web sites, and also evaluated the correlation between the WAB and Alexa traffic rank and Google Page Rank of the Web sites. Results We found that none of the Web sites we looked at are completely accessible to people with disabilities, i.e., there were no sites that had no violation of Web accessibility rules. However, governmental and educational health information Web sites do exhibit better Web accessibility than the other categories of Web sites (P < 0.001). We also found that the correlation between the WAB score and the popularity of a
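The barrier measurement described in the study can be illustrated with a minimal sketch. The weights, priority levels, and report format below are assumptions for illustration only, not the WAB formula used by the authors: the idea is that violations of higher-priority rules contribute more, and each page contributes its violation rate per checkpoint.

```python
# Simplified sketch of a WAB-style barrier score (assumed weights and
# report format; not the study's exact formula).  Each page contributes
# its rate of actual violations per potential violation, weighted by
# an assumed inverse-priority factor.

PRIORITY_WEIGHT = {1: 1.0, 2: 0.5, 3: 0.25}  # assumed weights

def wab_score(pages):
    """pages: list of dicts mapping priority -> (violations, potential violations)."""
    total = 0.0
    for page in pages:
        for priority, (violations, potential) in page.items():
            if potential:
                total += (violations / potential) * PRIORITY_WEIGHT[priority]
    return total / len(pages) if pages else 0.0

# Two pages: the first violates half of its priority-1 checkpoints.
pages = [
    {1: (5, 10), 2: (0, 4)},
    {1: (0, 10), 2: (2, 4)},
]
print(wab_score(pages))  # 0.375; higher scores mean more barriers
```

A score of zero would correspond to a site with no detected violations; the study found no such site among the 108 evaluated.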
Ribera, Mireia; Porras, Merce; Boldu, Marc; Termens, Miquel; Sule, Andreu; Paris, Pilar
Purpose: The purpose of this paper is to explain the changes in the Web Content Accessibility Guidelines (WCAG) 2.0 compared with WCAG 1.0 within the context of its historical development. Design/methodology/approach: In order to compare WCAG 2.0 with WCAG 1.0 a diachronic analysis of the evolution of these standards is done. Known authors and…
Roig-Vila, Rosabel; Ferrández, Sergio; Ferri-Miralles, Imma
Diversity-based designing, or the goal of ensuring that web-based information is accessible to as many diverse users as possible, has received growing international acceptance in recent years, with many countries introducing legislation to enforce it. This paper analyses web content accessibility levels in Spanish education portals according to…
Ross, Arun; Owen, Charles B.; Vailaya, Aditya
This paper focuses on clustering a World Wide Web site (i.e., the 1998 World Cup Soccer site) into groups of documents that are predictive of future user accesses. Two approaches were developed and tested. The first approach uses semantic information inherent in the documents to facilitate the clustering process. User access history is then used…
Iwata, Hajime; Kobayashi, Naofumi; Tachibana, Kenji; Shirogane, Junko; Fukazawa, Yoshiaki
Web pages are used for a variety of purposes. End users must understand dynamically changing content and sequentially follow page links to find desired material, requiring significant time and effort. For visually impaired users relying on screen readers, however, it can be difficult to find links to web pages when link text and alternative text descriptions are inappropriate. Our method supports the discovery of content by analyzing 8 categories of link types, and allows visually impaired users to be aware of the content represented by links in advance. This facilitates end users' access to necessary information on web pages. Our method of classifying web page links is therefore effective as a means of evaluating accessibility.
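A rule-based link classifier of this general kind can be sketched as follows. The category names and heuristics below are invented for illustration; the paper's 8 categories and its analysis method are not enumerated here.

```python
# Hypothetical sketch of heuristic link classification (categories and
# rules invented; not the paper's taxonomy).  A link with no text or
# alternative text is flagged, since screen readers cannot announce it
# meaningfully.
import re

def classify_link(href, text, alt=""):
    label = (text or alt).strip().lower()
    if not label:
        return "unlabeled"                  # problematic for screen readers
    if re.search(r"\.(pdf|docx?|xlsx?)$", href):
        return "document"
    if href.startswith("#"):
        return "in-page anchor"
    if re.match(r"https?://", href):
        return "external"
    if label in {"click here", "more", "read more"}:
        return "vague label"                # link text gives no context
    return "internal"

print(classify_link("report.pdf", "Annual report"))  # document
print(classify_link("/about", "More"))               # vague label
```

Counting "unlabeled" and "vague label" links per page would give a crude accessibility indicator in the spirit of the evaluation the abstract describes.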
Young, Bradley L.; Oladeji, Lasun O.; Cichos, Kyle
Abstract Background Increasing numbers of training physicians are using the Internet to gather information about graduate medical education programs. The content and accessibility of web sites that provide this information have been demonstrated to influence applicants’ decisions. Assessments of orthopedic fellowship web sites, including sports medicine, pediatrics, hand, and spine, have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of the American Shoulder and Elbow Surgeons (ASES) fellowship web sites (SEFWs). Methods A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFW was assessed by the existence of a functioning link found in the database and through Google®. Then, the following content areas of each SEFW were evaluated: fellow education, faculty/previous fellow information, and recruitment. Results At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, medical school of current fellows, residency of current fellows, employment of previous fellows, current research, and previous research. Conclusions A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database. Valuable information for potential applicants was largely inadequate across present SEFWs. PMID:27528833
Harper, Simon; Yesilada, Yeliz
Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.
Chalamandaris, Aimilios; Raptis, Spyros; Tsiakoulis, Pirros; Karabetsos, Sotiris
Blind people, and print-impaired people in general, are often restricted to using their own computers, typically enhanced with expensive screen-reading programs, in order to access the web, and only in the form each screen-reading program allows. In this paper we present SpellCast Navi, a tool intended for people with visual impairments that attempts to combine the advantages of both customized and generic web enhancement tools. It consists of a generically designed engine and a set of case-specific filters. It can run on a typical web browser and computer without installing any additional application locally. It acquires and parses the content of web pages, converts bilingual text into synthetic speech using a high-quality speech synthesizer, and supports a set of common functionalities such as navigation through hotkeys, audible navigation lists, and more. By using a post-hoc approach based on a priori information about the website's layout, audible presentation and navigation through the website are more intuitive and more efficient than with a typical screen-reading application. SpellCast Navi poses no requirements on web pages and introduces no overhead to the design and development of a website, as it functions as a hosted proxy service.
Green, Ravonne A.; Huprich, Julia
Section 508 of the Rehabilitation Act mandates that programs and services be accessible to people with disabilities. While schools of library and information science (SLIS) and university libraries should model accessible Web sites, this may not be the case. This article examines previous studies about the Web accessibility of…
Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.
Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
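The query-by-region retrieval step can be sketched as a nearest-neighbor search over feature vectors. The weighted Euclidean distance and the (color, texture, size) layout below are assumptions for illustration, not the system's actual descriptors or similarity measure.

```python
# Illustrative sketch of CBIR ranking: return the k database regions
# closest to a query region under a weighted Euclidean distance over
# pre-normalized (color, texture, size) features.  Feature layout and
# distance choice are assumptions, not the NLM system's implementation.
import math

def distance(query, region, weights=(1.0, 1.0, 1.0)):
    # All features assumed normalized to [0, 1] before comparison.
    return math.sqrt(sum(w * (q - r) ** 2
                         for w, q, r in zip(weights, query, region)))

def retrieve(query, db, k=2):
    """db: list of (region_id, feature_vector); returns the k most similar."""
    return sorted(db, key=lambda item: distance(query, item[1]))[:k]

db = [
    ("r1", (0.90, 0.10, 0.5)),
    ("r2", (0.20, 0.20, 0.5)),
    ("r3", (0.85, 0.15, 0.4)),
]
print([name for name, _ in retrieve((0.9, 0.1, 0.5), db)])  # ['r1', 'r3']
```

Adjusting the weights is one simple way to incorporate a user's knowledge of which attribute (color, texture, or size) matters most for a given query, in the spirit of the semantic gap the abstract mentions.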
Swallow, David; Petrie, Helen; Power, Christopher
This paper describes the design and evaluation of a Web Accessibility Information Resource (WebAIR) for supporting web developers to create and evaluate accessible websites. WebAIR was designed with web developers in mind, recognising their current working practices and acknowledging their existing understanding of web accessibility. We conducted an evaluation with 32 professional web developers in which they used either WebAIR or an existing accessibility information resource, the Web Content Accessibility Guidelines, to identify accessibility problems. The findings indicate that several design decisions made in relation to the language, organisation, and volume of WebAIR were effective in supporting web developers to undertake web accessibility evaluations.
Lopes, Rui; Carriço, Luis
The Web Science framework poses fundamental questions on the analysis of the Web, by focusing on how microscopic properties (e.g. at the level of a Web page or Web site) emerge into macroscopic properties and phenomena. One research topic on the analysis of the Web is Web accessibility evaluation, which centres on understanding how accessible a Web page is for people with disabilities. However, when framing Web accessibility evaluation on Web Science, we have found that existing research stays at the microscopic level. This article presents an experimental study on framing Web accessibility evaluation into Web Science's goals. This study resulted in novel accessibility properties of the Web not found at microscopic levels, as well as of Web accessibility evaluation processes themselves. We observed at large scale some of the empirical knowledge on how accessibility is perceived by designers and developers, such as the disparity of interpretations of accessibility evaluation tools warnings. We also found a direct relation between accessibility quality and Web page complexity. We provide a set of guidelines for designing Web pages, education on Web accessibility, as well as on the computational limits of large-scale Web accessibility evaluations.
... standards for HHS Web site content and communications materials. 311.7001 Section 311.7001 Federal... that is specifically intended for publication on, or delivery via, an HHS-owned or -funded Web site. (b... publication on, or delivery via, an HHS-owned or -funded Web site, the Project Officer shall consult with...
Uchida, Hitoshi; Ando, Masaya; Ohta, Kenji; Shimizu, Hirokazu; Hayashi, Yoshio; Ichihara, Yasuyo G.; Yamazaki, Ryoji
Internet use by people with disabilities and the elderly in Japan is still low, but growing. However, the majority of web content written in Japanese, even on government sites, has very low accessibility. This paper introduces the active measures being taken in Japan to improve these conditions: consideration of a web content accessibility guideline tailored to the unique characteristics of the Japanese language, development of a system to evaluate accessibility, and implementation of actual trials.
Washington Univ., Seattle.
This brief paper considers the application of "universal design" principles to Web page design in order to increase accessibility for people with disabilities. Suggestions are based on the World Wide Web Consortium's accessibility initiative, which has proposed guidelines for all Web authors and federal government standards. Seven guidelines for…
A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.
Fernandes, N.; Lopes, R.; Carriço, L.
Zaparyniuk, Nicholas; Code, Jillianne
With the Internet taking a dominant role in corporate training, education, retail, and customer based product exploration, authors of Web-based information need to ensure that the media they deliver is accessible to the widest possible audience. Whether users have a visual, auditory, physical, or developmental disability, accessible multimedia can…
Dragut, Eduard Constantin
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
Olive, Geoffrey C.
Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…
Hackett, Stephanie; Parmanto, Bambang
The standard display of web pages is inadequate for users who are visually impaired. Most visually impaired people obtain information from a web page in a linear fashion via a screen reader, whereas sighted users can immediately obtain a bird's-eye view of a web page's organization and content by quickly scanning the page. AcceSS (which stands for…
One EPA Web is a multi-year project to improve EPA’s website to better meet the needs of our Web visitors. Content is developed and managed in the WebCMS which supports One EPA Web goals by standardizing how we create and publish content.
Wheaton, Joseph; Bertini, Patrizia
Accessibility is hardly a new problem and certainly did not originate with the Web. Lack of access to buildings long preceded the call for accessible Web content. Although it is unlikely that rehabilitation educators look at Web page accessibility with indifference, many may also find it difficult to implement. The authors posit three reasons why…
Ahmi, Aidi; Mohamad, Rosli
Despite the fact that Malaysian public institutions have progressed considerably in website and portal usage, web accessibility has been reported as one of the issues deserving special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e., WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act of 1973 (Section 508). The findings reported somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements, such as associating labels and checkboxes with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify errors pertaining to web accessibility and to ensure equal access to web information and services for all citizens.
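Aggregating automated-checker output into per-site summaries, as such an evaluation requires, might look like the following sketch. The report format here is invented; WAVE and AChecker each define their own output schemas, so this only illustrates the aggregation step, not either tool's API.

```python
# Hedged sketch: summarize per-site error counts and the most common
# error types from automated-checker reports (report format invented;
# not WAVE's or AChecker's actual schema).
from collections import Counter

reports = [
    {"site": "ministry-a.example.gov.my",
     "errors": ["missing-alt", "missing-alt", "empty-label"]},
    {"site": "ministry-b.example.gov.my",
     "errors": ["empty-label"]},
]

def summarize(reports):
    by_site = {r["site"]: len(r["errors"]) for r in reports}
    by_type = Counter(e for r in reports for e in r["errors"])
    return by_site, by_type

by_site, by_type = summarize(reports)
print(by_site["ministry-a.example.gov.my"], by_type["missing-alt"])  # 3 2
```

Ranking sites by total errors, or error types by frequency, gives webmasters a concrete worklist of the label and image issues the study highlights.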
Powel, Wayne; Gill, Chris
Describes the Web site development practices of Gonzaga University, Spokane, Washington, and the content management systems that allow owners of information to control content while the university controls the look of the Web site. More than 150 content managers assume control of some portion of the Web site. (SLD)
Herring, Susan C.
Are established methods of content analysis (CA) adequate to analyze web content, or should new methods be devised to address new technological developments? This article addresses this question by contrasting narrow and broad interpretations of the concept of web content analysis. The utility of a broad interpretation that subsumes the narrow one is then illustrated with reference to research on weblogs (blogs), a popular web format in which features of HTML documents and interactive computer-mediated communication converge. The article concludes by proposing an expanded Web Content Analysis (WebCA) paradigm in which insights from paradigms such as discourse analysis and social network analysis are operationalized and implemented within a general content analytic framework.
Remote Sensing Information Gateway, a tool that allows scientists, researchers and decision makers to access a variety of multi-terabyte, environmental datasets and to subset the data and obtain only needed variables, greatly improving the download time.
Aboard the International Space Station, Expedition 22 Commander Jeff Williams and Flight Engineers Soichi Noguchi and T.J. Creamer share their thoughts about Internet access from space and post a r...
Zhang, X.; Forbes, J.; Miyahara, S.; Hagan, M.
As part of the interdisciplinary investigation "Tides, Planetary Waves, and Eddy Forcing of the Mean MLT Circulation", we provide web-based access to global monthly mean tidal fields from two models: the Kyushu University General Circulation Model, and the NCAR/HAO Global Scale Wave Model. Interactive solutions (Hough functions) to Laplace's Tidal Equation and various animations are also available. Herein, we briefly describe the models and illustrate the various tabular and plot options available. This web site also illustrates web data sharing protocols relevant to wider applications: (1) Balance of public access vs. rights of the investigators - Data sharing agreements, appropriate uses and attribution of the data; (2) Levels of accessibility - Agreement, simple form, application and request for password; (3) Methods of data distribution - Data tables, data files, archived data files, plots; (4) Database management - data dictionary, data recovery, resource lock, security.
Harrison, Sean W
The University of Mississippi Medical Center in Jackson, Miss., is the only medical school in the state. We performed 235,000 procedures in the 2001-02 fiscal year. All imaging services within the radiology department are networked to a PACS and are filmless. The elimination of film required that we decentralize our traditional file room to allow easy access to our radiology network across the campus. In our facility, there are three levels of image access: Diagnostic Quality, Review Quality and Web Access. Diagnostic Quality requires top-of-the-line workstations and monitors and is the most expensive. Review Quality workstations represent some savings over Diagnostic and are used in the ICU, orthopedics and surgery. Web Access appears to satisfy most areas outside the main diagnostic department. The account set-up procedure is simple because it uses our intranet email system. Images are easily pasted into presentation applications for articles and conferences. However, the main advantage of Web Access is the low cost. The downside of Web Access is that the images are for review only and are limited by the quality of the monitor in use. It is also somewhat cumbersome to retrieve old or comparison images via this method. The Web only holds approximately 45 days of the most recent images; therefore, older studies may not be available. The deployment of this Web-based service has aided in our efforts to reduce the amount of film we print and has also been beneficial in improving patient care through faster service.
The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...
A Content Standard for Computational Models; Digital Rights Management (DRM) Architectures; A Digital Object Approach to Interoperable Rights Management: Finely-Grained Policy Enforcement Enabled by a Digital Object Infrastructure; LOCKSS: A Permanent Web Publishing and Access System; Tapestry of Time and Terrain.
Hill, Linda L.; Crosier, Scott J.; Smith, Terrence R.; Goodchild, Michael; Iannella, Renato; Erickson, John S.; Reich, Vicky; Rosenthal, David S. H.
Includes five articles. Topics include requirements for a content standard to describe computational models; architectures for digital rights management systems; access control for digital information objects; LOCKSS (Lots of Copies Keep Stuff Safe) that allows libraries to run Web caches for specific journals; and a Web site from the U.S.…
This article gives tips on how to avoid having content stolen by plagiarists. Suggestions include: using a Web search service such as Google to search for unique strings of text at the individual's site to uncover other sites with the same content; buying an infringement-detection program; or hiring a public relations firm to do the work. There are…
Bradbard, David A.; Peters, Cara
Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…
Krach, S. Kathleen; Jelenic, Milan
This study examined individual school home pages and their corresponding district-wide home pages for Web accessibility. As of 1999, the U.S. government established that all public and private school Web sites were to be made "Web accessible," meaning accessible to students with disabilities. Although higher education sites have been subjected to…
This article describes how students with disabilities in regular classes are using the WebQuest lesson format to access the Internet. It explains essential WebQuest principles, creating a draft Web page, and WebQuest components. It offers an example of a WebQuest about salvaging the sunken ships, Titanic and Lusitania. A WebQuest planning form is…
Leon, John; Cutlip, William; Hametz, Mark
The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.
Bradbard, David A.; Peters, Cara; Caneva, Yoana
The Web has become an integral part of postsecondary education within the United States. There are specific laws that legally mandate postsecondary institutions to have Web sites that are accessible for students with disabilities (e.g., the Americans with Disabilities Act (ADA)). Web accessibility policies are a way for universities to provide a…
Wells, Julie A.; Barron, Ann E.
In 2002, the National Center for Educational Statistics reported that 99% of public schools had Internet access and 86% of those schools had a web site or web page (Kleiner & Lewis, 2003). This study examined accessibility issues on elementary school homepages. Using a random sample of elementary school web sites, the researchers documented…
Mirri, Silvia; Salomoni, Paola; Prandi, Catia; Muratori, Ludovico Antonio
The Web 2.0 evolution has spread more interactive technologies, which affect accessibility for users who navigate the Web with assistive technologies. In particular, the partial download of new data, continuous refreshing, and the massive use of scripting can represent significant barriers, especially for people with visual impairments, who enjoy the Web by means of screen readers. On the other hand, such technologies can be an opportunity, because they provide a new means of transcoding Web content, making the Web more accessible. In this article we present GAPforAPE, an augmented browsing system (based on Web browser extensions) which offers a user-profiling system and transcodes Web content according to constraints declared by users: the same Web page is provided to every user, but GAPforAPE computes adequate customizations by exploiting the scripting technologies that usually affect Web page accessibility. GAPforAPE imitates screen readers' behavior: it applies a specific set of transcoding scripts devoted to a given Web site, when available, and a default set of transcoding operations otherwise. The continuous and quick evolution of the Web has shown that a crowdsourcing system is a desirable solution, letting the transcoding scripts evolve in the same way.
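The fallback behavior the abstract describes, site-specific transcoding scripts when available and a default set otherwise, can be sketched in a language-neutral way. The real system runs as browser-extension scripts, and the rule format below is invented for illustration.

```python
# Sketch of a fallback transcoding pass (rule format invented): apply a
# site-specific rule set when one is registered for the host, otherwise
# a default set of transcoding operations, mirroring how GAPforAPE is
# described as falling back to generic handling.

DEFAULT_RULES = [("<marquee>", "<p>"), ("</marquee>", "</p>")]

SITE_RULES = {
    "news.example.org": [
        ('<div class="ticker">', '<ul aria-live="polite">'),
        ("</div><!--ticker-->", "</ul>"),
    ],
}

def transcode(host, html):
    for old, new in SITE_RULES.get(host, DEFAULT_RULES):
        html = html.replace(old, new)
    return html

print(transcode("other.example.com", "<marquee>breaking</marquee>"))
# <p>breaking</p>  (no site-specific script, so default rules applied)
```

A crowdsourced registry of per-site rule sets, as the abstract suggests, would simply populate the site-specific table over time while the defaults keep every page minimally usable.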
Freeman, Richard T; Yin, Hujun
We present a new method for content management and knowledge discovery using a topology-preserving neural network. The method, termed topological organization of content (TOC), can generate a taxonomy of topics from a set of unannotated, unstructured documents. The TOC consists of a hierarchy of self-organizing growing chains (GCs), each of which can develop independently in terms of size and topics. The dynamic development process is validated continuously using a proposed entropy-based Bayesian information criterion (BIC). Each chain meeting the criterion spans child chains, with reduced vocabularies and increased specializations. This results in a topological tree hierarchy, which can be browsed like a table of contents directory or web portal. A brief review is given on existing methods for document clustering and organization, and clustering validation measures. The proposed approach has been tested and compared with several existing methods on real world web page datasets. The results have clearly demonstrated the advantages and efficiency in content organization of the proposed method in terms of computational cost and representation. The TOC can be easily adapted for large-scale applications. The topology provides a unique, additional feature for retrieving related topics and confining the search space.
Examines how ongoing content management needs and tasks affect organizations that maintain Web-based information systems (Web IS). Investigates how organizational context shapes content management practice and configuration of the underlying Web IS technology. Presents a model of the socio-technical context of Web IS content management. (AEF)
Thomas, Lisa Carlucci
In a time of increasingly digital distribution, challenging questions arise regarding what people own, what they want access to, and how they develop and maintain collections. What considerations influence their decision making, as individuals and libraries shift toward more subscription-oriented content? Digital access to e-books and…
Bray, Marty; Flowers, Claudia; Gibson, Patricia
Many school districts (SDs) use the World Wide Web (WWW or Web) to disseminate a wide variety of information, such as district events, policies, and student information. Online barriers limit the accessibility of the WWW for persons and students with disabilities and thus can limit their access to vital information.…
While accessibility of information technologies is often acknowledged as important, it is frequently not well addressed in practice. The purpose of this study was to examine the work of web developers and content managers to explore why and how accessibility is or is not addressed as an objective as websites are planned, built and maintained.…
Lee, P. Y.; Hui, S. C.; Fong, A. C. M.
Presents an analysis of the distinguishing features of pornographic Web pages so that effective filtering techniques can be developed. Surveys the existing techniques for Web content filtering and describes the implementation of a Web content filtering system that uses an artificial neural network. (Author/LRW)
Bialkova, Svetlana; Oberauer, Klaus
In two experiments participants held in working memory (WM) three digits in three different colors, and updated individual digits with the results of arithmetic equations presented in one of the colors. In the memory-access condition, a digit from WM had to be used as the first number in the equation; in the no-access condition, complete equations were presented so that no information from WM had to be accessed for the computation. Updating a digit not updated in the preceding step took longer than updating the same digit as in the preceding step, a time difference referred to as object-switch costs. Object-switch costs were equal in access and no-access equations, implying that they did not reflect the time to retrieve a new digit from WM. Access equations were completed as fast as no-access equations, implying that access to information in WM is as fast as reading the same information. No-access equations were slowed by a mismatch between the first digit of the presented equation and the to-be-updated digit in WM, showing that this digit is automatically accessed even when not needed. It is concluded that contents and their contexts form composites in WM that are necessarily accessed together.
Vigo, Markel; Brajnik, Giorgio; Arrue, Myriam; Abascal, Julio
The Web Accessibility Quantitative Metric (WAQM) aims at accurately measuring the accessibility of web pages. One of the main features of WAQM is that it is evaluation-tool independent for ranking and accessibility monitoring scenarios. This article proposes a method to attain evaluation tool independence for all foreseeable scenarios. After demonstrating that homepages have a more similar error profile than any other web page in a given web site, 15 homepages were measured with 10,000 different values of WAQM parameters using EvalAccess and LIFT, two automatic evaluation tools for accessibility. A similar procedure was followed with random pages and with several test files, obtaining several tuples that minimise the difference between both tools. One thousand four hundred forty-nine web pages from 15 web sites were measured with these tuples, and those values that minimised the difference between the tools were selected. Once the WAQM was tuned, the accessibility of 15 web sites was measured with two metrics for web sites, concluding that even if similar values can be produced, obtaining the same scores is undesirable since evaluation tools behave in different ways.
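The tuning step described above can be illustrated with a toy grid search: find the parameter value that minimises the disagreement between two tools' scores over a set of pages. The score function, its exponent parameter, and the per-page error counts below are all invented for illustration; WAQM's actual formula differs.

```python
def score(errors: int, potential: int, a: float) -> float:
    # Hypothetical weighted failure ratio with a tunable exponent `a`
    # (a stand-in for WAQM's real parameters).
    return 100 * (1 - (errors / potential) ** a)

# (violations, potential violation points) per page, as reported
# by two hypothetical evaluation tools on the same three pages.
tool_a = [(4, 40), (10, 80), (2, 25)]
tool_b = [(6, 40), (12, 80), (3, 25)]

def disagreement(a: float) -> float:
    # Total absolute score difference between the tools at parameter a.
    return sum(abs(score(e1, p1, a) - score(e2, p2, a))
               for (e1, p1), (e2, p2) in zip(tool_a, tool_b))

# Grid search over candidate parameter values.
candidates = [x / 100 for x in range(5, 300, 5)]
best = min(candidates, key=disagreement)
print(best, disagreement(best))  # parameter minimising inter-tool disagreement
```

The real study explored 10,000 parameter tuples against 1,449 pages; the shape of the procedure is the same.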
The principles of One EPA Web can be applied to better meet the needs and expectations of our audiences, fit their information-seeking behavior, and help them accomplish tasks. Learn about the five paths forward for transforming web content.
Why do librarians and library staff other than Web librarians and developers need to know about accessibility? Web services staff do not--or should not--operate in isolation from the rest of the library staff. It is important to consider what areas of online accessibility are applicable to other areas of library work and to colleagues' regular job…
Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T
The Internet is a common resource for applicants to hand surgery fellowships; however, the quality and accessibility of fellowship online information are unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) directory on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and by performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of assessable programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.
Wright, Adam; Bates, David W; Middleton, Blackford; Hongsermeier, Tonya; Kashyap, Vipul; Thomas, Sean M; Sittig, Dean F
Clinical decision support is a powerful tool for improving healthcare quality and patient safety. However, developing a comprehensive package of decision support interventions is costly and difficult. If used well, Web 2.0 methods may make it easier and less costly to develop decision support. Web 2.0 is characterized by online communities, open sharing, interactivity and collaboration. Although most previous attempts at sharing clinical decision support content have worked outside of the Web 2.0 framework, several initiatives are beginning to use Web 2.0 to share and collaborate on decision support content. We present case studies of three efforts: the Clinfowiki, a world-accessible wiki for developing decision support content; Partners Healthcare eRooms, web-based tools for developing decision support within a single organization; and Epic Systems Corporation's Community Library, a repository for sharing decision support content for customers of a single clinical system vendor. We evaluate the potential of Web 2.0 technologies to enable collaborative development and sharing of clinical decision support systems through the lens of three case studies; analyzing technical, legal and organizational issues for developers, consumers and organizers of clinical decision support content in Web 2.0. We believe the case for Web 2.0 as a tool for collaborating on clinical decision support content appears strong, particularly for collaborative content development within an organization.
Qutab, Saima; Mahmood, Khalid
Purpose: The purpose of this paper is to investigate library web sites in Pakistan, to analyse their content and navigational strengths and weaknesses and to give recommendations for developing better web sites and quality assessment studies. Design/methodology/approach: Survey of web sites of 52 academic, special, public and national libraries in…
The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with designers and specifiers of hobby and community web sites to maximise the accessibility of their sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users through a Delphi study, the research has ensured that the views of the visually impaired end users are successfully transmitted.
Eckert, Dominique; Jauzac, Mathilde; Shan, HuanYuan; Kneib, Jean-Paul; Erben, Thomas; Israel, Holger; Jullo, Eric; Klein, Matthias; Massey, Richard; Richard, Johan; Tchernin, Céline
Big-Bang nucleosynthesis indicates that baryons account for 5% of the Universe's total energy content. In the local Universe, the census of all observed baryons falls short of this estimate by a factor of two [2,3]. Cosmological simulations indicate that the missing baryons have not yet condensed into virialised halos, but reside throughout the filaments of the cosmic web: a low-density plasma at temperatures of 10^5–10^7 K known as the warm-hot intergalactic medium (WHIM) [3,4,5,6]. There have been previous claims of the detection of warm baryons along the line of sight to distant blazars [7,8,9,10] and hot gas between interacting clusters [11,12,13,14]. These observations were however unable to trace the large-scale filamentary structure, or to estimate the total amount of warm baryons in a representative volume of the Universe. Here we report X-ray observations of filamentary structures of ten-million-degree gas associated with the galaxy cluster Abell 2744. Previous observations of this cluster were unable to resolve and remove coincidental X-ray point sources. After subtracting these, we reveal hot gas structures that are coherent over 8 Mpc scales. The filaments coincide with over-densities of galaxies and dark matter, with 5-10% of their mass in baryonic gas. This gas has been heated up by the cluster's gravitational pull and is now feeding its core. PMID:26632589
Gomathi, C.; Moorthi, M.; Duraiswamy, K.
Web Access Pattern (WAP), the sequence of accesses frequently pursued by users, is a kind of interesting and useful knowledge in practice. Sequential pattern mining is the process of applying data mining techniques to a sequential database for the purpose of discovering the correlation relationships that exist among an ordered list of…
Aydogmus, Z.; Aydogmus, O.
The Internet provides an opportunity for students to access laboratories from outside the campus. This paper presents a Web-based remote access real-time laboratory using SCADA (supervisory control and data acquisition) control. The control of an induction motor is used as an example to demonstrate the effectiveness of this remote laboratory,…
Guthrie, Sally A.
Examines the accessibility of Web sites belonging to 80 colleges of communications and schools of journalism by examining the hypertext markup language (HTML) used to format the pages. Suggests ways to revise the markup of pages to make them more accessible to students with vision, hearing, and mobility problems. Lists resources of the latest…
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
Yamada, Akira; Kubota, Ayumu; Miyake, Yutaka; Hashimoto, Kazuo
Using Web-based content management systems such as blogs, an end user can easily publish User Generated Content (UGC). Although publishing UGC is easy, controlling access to it is a difficult problem for end users. Currently, most blog sites offer no access control mechanism, and even when one is available to users, it is not sufficient to control users who do not have an account at the site, not to mention that it cannot control access to content hosted by other UGC sites. In this paper, we propose a new access control architecture for UGC, in which third-party entities can offer an access control mechanism to users independently of UGC hosting sites. With this architecture, a user can control access to his content that might be spread over many different UGC sites, regardless of whether those sites have an access control mechanism or not. The key idea for separating the access control mechanism from UGC sites is to apply cryptographic access control, and we implemented the idea in such a way that it requires no modification to UGC sites or Web browsers. Our prototype implementation shows that the proposed access control architecture can be easily deployed in the current Web-based communication environment and that it works quite well with popular blog sites.
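The core of cryptographic access control is that the published content is ciphertext, so the hosting site never needs to enforce anything: only readers holding the key (distributed by the third-party access controller) can recover the post. The toy sketch below uses a hash-based XOR keystream purely for illustration; it is not the paper's scheme and not authenticated encryption, and a real deployment would use a vetted AEAD cipher such as AES-GCM.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n keystream bytes by hashing key || counter (CTR-style).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, content: bytes) -> bytes:
    # XOR the post body with the keystream before publishing it to
    # the UGC site; the site only ever stores ciphertext.
    return bytes(a ^ b for a, b in zip(content, _keystream(key, len(content))))

unseal = seal  # XOR is its own inverse: sealing twice decrypts

key = b"shared-with-authorized-readers"   # handed out by the access controller
post = b"My private blog entry"
cipher = seal(key, post)
assert unseal(key, cipher) == post and cipher != post
```

Because decryption happens client-side, the scheme works identically across any number of hosting sites, which is the property the architecture relies on.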
The fact that people now live in a world of abundant portable electronic devices is important to any organization that maintains a web presence, including libraries. No longer tied to a desktop, the patrons' netbooks, tablets, ebook readers, and, of course, cellphones all become potential tools for remote access to library content. About a year…
Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…
Pomeroy, Brian; Crawford, Evan; Sinisi, Albert
The Children's Hospital of Philadelphia transformed its web site to enhance patient satisfaction and attract new patients, as well as meet the needs of clinicians and the hospital's business plans. They accomplished this by implementing a content management system that would allow content creation and updating to be delegated to appropriate department staffers, thereby eliminating bottlenecks and unnecessary steps and ensuring that the web site receives fresh content much more quickly.
Legaz-García, María del Carmen; Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
Linking Electronic Healthcare Record (EHR) content to educational materials has been considered a key international recommendation to enable clinical engagement and to promote patient safety. This would help citizens access reliable information available on the web and guide them properly. In this paper, we describe an approach in that direction, based on the use of dual-model EHR standards and standardized educational contents. The recommendation method will be based on the semantic coverage of the learning content repository for a particular archetype, which will be calculated by applying semantic web technologies such as ontologies and semantic annotations.
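The "semantic coverage" idea can be sketched as simple set overlap: score each learning resource by the fraction of an archetype's ontology annotations it covers, then recommend the best-covered resources. The concept identifiers and resource names below are invented placeholders, not real archetype or terminology bindings.

```python
# Hypothetical ontology annotations attached to an EHR archetype.
archetype_terms = {"blood_pressure", "systolic", "diastolic", "cuff_size"}

# Hypothetical learning resources and their semantic annotations.
resources = {
    "leaflet_bp_basics": {"blood_pressure", "systolic", "diastolic"},
    "video_nutrition":   {"diet", "sodium"},
}

def coverage(resource_terms: set, archetype_terms: set) -> float:
    # Fraction of the archetype's annotations covered by the resource.
    return len(resource_terms & archetype_terms) / len(archetype_terms)

ranked = sorted(resources,
                key=lambda r: coverage(resources[r], archetype_terms),
                reverse=True)
print(ranked[0])  # -> leaflet_bp_basics
```

A real implementation would draw both annotation sets from ontologies via reasoning rather than literal string match, but the ranking step has this shape.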
Colleges increasingly rely on the Web to attract, inform, and interact with students. This makes Web site accessibility and usability critical concerns, particularly for public community colleges, which educate sizable numbers of students with disabilities. As committed providers of postsecondary education to students with disabilities and thus a…
Oakland University's Kresge Library first launched its Web site in 1996. The initial design and subsequent contributions were originally managed by a single Webmaster. In 2002, the library restructured its Web content creation and management to a distributed, collaborative method with the goal of increasing the amount, accuracy, and timeliness of…
Tso, Kam S.; Pajevski, Michael J.
Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application layer access control of applications is a critical component in the overall security solution that also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
Gerber, Nathan; Merker, Lance
Like many institutions across the country, Utah Valley State College (UVSC) found itself struggling to keep its website current. In the spring of 2003, the UVSC's IT department began looking at alternatives that would simplify and streamline the process of updating the university's web pages. Specifically, they wanted a straightforward way to…
SanthanaVannan, Suresh K; Cook, Robert B; Pan, Jerry Yun; Wilson, Bruce E
Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on board NASA's Terra and Aqua satellites have been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, the specialized software needed to process the HDF-EOS files, the data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a fairly established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary comma-delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real-time data integration into models and decision support tools, or connect to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
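Programmatic querying of a SOAP service of this kind amounts to POSTing an XML envelope to the endpoint. The sketch below only builds such an envelope; the operation and parameter names are illustrative placeholders, and the real contract (operations, argument names, endpoint) must be taken from the service's WSDL.

```python
from xml.sax.saxutils import escape

def soap_envelope(operation: str, params: dict) -> str:
    # Serialize params as child elements of the operation element.
    body = "".join(f"<{k}>{escape(str(v))}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><{operation}>{body}</{operation}></soap:Body>"
        "</soap:Envelope>"
    )

# Hypothetical subset request: names below are assumptions, not the
# ORNL DAAC service's actual operation/parameter names.
req = soap_envelope("getSubset", {
    "latitude": 35.93, "longitude": -84.31,   # point of interest
    "product": "MOD13Q1", "band": "NDVI",
    "kmAboveBelow": 10, "kmLeftRight": 10,    # subset extent
})
# `req` would then be POSTed to the service endpoint with an HTTP
# client such as urllib.request, and the comma-delimited subset
# parsed from the response.
```

This kind of thin client is what lets models and workflow tools pull MODIS parameters on demand instead of downloading whole HDF-EOS granules.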
Mischo, William H.; Schlembach, Mary C.
Describes the Web-based technologies employed by the Grainger Engineering Library Information Center at the University of Illinois, Urbana-Champaign in implementing access to local information resources. Discusses Microsoft Active Server Pages (ASP) technologies and the associated local database structure and format, as well as the general…
Franklin, Rosemary Aud
Examines the developing state of subject access on the Web. Topics include new scholarly research methods; authority control; cataloging and metadata; interoperability and thesauri development; extensibility, including XML and RDF (resource description framework); indexing and information retrieval; flexibility in subject classification; and…
Ingle, Emma; Green, Ravonne A.; Huprich, Julia
One issue that public librarians must consider when planning Web site design is accessibility for patrons with disabilities. This article reports a study of Web site accessibility of public libraries in Georgia. The focus of the report is whether public libraries use accessible guidelines and standards in making their Web sites accessible. An…
Cagirgan Gulten, Dilek
This research aims to investigate primary preservice mathematics teachers' views on distance education and web pedagogical content knowledge in terms of the subscales of general web, communicative web, pedagogical web, web pedagogical content and attitude towards web based instruction. The research was conducted with 46 senior students in the…
Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.
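The growth rule in this family of models can be sketched deterministically: a new page links to an existing page with probability proportional to a power-law decay of their content distance. The exponent and the distances below are invented for illustration; the paper fits the actual decay from Web data.

```python
ALPHA = 2.0  # hypothetical decay exponent of the power-law region

def link_probs(distances: list) -> list:
    # Probability of linking to each existing page, proportional
    # to distance^(-ALPHA), normalized to sum to 1.
    weights = [d ** -ALPHA for d in distances]
    total = sum(weights)
    return [w / total for w in weights]

# Lexical distances from a new page to three existing pages:
# similar content (0.5), moderately related (1.0), unrelated (2.0).
probs = link_probs([0.5, 1.0, 2.0])
print(probs)  # nearer content -> higher link probability
```

Iterating this rule as pages are added, with linkage driven by content rather than by global degree knowledge, is what reproduces the observed scale-free degree distribution.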
Reynolds, Betty; And Others
This paper shows how the content of a World Wide Web page is selected and how an examination of the intended audience influences content. Examples from the New Mexico Tech (NMT) Library homepage show what sources are selected and what level of detail is appropriate for the intended audience. Six fundamental functions of libraries and information…
McEneaney, John E.
Supporting students' learning from subject area text involves focusing on both the text's content and on the processes students apply as they work to acquire, organize, and integrate that content. Clearly, more complex texts require more sophisticated learning processes on the part of students. Resources on the World Wide Web pose special…
The purpose of this research is to "design a web-based media contents editor for establishing UCC (User Created Content)-based websites." The web-based editor features user-oriented interfaces and increased convenience, significantly different from previous offline editors. It allows users to edit media contents online and can be used effectively for the online promotion activities of enterprises and organizations. In addition to development of the editor, the research aims to support the entry of enterprises and public agencies into the online market by combining the technology with various UCC items.
Hammann, H P; Suedmeyer, W K; Hahn, A W
We have developed a system for remotely accessible secure electronic storage of electrocardiographic (ECG) and other associated data. It allows entry of data from any authorized remote user and is specifically built to accommodate the ECGs of multiple species. The present system is implemented on a Sun Sparc Solaris 2.5 platform using Oracle 7.3.2, and the Oracle 7.3.2 Web server. It may be easily ported to any other UNIX or Windows NT platform. No client is needed other than an Internet Protocol connected computer using a web browser such as Netscape Navigator or Microsoft Internet Explorer.
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the
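The "wrapping" approach described above, honoring the original code with as little modification as possible, reduces at its core to invoking the existing executable and returning its output to the service layer. The sketch below is an assumed minimal version of such a wrapper; `echo` stands in for a real C/FORTRAN scientific code, and a servlet or SOAP framework would sit on top of this function.

```python
import subprocess

def run_wrapped(executable: str, args: list) -> str:
    # Honor the original code: pass arguments through unchanged and
    # return its stdout for post-processing or delivery to the user.
    result = subprocess.run(
        [executable, *args],
        capture_output=True, text=True, check=True,  # raise on failure
    )
    return result.stdout

# Stand-in for invoking a wrapped scientific code on the server.
print(run_wrapped("echo", ["hello", "SCEC"]))
```

The three access methods listed (web GUI, command-line initiator, direct Java call) would all ultimately funnel into an invocation like this on the remote machine.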
Mills, Steven C.
As the use of the Web is perceived to be an effective tool for dissemination of research findings for the provision of asynchronous instruction, the issue of accessibility of Web page information will become more and more relevant. The World Wide Web consortium (W3C) has recognized a disparity in accessibility to the Web between persons with and…
Kurniawan, Sri H.
This study investigated whether World Wide Web information resources for students with disabilities are accessible and whether there is an accessibility difference between Web sites from the United Kingdom, United States, Australia, and Canada as rated by the Bobby automatic accessibility tool. Thirty academic Web sites from each country were…
Anderson, Lynn; Rauscher, Richard; Lee, H.
Authentication, authorization, accounting, and encryption are the goals of security strategies for private web information. These terms are defined as follows: • Authentication - validation that the individual (or system) is who they say they are. • Authorization - validation that the individual (or system) accessing information is authorized to do so. • Accounting - records are kept of what is accessed. • Encryption - use of a 'scrambling' algorithm such that the information can pass securely across the public Internet without being intelligible; the information is specifically 'unscrambled', or decrypted, at the receiving end. Many tools can be used to meet these goals. The degree to which the goals are met is determined by how we use these tools. Methodologies similar to TCSEC and ITSEC can be used to determine the appropriate level of protection for a particular web application. This poster describes a set of effective strategies for web application security and the level of protection each strategy provides.
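Two of these goals, authentication and (as a by-product) accounting, can be illustrated with a standard-library sketch: a server issues an HMAC-signed token tying a request to a validated identity, so every logged access is attributable. The token format and names are illustrative, not any specific product's scheme.

```python
import hmac, hashlib

SECRET = b"server-side-secret"  # hypothetical key, never sent to clients

def issue_token(user: str) -> str:
    # Sign the identity so it cannot be forged without SECRET.
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token: str) -> bool:
    user, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during validation.
    return hmac.compare_digest(sig, expected)

t = issue_token("alice")
assert verify_token(t)                               # authentic
assert not verify_token(t.replace("alice", "mallory", 1))  # forged
```

Authorization and encryption would layer on top: checking the validated identity against an access policy, and carrying the token only over TLS.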
Find resources and guidance on writing for the web, keeping your content relevant, using social media, meeting accessibility standards, and how to transform your content into the WebCMS to meet One EPA Web standards.
Burguiere, Thomas; Causse, Florian; Ung, Visotheary; Vignes-Lebbe, Régine
Single-access keys are a major tool for biologists who need to identify specimens. The construction process of these keys is particularly complex (especially if the input data set is large), so having an automatic single-access key generation tool is essential. As part of the European project ViBRANT, our aim was to develop such a tool as a web service, thus allowing end-users to integrate it directly into their workflow. IKey+ generates single-access keys on demand, for single users or research institutions. It receives user input data (using the standard SDD format), accepts several key-generation parameters (affecting the key topology and representation), and supports several output formats. IKey+ is freely available (sources and binary packages) at www.identificationkey.fr. Furthermore, it is deployed on our server and can be queried (for testing purposes) through a simple web client also available at www.identificationkey.fr (last accessed 13 August 2012). Finally, a client plugin will be integrated into the Scratchpads biodiversity networking tool (scratchpads.eu).
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
Cyber security has gained national and international attention as a result of near continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and the spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
Whang, Yonghyun; Jung, Changwoo; Kim, Jihong; Chung, Sungkwon
In this paper, we describe the design and implementation of WebAlchemist, a prototype web transcoding system, which automatically converts a given HTML page into a sequence of equivalent HTML pages that can be properly displayed on a hand-held device. The WebAlchemist system is based on a set of HTML transcoding heuristics managed by the Transcoding Manager (TM) module. In order to tackle difficult-to-transcode pages, such as those with large or complex table structures, we have developed several new transcoding heuristics that extract partial semantics from syntactic information such as table width, font size and cascading style sheets. Subjective evaluation results using popular HTML pages (such as the CNN home page) show that WebAlchemist generates readable, structure-preserving transcoded pages, which can be properly displayed on hand-held devices.
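One such heuristic, splitting a large table into a sequence of smaller pages that fit a hand-held screen, can be sketched as below. This is a toy: a regex over row tags stands in for WebAlchemist's real syntactic analysis of width, font size and style sheets, and the page size is an invented parameter.

```python
import re

def split_table(html: str, rows_per_page: int) -> list:
    # Collect the table's rows (toy parsing; assumes plain <tr> tags).
    rows = re.findall(r"<tr>.*?</tr>", html, flags=re.S)
    # Re-emit them as a sequence of small, equivalent tables.
    pages = []
    for i in range(0, len(rows), rows_per_page):
        chunk = "".join(rows[i:i + rows_per_page])
        pages.append(f"<table>{chunk}</table>")
    return pages

big = "<table>" + "".join(f"<tr><td>{i}</td></tr>" for i in range(7)) + "</table>"
pages = split_table(big, 3)
print(len(pages))  # 7 rows at 3 per page -> 3 pages
```

Chaining such transformations per page element is what lets the system emit "a sequence of equivalent HTML pages" rather than one oversized one.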
Surfing for Data: A Gathering Trend in Data Storage Is the Use of Web-Based Applications that Make It Easy for Authorized Users to Access Hosted Server Content with Just a Computing Device and Browser
Technology & Learning, 2005
In recent years, the widespread availability of networks and the flexibility of Web browsers have shifted the industry from a client-server model to a Web-based one. In the client-server model of computing, clients run applications locally, with the servers managing storage, printing functions, and network traffic. Because every client is…
Fitzgerald, Sharon A; Macan Yadrich, Donna; Werkowitch, Marilyn; Piamjariyakul, Ubolrat; Smith, Carol E
When managing chronic illnesses, caregivers repeatedly seek online information about providing complex, long-term care but often neglect to find information about how to care for themselves. Poor health among caregivers is not only detrimental to their own well-being but may also result in harm to those for whom they care. For this reason, caregivers need access to information and activities about caring for themselves in addition to the information about managing home care they are already likely to seek. The HPN Family Caregivers Web site was developed to guide caregivers through the process of caring for themselves by establishing a caregiving routine, self-monitoring their mental and physical health, and practicing good sleep hygiene, while also managing the complexities of home care. While Web site information, activities, and algorithms for managing chronic illnesses need to be specific to each population, the content guiding caregivers to care for their own health is universal.
Librarians and libraries have long been committed to providing equitable access to information. In the past decade and a half, the growth of the Internet and the rapid increase in the number of online library resources and tools have added a new dimension to this core duty of the profession: ensuring accessibility of online resources to users with…
Roelof Versteeg; Roelof Versteeg; Trevor Rowe
Nguyen, Ngan; Hickey, Glenn; Raney, Brian J.; Armstrong, Joel; Clawson, Hiram; Zweig, Ann; Karolchik, Donna; Kent, William James; Haussler, David; Paten, Benedict
Motivation: Researchers now have access to large volumes of genome sequences for comparative analysis, some generated by the plethora of public sequencing projects and, increasingly, from individual efforts. It is not possible, or necessarily desirable, that the public genome browsers attempt to curate all these data. Instead, a wealth of powerful tools is emerging to empower users to create their own visualizations and browsers. Results: We introduce a pipeline to easily generate collections of Web-accessible UCSC Genome Browsers interrelated by an alignment. It is intended to democratize our comparative genomic browser resources, serving the broad and growing community of evolutionary genomicists and facilitating easy public sharing via the Internet. Using the alignment, all annotations and the alignment itself can be efficiently viewed with reference to any genome in the collection, symmetrically. A new, intelligently scaled alignment display makes it simple to view all changes between the genomes at all levels of resolution, from substitutions to complex structural rearrangements, including duplications. To demonstrate this work, we create a comparative assembly hub containing 57 Escherichia coli and 9 Shigella genomes and show examples that highlight their unique biology. Availability and implementation: The source code is available as open source at: https://github.com/glennhickey/progressiveCactus The E.coli and Shigella genome hub is now a public hub listed on the UCSC browser public hubs Web page. Contact: email@example.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25138168
Bermúdez-Tamayo, Clara; Pernett, Jaime Jiménez; Garcia-Gutierrez, Jose Francisco; Cózar-Olmo, José Manuel; Valero-Aguilera, Beatriz
Abstract Background: People who use the Internet to research health topics do not usually find all the information they need and do not trust what they read. This study was designed to assess the reliability, accessibility, readability, and popularity of cancer Web sites in Spanish and to analyze the suitability of Web site content in accordance with the specific information needs of cancer patients. Materials and Methods: This was a two-phase, cross-sectional, descriptive study. The first phase involved data gathering through online searches and direct observation. The second phase involved individual structured interviews with 169 patients with breast, prostate, bladder, and kidney cancer. Spearman rank correlations were calculated between variables. Results: Most sites belonged to nonprofit organizations, followed by universities or medical centers (14%). Thirty-one percent of the Web sites had quality seals, 59% provided details of authorship, 62% provided references to bibliographic sources, 38% identified their funding sources, and 54% showed the date of their last update. Twenty-one percent of the Web sites did not meet the minimum accessibility criteria. With regard to readability, 24% of the texts were considered to be “quite difficult.” Patients' information needs vary depending on the type of cancer they have, although all patients want to know about the likelihood of a cure, survival rates, the side effects, and risks of treatment. Conclusions: The health information on cancer available on the Internet in Spanish is not very reliable, accessible, or readable and is not necessarily the information that breast, kidney, prostate, and bladder cancer patients require. The content of cancer Web sites needs to be assessed according to the information needs of patients. PMID:24073899
Wisdom, Jenifer R.; White, Nathan A.; Goldsmith, Kimberley A.; Bielavitz, Sarann; Davis, Charles E.; Drum, Charles
A needs assessment determined Oregon community college staff knowledge of (a) accessible information technology (IT) guidelines, (b) IT-related disability laws, (c) legal obligations regarding Web accessibility, and (d) perceived level of current Web accessibility. Training needs were assessed and training suggestions were solicited. IT staff…
Mariger, Heather Ann
The Internet is an integral part of higher education today. Students, faculty, and staff must have access to the institutional web for essential activities. For persons with disabilities, the web is a double-edged sword. While an accessibly designed website can mitigate or remove barriers, an inaccessible one can make access impossible. If…
Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based…
Hakkarinen, C.; Brown, D.; Callahan, J.; hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.
A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL developed Live Access Server software as host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged for monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable options for defining analysis options, and for accessing other DOD-compatible ("Distributed Ocean Data System-compatible") data sets, residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate web-based access to the largest possible selection of climate model output data sets available throughout the world.
Lahinta, A.; Haris, I.; Abdillah, T.
The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving the findability of libraries' digital content on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Service Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with a multi-tier architecture can help people easily access the website of the library server of Gorontalo Province and supports access to digital collections, subscription databases, and library catalogs in each library in every Regency or City in Gorontalo Province.
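At the wire level, a SOAP exchange like the one this system relies on is an XML envelope posted to a service endpoint. The following is a hedged sketch of composing a minimal SOAP 1.1 request for a catalog search; the operation and parameter names (`SearchCatalog`, `title`) are illustrative assumptions, not the actual WSDL of the Gorontalo system.

```python
# Hedged sketch: building a minimal SOAP 1.1 request envelope for a
# hypothetical catalog-search operation. The operation and element
# names are assumptions for illustration, not a documented API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_search_request(title):
    """Return a SOAP envelope (as a string) wrapping a search call."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, "SearchCatalog")   # hypothetical operation
    ET.SubElement(op, "title").text = title
    return ET.tostring(env, encoding="unicode")

xml = build_search_request("Data Mining")
print(xml)
```

In practice the envelope would be sent as the body of an HTTP POST with a `SOAPAction` header, and the WSDL would dictate the exact operation, namespace, and parameter names.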
Yuan, Dingrong; Mo, Zhuoying; Xie, Bing; Xie, Yangcai
There are huge amounts of information on Web pages, including content information alongside useless material such as navigation, advertisements, and Flash animations. To reduce the toil of Web users, we established a technique to extract the content information from a Web page. First, we analyzed the semantics of Web documents with Google's V8 engine and parsed each document into a DOM tree. We then traversed and pruned the DOM tree in light of the characteristics of the Web page's markup language. Finally, we extracted the content information from the page. Theory and experiments showed that the technique can simplify Web pages, present content information to Web users, and supply clean data for application areas such as retrieval, KDD, and data mining from the Web.
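The prune-then-extract step can be sketched as follows. This is an illustrative approximation using Python's stdlib `HTMLParser` rather than the authors' V8-based DOM implementation, and the set of pruned tags is an assumption about which elements typically carry navigation or advertising rather than content.

```python
# Illustrative sketch of DOM-pruning content extraction: text inside
# tags that usually carry navigation, styling, or scripts is dropped,
# and the remaining text is kept as the page's content. The PRUNE set
# is an assumption, not the paper's actual pruning rules.
from html.parser import HTMLParser

PRUNE = {"nav", "script", "style", "aside", "footer"}

class ContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside pruned subtrees
        self.chunks = []    # collected content text

    def handle_starttag(self, tag, attrs):
        if tag in PRUNE:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in PRUNE and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are not inside any pruned subtree.
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

html = """<html><body><nav>Home | About</nav>
<p>Main article text.</p><script>ads()</script></body></html>"""
p = ContentExtractor()
p.feed(html)
print(" ".join(p.chunks))   # → Main article text.
```

A production extractor would prune on richer signals (link density, class names such as `ad` or `menu`, rendered position) rather than tag names alone.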
Donato, David I.
McCray, A. T.; Loane, R. F.; Browne, A. C.; Bangalore, A. K.
We conducted a study of user queries to the National Library of Medicine Web site over a three-month period. Our purpose was to study the nature and scope of these queries in order to understand how to improve users' access to the information they are seeking on our site. The results show that the queries are primarily medical in content (94%), with only a small percentage (5.5%) relating to library services, and with a very small percentage (0.5%) not being medically relevant at all. We characterize the data set, and conclude with a discussion of our plans to develop a UMLS-based terminology server to assist NLM Web users. PMID:10566330
Nelson, Michael L.; Bianco, David J.
NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Guenther, Rebecca; Myrick, Leslie
Born-digital material such as archived Web sites provides unique challenges in ensuring access and preservation. This article examines some of the technical challenges involved in harvesting and managing Web archives as well as metadata strategies to provide descriptive, technical, and preservation related information about archived Web sites,…
Amtmann, Dagmar; Johnson, Kurt; Cook, Debbie
Summarizes results from a study of problems blind people using screen readers and Web browsers experienced when reading tables on the World Wide Web. Explains accessibility factors including complexity of layout, use of HTML programming, features of screen-reading software, and user variables; and makes recommendations for Web-based tables,…
This collective case study reviewed the current state of Web accessibility at 102 postsecondary colleges and universities in North Carolina. The study examined themes within Web-accessibility compliance and identified which disability subgroups were most and least affected, why the common errors were occurring, and how the errors could be fixed.…
Licciardone, John C
A large national telephone survey conducted in 2000 by the Pew Internet & American Life Project estimated that 52 million American adults used the Internet to acquire health information. Based on population estimates, these users comprised 25% of all adults. The growth of online health information coupled with increasing Internet access has led to the emergence of consumer informatics as an outbranching from traditional medical informatics. The ease of international communications afforded by the Internet holds great promise for consumers in the realm of travel medicine. For example, an early study found that a Web site hosted by an international travel medicine clinic was accessed by client computers in more than 100 countries. Nevertheless, relatively little research has been conducted on consumer informatics in travel medicine. An important aspect of consumer informatics involves studying consumers' needs for health information. The purpose of this study was to perform a descriptive analysis of overall use and content-specific access patterns for health information provided on a clinic-based Web site for international travelers.
Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael
The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.
Valeau, Edward J.; Luan, Jing
In this study, the process and outcome of a web-based planning application, called Ports of Call, are discussed. The application allows college management to create, edit, and report out activities relating to college plans, all through a web browser. Its design was based on best practices in modern web technology and the application can be easily…
Fujiki, Tadayoshi; Hanada, Eisuke; Yamada, Tomomi; Noda, Yoshihiro; Antoku, Yasuaki; Nakashima, Naoki; Nose, Yoshiaki
Abstract Much has been written concerning the difficulties faced by visually handicapped persons when they access the internet. To solve some of the problems and to make web pages more accessible, we developed a tool we call the "Easy Bar," which works as a toolbar on the web browser. The functions of the Easy Bar are to change the size of web texts and images, to adjust the color, and to clear cached data that is automatically saved by the web browser. These functions are executed with ease by clicking buttons and operating a pull-down list. Since the icons built into Easy Bar are quite large, it is not necessary for the user to deal with delicate operations. The functions of Easy Bar run on any web page without increasing the processing time. For the visually handicapped, Easy Bar would contribute greatly to improved web accessibility to medical information.
New economy corporate Web sites have pioneered exciting techniques-rich media, interactivity, personalization, community, and integration of much third-party content. Discusses business-to-business (B2B) Web commerce, with examples of several B2B corporate sites; portal and content elements of these sites; and corporate content outlooks. (AEF)
Web pedagogical content knowledge generally takes pedagogical knowledge, content knowledge, and Web knowledge as basis. It is a structure emerging through the interaction of these three components. Content knowledge refers to knowledge of subjects to be taught. Pedagogical knowledge involves knowledge of process, implementation, learning methods,…
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
The biomedical digital library of the future is expected to provide access to stores of biomedical database information containing text and images. Developing efficient methods for accessing such databases is a research effort at the Lister Hill National Center for Biomedical Communications of the National Library of Medicine. In this paper we examine issues in providing access to databases across the Web and describe a tool we have developed: the Web-based Medical Information Retrieval System (WebMIRS). We address a number of critical issues, including preservation of data integrity, efficient database design, access to documentation, quality of query and results interfaces, capability to export results to other software, and exploitation of multimedia data. WebMIRS is implemented as a Java applet that allows database access to text and to associated image data, without requiring any user software beyond a standard Web browser. The applet implementation allows WebMIRS to run on any hardware platform (such as PCs, the Macintosh, or Unix machines) which supports a Java-enabled Web browser, such as Netscape or Internet Explorer. WebMIRS is being tested on text/x-ray image databases created from the National Health and Nutrition Examination Surveys (NHANES) data collected by the National Center for Health Statistics.
Zeng, Xiaoming; Sligar, Steven R.
Human resource development programs in various institutions communicate with their constituencies including persons with disabilities through websites. Web sites need to be accessible for legal, economic and ethical reasons. We used an automated web usability evaluation tool, aDesigner, to evaluate 205 home pages from the organizations of AHRD…
Mosier, Mona L.
Los Alamos National Laboratory's Research Library has developed a World Wide Web (WWW) page to allow laboratory staff, as well as individuals from around the world, access to information via the Internet. While many Web pages offer information solely on the organization, the Los Alamos National Laboratory page provides links to reference materials…
Madhusudhan, Margam; Aggarwal, Shalini
Purpose: The purpose of the paper is to examine the various features and components of web-based online public access catalogues (OPACs) of IIT libraries in India with the help of a specially designed evaluation checklist. Design/methodology/approach: The various features of the web-based OPACs in six IIT libraries (IIT Delhi, IIT Bombay, IIT…
On the basis of analyzing the characteristics of content components in the current distance education technology specifications, this paper puts forward an Open Content Object model for the Web-based learning content by extending the Sharable Content Object (SCO) of the Sharable Content Object Reference Model (SCORM) which was established by the…
Yan, Liang; Rong, Chunming
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied to many domains such as retail and supply chains. How to prevent tag content from unauthorized readout is a core problem of RFID privacy. The hash-lock access control protocol makes a tag release its content only to a reader that knows the secret key shared between them. However, in order to obtain this shared secret key, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is a back-end database connection no longer needed, but the tag cloning problem can also be eliminated at the same time.
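The hash-lock idea described above can be sketched minimally: the tag stores only `metaID = H(key)` and answers queries with that value until a reader presents the matching key. This is a sketch of the basic protocol only; the identity-based key-exchange step that removes the back-end database is not modelled, and the hash choice is an assumption.

```python
# Minimal sketch of hash-lock tag access control (illustrative only).
# The tag publishes metaID = H(key) while locked and releases its real
# ID only to a reader presenting the shared key.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()   # assumed hash function

class Tag:
    def __init__(self, tag_id: str, key: bytes):
        self._id = tag_id
        self._meta_id = h(key)   # stored instead of the real ID

    def query(self) -> bytes:
        """Locked response: only the meta-ID is revealed."""
        return self._meta_id

    def unlock(self, key: bytes):
        """Release content only if the reader knows the shared key."""
        if h(key) == self._meta_id:
            return self._id
        return None

tag = Tag("EPC-0042", b"shared-secret")
print(tag.unlock(b"wrong-key"))      # → None
print(tag.unlock(b"shared-secret"))  # → EPC-0042
```

The paper's contribution is how the reader obtains `key` in the first place: via identity-based key exchange with the tag rather than a lookup in a central back-end database.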
Baris, Mehmet Fatih
Several studies have been conducted on technological, pedagogical content knowledge and web-based education. In this study, the Technological Pedagogical Content Knowledge and Educational Use of Web Technologies (TPCK-W) were analyzed in addition to the self-efficacy and attitudes of 33 teachers from eight different branches carrying out their…
Brown, Andy; Jay, Caroline; Harper, Simon
Presenting Web content through screen readers can be a challenging task, but this is the only means of access for many blind and visually impaired users. The difficulties are more acute when the information forms part of an interactive process, such as the increasingly common "Web 2.0 applications". If the process is to be completed correctly and efficiently it is vital that appropriate information is given to the user at an appropriate time. Designing a non-visual interface that achieves these aims is a non-trivial task, for which several approaches are possible. The one taken here is to use eye-tracking to understand how sighted users interact with the content, and to gain insight into how they benefit from the information, then apply this understanding to design a non-visual user interface. This paper describes how this technique was applied to develop audio interfaces for two common types of interaction-auto-suggest lists and pop-up calendars. Although the resulting interfaces were quite different, one largely mirroring the visual representation and the other not, evaluations showed that the approach was effective, with both audio implementations effective and popular with participants.
Special Report: State of the Content Industry. Content Anywhere, Anytime; Content Management Technology: A Booming Market; Entering the Content Space: Carry Your Lawyer at All Times; Content Unchained: The New Value Web; Arnold on Pricing: Disturbing Trends Ahead.
Mickey, Bill; Trippe, Bill; Ojala, Marydee; Pack, Thomas; Arnold, Stephen E.
This special report on the electronic content industry covers three interrelated areas: technology, rights/legal issues, and pricing/business models. Articles include: "Content Anywhere, Anytime"; "Content Management Technology: A Booming Market"; "Entering the Content Space: Carry Your Lawyer at All Times"; Content Unchained: The New Value Web";…
... CFR Part 27 Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and... Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and Automated Kiosks at U.S... and foreign air carriers to make their Web sites that market air transportation to the general...
Radovan, Marko; Perdih, Mojca
E-learning is a rapidly developing form of education. One of the key characteristics of e-learning is flexibility, which enables easier access to knowledge for everyone. Information and communications technology (ICT), which is e-learning's main component, enables alternative means of accessing the web-based learning materials that comprise the…
Bryen, Diane Nelson; Heake, George; Semenuk, Amy; Segal, Mary
People with significant speech and motor disabilities often face obstacles attempting to navigate the World Wide Web. This is especially true for the millions of children and adults worldwide who rely on or could benefit from augmentative and alternative communication (AAC). This study was designed to test the usability of WebAACcess, an accessibility enhancement tool designed to bypass some of the barriers to navigating the web. Using a repeated-measures research design, whereby subjects were their own controls, each of the 12 participants (7 with motor disabilities who used AAC and 5 peers without disabilities) navigated equivalent web pages using Internet Explorer alone and Internet Explorer with WebAACcess. Results consistently demonstrated that, for all of the participants, navigating with WebAACcess and Internet Explorer was more efficient and easier than, and equally effective as, navigating with Internet Explorer alone.
Caudle, Dana M.; Schmitz, Cecilia M.
Libraries are investing heavily in an increasing number of electronic journals and providing access to them through their websites. We set out to determine if ARL academic libraries offer the same options on their websites to access electronic journals and databases. Using a checklist, we evaluated the websites for the presence of A-Z lists, links…
Brunvand, Stein; Abadeh, Heidi
The proliferation of Web 2.0 technologies has made it possible for teachers to create a variety of engaging online learning activities for students of all ages. However, for students with learning disabilities, the prospect of having to search, read, and analyze information online can be overwhelming. This article reviews a variety of tools and…
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
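The access pattern described above, a script following links between JSON fragments under program control, can be sketched as follows. To stay self-contained and offline, the "service" is mocked with a dict; the record structure and field names are assumptions for illustration, not the actual SciNetS.org / Semantic-JSON schema.

```python
# Illustrative sketch of Semantic-JSON-style access: a script walks
# linked data by fetching one JSON fragment per record and following
# its links. The record format here is an assumed stand-in, not the
# real SciNetS.org schema.
import json

# Mocked responses a Semantic-JSON service might return per record ID.
MOCK_SERVICE = {
    "rec:1": {"label": "Gene X", "links": ["rec:2"]},
    "rec:2": {"label": "Phenotype Y", "links": []},
}

def fetch(record_id):
    """Stand-in for an HTTP GET returning one JSON fragment."""
    return json.loads(json.dumps(MOCK_SERVICE[record_id]))

def walk(record_id, seen=None):
    """Follow semantic links from a record, collecting labels once."""
    seen = seen if seen is not None else set()
    if record_id in seen:
        return []
    seen.add(record_id)
    rec = fetch(record_id)
    labels = [rec["label"]]
    for nxt in rec["links"]:
        labels += walk(nxt, seen)
    return labels

print(walk("rec:1"))   # → ['Gene X', 'Phenotype Y']
```

The appeal of this style, as the abstract notes, is that ordinary scripting languages such as Perl and Ruby (or Python, as here) can traverse linked data fragment by fragment without a SPARQL endpoint.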
Drnasin, Ivan; Grgić, Mislav; Gogić, Goran
Alvy, Lisa M; Calvert, Sandra L
In 2006 the Institute of Medicine (IOM) concluded that food marketing was a contributor to childhood obesity in the United States. One recommendation of the IOM committee was for research on newer marketing venues, such as Internet Web sites. The purpose of this cross-sectional study was to answer the IOM's call by examining food marketing on popular children's Web sites. Ten Web sites were selected based on market research conducted by KidSay, which identified favorite sites of children aged 8 to 11 years during February 2005. Using a standardized coding form, these sites were examined page by page for the existence, type, and features of food marketing. Web sites were compared using chi2 analyses. Although food marketing was not pervasive on the majority of the sites, seven of the 10 Web sites contained food marketing. The products marketed were primarily candy, cereal, quick serve restaurants, and snacks. Candystand.com, a food product site, contained a significantly greater amount of food marketing than the other popular children's Web sites. Because the foods marketed to children are not consistent with a healthful diet, nutrition professionals should consider joining advocacy groups to pressure industry to reduce online food marketing directed at youth.
Liu, Linyuan; Huang, Zhiqiu; Zhu, Haibin
With the popularity of Internet technology, web services are becoming the most promising paradigm for distributed computing. This increased use of web services has meant that more and more personal information of consumers is being shared with web service providers, leading to the need to guarantee the privacy of consumers. This paper proposes a role-based privacy access control framework for Web services collaboration. It utilizes roles to specify the privacy privileges of services, and considers the impact of a service's historic experience in playing roles on its reputation degree. Compared to traditional privacy access control approaches, this framework can make fine-grained authorization decisions, thus efficiently protecting consumers' privacy.
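The core authorization decision can be sketched as follows. This is a hedged sketch under stated assumptions: the role names, privacy attributes, and reputation threshold are illustrative stand-ins, not the framework's actual policy model.

```python
# Hedged sketch of role-based privacy access control for a service
# collaboration: each role maps to the privacy attributes a service
# may read, and a reputation threshold (earned through past role
# performance) gates access. All names and values are assumptions.

ROLE_PRIVILEGES = {
    "shipping": {"name", "address"},
    "billing": {"name", "card_token"},
}
MIN_REPUTATION = 0.7   # assumed threshold for being trusted in a role

def authorize(service_role, reputation, requested_attrs):
    """Return the subset of requested privacy attributes the service
    may access, or an empty set if its reputation is too low."""
    if reputation < MIN_REPUTATION:
        return set()
    return requested_attrs & ROLE_PRIVILEGES.get(service_role, set())

print(authorize("shipping", 0.9, {"name", "address", "card_token"}))
print(authorize("shipping", 0.5, {"name"}))   # low reputation → set()
```

The intersection makes the decision fine-grained: a shipping service with good reputation gets the address but never the card token, which is the kind of per-attribute authorization the framework aims for.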
Marenco, Luis; Wang, Rixin; Shepherd, Gordon M; Miller, Perry L
This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are "harvested" on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource's content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) "LinkOut" to a resource's data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource's lexicon and ontology, 5) sharing a resource's database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research.
Demuth, Nora H.; Knudson, Christa K.
The web development team of the Environmental Technology Directorate (ETD) at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) redesigned the ETD website as a database-driven system, powered by the newly designed ETD Common Information System (ETD-CIS). The ETD website was redesigned in response to an analysis that showed the previous ETD websites were inefficient, costly, and lacking in a consistent focus. Redesigned and newly created websites based on a new ETD template provide a consistent image, meet or exceed accessibility standards, and are linked through a common database. The protocols used in developing the ETD website support integration of further organizational sites and facilitate internal use by staff and training on ETD website development and maintenance. Other PNNL organizations have approached the ETD web development team with an interest in applying the methods established by the ETD system. The ETD system protocol could potentially be used by other DOE laboratories to improve their website efficiency and content focus. “The tools by which we share science information must be as extraordinary as the information itself.” – DOE Science Director Raymond Orbach
Examines the potential of the World Wide Web as an information and bibliographic source for scientists. Analyzes through content analysis the Web pages retrieved by the major search engines on a particular date as a result of a query regarding informetrics, and compares results to data retrieved from commercial databases. (Author/LRW)
Hazard, Brenda L.
This study examines the Web sites of the Association of Research Libraries member libraries to determine the presence of a separate text version of the default graphical homepage. The content of the text version and the homepage is compared. Of 121 Web sites examined, twenty libraries currently offer a text version. Ten sites maintain wholly…
Web 2.0 technologies have created a trend of user-generated content by supporting media production, collaboration, communication, and dissemination. User-generated content is translated into student-generated content (SGC) in education. SGC engages learners in an authentic project that fosters students' autonomy, creativity, and real-world…
Pais, V. F.; Stancalie, V.; Mihailescu, F. A.; Totolici, M. C.
Web services are starting to be widely used in applications for remotely accessing data. This is of special interest for research based on small and medium scale fusion devices, since scientists participating remotely in experiments access large amounts of data over the Internet. Recent tests were conducted to see how the new network traffic generated by the use of web services can be integrated into the existing infrastructure, and what the impact would be on existing applications, especially those used in a remote participation scenario.
Stoecker, Nora K.; Alford, Dixie L.
Describes the processes developed in the Sandia National Laboratories (a Department of Energy multiprogram national laboratory) Technical Library to provide and improve desktop access to Sandia-generated documents. Discusses procedures for cataloging these electronic reports, including identification of the bibliographic information and MARC tags…
Kleib, Manal; Sales, Anne E; Andrea Baylon, Melba; Beaith, Amy; Lima, Isac
The aim of this study was to identify the proportion and characteristics of Registered Nurses who reported having had access to the Web in the year 2000 National Sample Survey of Registered Nurses. We conducted a secondary data analysis using more than 25 000 respondents to the year 2000 National Sample Survey of Registered Nurses. Using bivariate and logistic regression, we examined the association of reporting access to the Web with demographic, educational, and other characteristics of Registered Nurse respondents to the survey. We found that several factors were associated with an increased likelihood of Registered Nurses' reporting having had access to the Web in the year 2000. These included race/ethnicity, marital and family status, highest level of nursing education, current enrollment in a nursing education program, annual household income, and continuing education in informatics. The likelihood of reporting access decreased with sex, age, experience since first nursing degree, and primary job responsibility. The results of this study indicate that having access to the Web enhances Registered Nurses' participation in professional development and continuing education opportunities.
Background Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data by combining the web's collaboration and distribution architecture with the interface interactivity and computational power of desktop applications, an approach recently termed a rich internet application. Conclusions BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes. PMID:21777450
This document outlines the procedures for ensuring access to EPA information by hosting EPA data and information on the epa.gov server. Additionally, it provides the procedures for obtaining waivers of this requirement.
The National Space Science Data Center (NSSDC) was established by NASA to provide for the preservation and dissemination of scientific data from NASA missions. This white paper will address the NSSDC policies that govern data preservation and dissemination and the various methods of accessing NSSDC-archived data via the web.
Dai, Jianli; Chen, Yuansha; Lauzardo, Michael
Mycobacteria include a large number of pathogens. Identification to species level is important for diagnoses and treatments. Here, we report the development of a Web-accessible database of the hsp65 locus sequences (http://msis.mycobacteria.info) from 149 out of 150 Mycobacterium species/subspecies. This database can serve as a reference for identifying Mycobacterium species. PMID:21450960
Brown, K. E.; Newby, K.; Caley, M.; Danahay, A.; Kehal, I.
Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among…
Examines the primary considerations in choosing a computer for Web access in a school setting: (1) Internet connection; (2) operating system; (3) processor and memory; (4) video subsystem; (5) audio; and (6) hard disk. Suggests having both a budget and a high-end personal computer. (AEF)
Whitney, Michael P.
From computer workstations to the world of the web, statutes and policies have afforded students with disabilities the right to participate in postsecondary education in a non-discriminatory manner. Automatic doors and adjustable tables are commonplace on campuses and represent prime examples of accessible policy adherence, but what effect do…
resolution process? What is the potential impact of the Semantic Web on the bug resolution process, and vice versa? To answer these questions, we...the following Section 2.2, we illustrate these challenges by means of an actual bug report in a chosen OSS community. We then describe a scenario of the...communities. Some communities may not use any of these tools, but they all still do provide some means of interaction. For example, Wikipedia has none of the
Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.
The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC prior to being shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), command line utilities such as wget and curl or with any web browser. We anticipate these services being used for everything from simple command line access, used in shell scripts and higher programming languages to being integrated within complex data processing software. In addition to improving access to our data by the seismological community the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned
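The abstract notes that these services can be driven from any language or from command-line tools such as wget and curl. A minimal sketch of composing such a request URL is shown below; the base path and parameter names follow the general pattern of IRIS-style time-series services and should be checked against the current service documentation before use:

```python
# Sketch: building a query URL for an IRIS-style time-series web service.
# Base path and parameter names are assumptions to verify against the docs.
from urllib.parse import urlencode

def timeseries_url(net, sta, loc, cha, start, end, output="ascii",
                   base="http://service.iris.edu/irisws/timeseries/1/query"):
    """Compose a request URL selecting a channel and time window,
    with the desired output format (e.g. ascii, sac, plot)."""
    params = {"net": net, "sta": sta, "loc": loc, "cha": cha,
              "start": start, "end": end, "output": output}
    return base + "?" + urlencode(params)

url = timeseries_url("IU", "ANMO", "00", "BHZ",
                     "2010-02-27T06:30:00", "2010-02-27T07:30:00")
# The resulting URL can be fetched with any HTTP client, e.g.:
#   curl -o data.ascii "<url>"
```

Because the services are plain HTTP, the same URL works unchanged from shell scripts, Perl, Java, or a browser, which is what makes the data reachable from other disciplines.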
... the Basis of Disability in Air Travel: Accessibility of Web Sites and Automated Kiosks at U.S... supplemental notice of proposed rulemaking (SNPRM) on the accessibility of Web sites and automated kiosks that... for an extension, citing difficulties in using the online comment form on the www.regulations.gov...
Jaeger, Paul T.
In the United States, a number of federal laws establish requirements that electronic government (e-government) information and services be accessible to individuals with disabilities. These laws affect e-government Web sites at the federal, state, and local levels. To this point, research about the accessibility of e-government Web sites has…
ERDC/CHL CHETN-IV-103, February 2015. Approved for public release; distribution is unlimited. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database. …modeling and planning missions require metocean data (e.g., winds, waves, tides, water levels). WaveNet is a web-based graphical-user-interface (GUI)…
Web 2.0 is a widely used term to describe web-based tools that rely on user input and collaboration. So what would professional learning 2.0 look like? When educators are asked to do more with less and still reach ever-rising benchmarks for student achievement, leaders must begin to think differently about how classroom teachers are supported. Now…
Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon
This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. In particular, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
Chiang, Michael F; Cole, Roy G; Gupta, Suhit; Kaiser, Gail E; Starren, Justin B
Rapid advances in information technology have dramatically transformed the world during the past several decades. Access to computers and the World Wide Web is increasingly required for education and employment, as well as for many activities of daily living. Although these changes have improved society in many respects, they present an obstacle for visually disabled patients who may have significant difficulty processing the visual cues presented by modern graphical user interfaces. This article reviews the specific barriers to computer and Web access faced by visually disabled patients, describes clinical evaluation methods, summarizes traditional low vision methods as well as newer assistive computer technologies for universal accessibility, and discusses emerging technologies and future directions in this area.
Elmsheuser, J.; Walker, R.; Serfon, C.; Garonne, V.; Blunier, S.; Lavorini, V.; Nilsson, P.
With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyse collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.
Brooks, David W.; Markwell, John P.; Langell, Marjorie A.; Emry, Randall; Crippen, Kent J.; Brooks, Helen B.; Abuloum, Amjad; Cohen, Karen C.
We report on the creation and delivery of Web-based content courses stressing content integration for high school chemistry teachers. We make recommendations to other chemical educators seeking to develop instructional systems that emphasize automatic, repeatable practice with immediate, performance-related feedback.
von Franqué, Alexander; Tellioglu, Hilda
Many educational institutions use Learning Management Systems to provide e-learning content to their students. This often includes quizzes that can help students to prepare for exams. However, the content is usually web-optimized and not very usable on mobile devices. In this work a native mobile application ("UML Quiz") that imports…
Gökçearslan, Sahin; Karademir, Tugra; Korucu, Agah Tugrul
Technological Pedagogical Content Knowledge, one of the frameworks proposed in order to popularize the use of technology in a classroom environment, has been customized and has taken the form of Web Pedagogical Content Knowledge. The Relational Screening Model was used in this study. It aims to determine whether a profile of preservice teachers…
This article reports on a study addressing the readability of content on academic libraries' Web sites, specifically content intended to improve users' information literacy skills. Results call for recognition of readability as an evaluative component of text in order to better meet the needs of diverse user populations. (Contains 8 tables.)
Johnston, Semay; Renambot, Luc; Sauter, Daniel
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
Loman, Nicholas J; Pallen, Mark J
Tokuno, Hironobu; Tanaka, Ikuko; Umitsu, Yoshitomo; Akazawa, Toshikazu; Nakamura, Yasuhisa
Here we describe a web-accessible digital brain atlas of the common marmoset (Callithrix jacchus) at http://marmoset-brain.org:2008. We prepared the histological sections of the marmoset brain using various staining techniques. For virtual microscopy, high-resolution digital images of sections were obtained with Aperio Scanscope. The digital images were then converted to Zoomify files (zoomable multiresolution image files). Thereby, we could provide the multiresolution images of the marmoset brains for fast interactive viewing on the web via the Internet. In addition, we describe an automated method to obtain drawings of Nissl-stained sections.
Abel, Fabian; Henze, Nicola; Krause, Daniel
Common social tagging systems like Flickr, Delicious and others have lately become very popular. The key benefits of these systems include that users can easily annotate Web content and benefit from the annotations of other users with improved retrieval support. With GroupMe! we extend the idea of current social tagging systems by enabling users not only to tag Web resources they are interested in, but also to create collections (groups) of these Web resources by simple drag & drop operations. The grouping metaphor is intuitive and easy for the users, and our evaluation shows that users appreciate the grouping facility and use this feature to organize and structure diverse Web content. Technically, the grouping of resources carries valuable information about Web resources and their relations. GroupMe! exploits such information to improve search and retrieval. The RESTful Semantic Web interface of GroupMe! also enables other applications to benefit from the GroupMe! features and makes GroupMe! a Social Semantic Web application.
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
Hershkovitz, Arnon; Hardof-Jaffe, Sharon; Nachmias, Rafi
This study presents an empirical investigation of the relationship between the hierarchical structure of content delivered to students within a Learning Management System (LMS) and its actual consumption. To this end, campus-wide data relating to 1,203 courses were collected from the LMS' servers and were subsequently analyzed using data mining…
Muenkelt, Olaf; Kaufmann, Oliver; Eckstein, Wolfgang
This article proposes a way to automatically retrieve images from the World Wide Web using a semantic description for images and an agent concept for the retrieval of images. The system represents images in a textual way, e.g., look for a portrait of a specific person, or fetch an image showing a countryside in Southern California. These textual descriptions are fed into search engines, e.g., Yahoo, AltaVista. The resulting HTML documents are searched for links. The next step subsequently processes each link by fetching the document over the net, converting it to an ASCII representation, and performing a full-text search using the image description. This leads to starting points of images which are retrieved via a web agent. The image descriptions are decomposed into a set of parts containing image operations which are further processed; e.g., a set representing the background of a portrait tries to find a homogeneous region in the image, because this is what one is likely to find in a portrait. Additional operations are performed on the foreground, i.e., the image region which contains, e.g., the face of a person. The system is realized using two C++ libraries: one for building up web agents, LIWA++, and one for processing images, HORUS.
Güntsch, Anton; Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S; Triebel, Dagmar
With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers which is now rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open source provider software scripts, best practices documentations and recommendations for RDF metadata elements facilitating harmonized access to collection information in web portals.
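The redirection mechanism described above, where one stable HTTP-URI resolves to either a human-readable page or a machine-readable RDF representation, is typically driven by HTTP content negotiation. The sketch below illustrates the idea; the URI and the redirect path suffixes are hypothetical, not the consortium's actual specification:

```python
# Sketch of Accept-header-driven redirection for a stable specimen
# identifier. URI and path suffixes are illustrative assumptions.
def redirect_target(identifier: str, accept_header: str) -> str:
    """Machine clients requesting RDF are redirected to the
    machine-readable representation; browsers get the human-readable
    specimen page."""
    rdf_types = ("application/rdf+xml", "text/turtle")
    if any(t in accept_header for t in rdf_types):
        return identifier + "/rdf"
    return identifier + "/page"

uri = "http://example.org/specimen/B100042"
print(redirect_target(uri, "text/html"))            # human-readable page
print(redirect_target(uri, "application/rdf+xml"))  # machine-readable RDF
```

Keeping the identifier itself representation-neutral is what lets it stay stable while the underlying portal software changes.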
Science and science education benefit from easy access to data, yet often geophysical data sets are large, complex and difficult to share. The difficulty in sharing data and imagery easily inhibits both collaboration and the use of real data in educational applications. The dissemination of data products through web maps serves as a very efficient and user-friendly method for students, the public and the science community to gain insights and understanding from data. Few research groups provide direct access to their data, let alone map-based visualizations. By building upon current GIS infrastructure with web mapping technologies, like ArcGIS Server, scientific groups, institutions and agencies can enhance the value of their GIS investments. The advantages of web maps to serve data products are many; existing web-mapping technology allows complex GIS analysis to be shared across the Internet, and can be easily scaled from a few users to millions. This poster highlights the features of an interactive web map developed at the Polar Geophysics Group at the Lamont-Doherty Earth Observatory of Columbia University that provides a visual representation of, and access to, data products that resulted from the group's recently concluded AGAP project (http://pgg.ldeo.columbia.edu). The AGAP project collected more than 120,000 line km of new aerogeophysical data using two Twin Otter aircraft. Data included ice penetrating radar, magnetometer, gravimeter and laser altimeter measurements. The web map is based upon ArcGIS Viewer for Flex, which is a configurable client application built on the ArcGIS API for Flex that works seamlessly with ArcGIS Server 10. The application can serve a variety of raster and vector file formats through the Data Interoperability for Server, which eliminates data sharing barriers across numerous file formats. The ability of the application to serve large datasets is only hindered by the availability of appropriate hardware. ArcGIS is a proprietary
Vázquez-Naya, José; Loureiro, Javier; Calle, Julián; Vidal, Jorge; Sierra, Alejandro
The evolution in information and telecommunication technologies has allowed the development of systems that use the Internet infrastructure and Web technology to remotely access a hospital's picture archiving and communication system (PACS). However, one of the main problems in the construction of this type of system is the development of mechanisms that guarantee the security of the medical data that are being consulted. Most countries have specific norms for the protection of such medical data. This work describes security mechanisms that are developed in an access system to PACS DICOM with Web technology and comply with the Spanish legislation concerning the protection of medical data. The proposed security mechanisms are flexible, they leave room for the definition of security policies adjusted to the needs of each particular organization and they can be adapted to comply with new or foreign norms.
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
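Resources in the data-focused service follow a uniform URL pattern, as the documentation linked above describes. A minimal sketch of composing such a request is shown here; the resource and format names are taken from the documented pattern, but should be verified against the live API docs:

```python
# Sketch: composing a ChEMBL data web-service URL (version 2.0.x style).
# Resource names and formats should be checked against the live docs.
def chembl_resource_url(resource: str, chembl_id: str, fmt: str = "json",
                        base: str = "https://www.ebi.ac.uk/chembl/api/data") -> str:
    """Return the URL for a single entity, e.g. a molecule record,
    in the requested serialization (json, xml, ...)."""
    return f"{base}/{resource}/{chembl_id}.{fmt}"

url = chembl_resource_url("molecule", "CHEMBL25")
# Fetch with any HTTP client, e.g.:
#   import requests; record = requests.get(url).json()
```

The same pattern applies to targets, assays and activities, which is what makes the services easy to chain into drug-discovery workflows.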
Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.
The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station-specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station-specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
Redmon, R.; Kihn, E.; Zhizhin, M.
We present SPIDR III, a web based data access, visualization and data management system for the space environment community, allowing a solar terrestrial physics customer to intelligently access and manage historical space physics data for integration with environmental models and space weather forecasts. SPIDR III is the newly redesigned Space Physics Interactive Resource (SPIDR) web application and was redesigned with input from its user community via an intensive usability study. We will present SPIDR III's new features and improved usability, and discuss lessons learned in usability and in federating multi-source data. In 2004, SPIDR II underwent extensive rework, yielding a completely redesigned interface for improved user interaction and the addition of many enhanced and complex features. The usability alterations were motivated in large part by a usability study performed by outside professional site reviewers and involving key data managers and current SPIDR II users. SPIDR III is built following the application-direct-to-data-archive paradigm, using Web Services both for internal and external exchange of data and information. It is now a framework and an application set of Web Services. This application suite is fully open source and is designed to operate as a standalone VO as well as seamlessly integrate with other existing VOs. This extensible and open design yields easy mirroring worldwide for free and open exchange of scientific data and information. Data managed by SPIDR includes Geomagnetic Indices, GOES, Ionospheric, and DMSP, which is archived/ingested from many data providers including WDC, IIWG, SAO, HDF, AFCCC, SEC, and NASA, and this list is easily extendable. SPIDR III may be accessed via http://spidr.ngdc.noaa.gov/spidr/ A guest login is provided for convenience. Becoming a full-access user is free and only requires completing a short registration form.
“As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)
Alam, Najma H.
The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature broadly supports the finding that websites do not comply with the federal policy. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…
Calderón, José Luis; Zadshir, Ashraf; Norris, Keith
Chronic kidney disease (CKD) is a pandemic, and the need to inform those at risk has never been more important. The World Wide Web (WWW) is now considered a key source of health information, but the quality and utility of this information have been challenged. In this article, we assess structural, content, and linguistic barriers to accessing CKD information and discuss the implications of limited Internet access for communicating health information. Technical (number of hyperlinks), content (number of six core CKD and risk factor information domains included), and linguistic (readability and variation in readability) barriers were assessed for websites offered by 12 kidney disease associations. The Flesch Reading Ease Index method was used to estimate readability scores, and variation in the readability of information was assessed. Eleven websites met inclusion criteria. Six of 11 websites provided information in all six domains of CKD information. A mean of 4 hyperlinks (range 3-5) had to be clicked before CKD information was available, and a mean of 6 hyperlinks (range 4-12) to access all available CKD information. Mean readability scores for all six domains of CKD information exceeded national average literacy skills and far exceeded the 5th-grade reading level desired for informing vulnerable populations. Information about CKD and diabetes consistently had higher readability scores. The WWW currently has little utility for informing the populations at greatest risk for CKD. Barriers to accessing CKD information on the WWW are socioeconomic, technical, and linguistic. Lower socioeconomic status, less access to computers and the WWW, multiple website hyperlinks, incomplete information, difficult readability, and significant variation in the readability of CKD information on the WWW are social, structural, and content barriers to communicating CKD information. This may contribute to the growing epidemics of diminished public understanding about CKD, and disparities in
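The Flesch Reading Ease score used in this study can be sketched as follows. The formula itself is standard; the naive vowel-group syllable counter below is an illustrative assumption, not the instrument the authors used, so exact scores will differ from theirs.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    Higher scores mean easier text; 90-100 roughly corresponds
    to the 5th-grade level desired for vulnerable populations."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, `flesch_reading_ease("The cat sat on the mat.")` scores about 116, i.e., very easy text, while dense medical prose typically scores far lower.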
Gupta, Nidhi; Sharma, Sunil K; Rana, Jai C; Chauhan, Rajinder S
In light of the economic importance of buckwheat as well as the existence of numerous accessions of Fagopyrum species in the Himalayan regions of India, the characterization of tartary buckwheat for rutin content variation vis-à-vis DNA fingerprinting was undertaken so as to identify fingerprint profiles unique to high rutin content accessions. Rutin content analysis in mature seeds of 195 accessions of Fagopyrum tataricum showed a wide range of variation (6 μg/mg to 30 μg/mg D.W.), with most of the accessions (81%) containing 10-16 μg/mg of rutin, followed by 14% of accessions with significantly higher rutin content (17 μg/mg to 30 μg/mg) and 5% of accessions with low rutin content (≤10 μg/mg). AFLP fingerprinting of 18 accessions having high (≥17 μg/mg) and low (≤10 μg/mg) rutin content with 19 EcoRI/MseI primer combinations yielded 136 polymorphic fragments out of a total of 907. Hierarchical and model-based cluster analyses of the AFLP data strongly suggested that the 18 populations of F. tataricum cluster into two separate groups, with the high and low rutin content accessions falling into separate groups. The AFLP fingerprints associated with high rutin content accessions of F. tataricum are expected to be useful for the evaluation, conservation, and genetic improvement of buckwheat.
Andersson, Stefan; Erlingsson, Christen; Magnusson, Lennart; Hanson, Elizabeth
Policy makers in Sweden and other European Member States pay increasing attention to how best to support working carers: carers juggling unpaid family care for older family members with paid work. This paper draws on findings from a qualitative study exploring the perceived benefits and challenges of web-based information and communication technologies as a means of supporting working carers in their caregiving role. The study aimed to describe working carers' experiences of having access to the web-based family care support network 'A good place' (AGP), provided by the municipality to support those caring for an older family member. Content analysis of interviews with nine working carers revealed three themes: a support hub with connections to peers, personnel, and knowledge; experiencing ICT support as relevant in changing life circumstances; and upholding one's personal firewall. Findings indicate that the web-based family care support network AGP is an accessible, complementary means of support. Access to AGP made it easier to utilise support while balancing caregiving with work obligations and responsibilities, enabling working carers to access information, psychosocial support, and learning opportunities. In particular, it provided channels for carers to share experiences with others, to be informed, and to gain insights into medical and care issues. This reinforced working carers' sense of competence, helping them meet caregiving demands and see positive aspects in their situation. Carers' low levels of digital skills and anxieties about using computer-based support were barriers to utilising web-based support and could lead to deprioritising of this support. To help carers overcome these barriers and to better match web-based support to working carers' preferences and situations, web-based support must be introduced in a timely manner and must more accurately meet each working carer's unique caregiving needs.
Deserno, Thomas M.; Antani, Sameer; Long, Rodney
The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as the SPIE Digital Library and IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citation Reports (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal, including 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e., the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, 95.5% of articles could already be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research, and we suggest that additional research in this area is warranted.
Haas, Stephanie W.; Grams, Erika S.
Discusses research describing Web page and link classification systems resulting from a content analysis of over 75 Web pages. Topics include the decision-making processes of Web page authors and readers; syntactic analysis of labeled and isolated anchors; expansion and resource links; and where links lead. (Author/LRW)
Henry, Anna E.; Story, Mary
Objective: To identify food and beverage brand Web sites featuring designated children's areas, assess marketing techniques present on those industry Web sites, and determine nutritional quality of branded food items marketed to children. Design: Systematic content analysis of food and beverage brand Web sites and nutrient analysis of food and…
Santhana Vannan, S.; Cook, R. B.; Wei, Y.
In recent years, user access to data and information has increasingly been handled through tools, services, and applications, a development facilitated by standards-based services. These service-based methods of accessing data have boosted the use of data in increasingly complex ways. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has taken a service-based approach to data access and visualization for the distribution and visualization of its terrestrial ecology data, including MODIS (Moderate Resolution Imaging Spectroradiometer) remote sensing data products. The MODIS data products are highly useful for field research. The spectral, spatial, and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth system processes at multiple spatial and temporal scales. However, the MODIS data volume and the complexity of the data format make the products less usable in some cases. To solve this usability issue, the ORNL DAAC has developed a system that prepares and distributes subsets of selected MODIS land products in a scale and format useful for field researchers. Web and Web service tools provide MODIS subsets in comma-delimited text format and in GIS-compatible GeoTIFF format. Users can download and visualize MODIS subsets for a set of pre-defined locations, order MODIS subsets for any land location, or automate the process of subset extraction using a SOAP-based Web service. The MODIS tools and services can be extended to support the large volume of data that would be produced by the various decadal survey missions (http://daac.ornl.gov/MODIS). The ORNL DAAC has also created a Web-based Spatial Data Access Tool (SDAT) that enables users to browse, visualize, and download a wide variety of geospatial data in various user-selected spatial/temporal extents, formats, and projections. SDAT is based on Open Geospatial Consortium (OGC) Web service standards that allow users to
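Subsetting "any land location" implies mapping a point to the MODIS sinusoidal grid. A minimal sketch of that mapping, assuming the standard 10-degree tile layout (this is an illustrative approximation, not the DAAC's service code):

```python
import math

TILE_SIZE_DEG = 10.0  # each MODIS sinusoidal tile spans 10 degrees at the equator

def modis_tile(lat: float, lon: float) -> str:
    """Approximate the MODIS sinusoidal tile (hXXvYY) containing a point.
    In the sinusoidal projection, longitude is scaled by cos(latitude),
    so east-west tile extent shrinks toward the poles."""
    h = int((lon * math.cos(math.radians(lat)) + 180.0) // TILE_SIZE_DEG)
    v = int((90.0 - lat) // TILE_SIZE_DEG)
    return f"h{h:02d}v{v:02d}"
```

For example, the equator/prime-meridian point falls in tile h18v09, and a mid-latitude site like 45 N, 93 W falls in h11v04; a subsetting service would use the tile ID to locate the granule before clipping.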
McCann, M. P.
Using the STOQS Web Application for Access to in situ Oceanographic Data (Mike McCann, 7 August 2012). With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g., gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered in using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually time; positions of the trajectory (depth, latitude, longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, MapServer, and Python-Django running on a server, with Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface, allowing tools such as the Matlab stoqstoolbox to retrieve data
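The constrained REST retrieval described above can be sketched as a URL builder. The base URL and the Django-style lookup parameter names below are hypothetical, modeled on how a Django REST layer typically exposes such filters; the real STOQS API may name them differently.

```python
from urllib.parse import urlencode

# Hypothetical campaign endpoint; real STOQS deployments expose similar
# per-campaign JSON resources.
BASE = "https://stoqs.example.org/stoqs_campaign/measuredparameter.json"

def build_query(parameter: str, mindepth: float, maxdepth: float,
                start: str, end: str) -> str:
    """Compose a REST query constraining parameter name, depth range,
    and time window (Django-style __gte/__lte lookups assumed)."""
    params = {
        "parameter__name": parameter,
        "measurement__depth__gte": mindepth,
        "measurement__depth__lte": maxdepth,
        "measurement__instantpoint__timevalue__gt": start,
        "measurement__instantpoint__timevalue__lt": end,
    }
    return f"{BASE}?{urlencode(params)}"
```

A client such as the Matlab stoqstoolbox would issue this kind of request and parse the JSON response, letting the database, rather than the client, do the spatial/temporal selection.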
Lee, Seungyup; Yoo, Juwan; Han, Gunhee
Despite the remarkable improvement of hardware and network technology, the inevitable delay from a user's command action to a system response is still one of the most crucial factors influencing user experience (UX). Especially for a web video service, the initial delay from click action to video start significantly influences the quality of experience (QoE). The initial delay of a system can be minimized by preparing execution based on the user's predicted intention prior to the actual command action. The introduction of the sequential and concurrent flow of resources in human cognition and behavior can significantly improve the accuracy and preparation time of intention prediction. This paper introduces a threaded interaction model and applies it to user intention prediction for initial delay reduction in web video access. The proposed technique consists of a candidate selection module, a decision module, and a preparation module that prefetches and preloads the web video data before a user's click action. The candidate selection module selects candidates in the web page using a proximity calculation around the cursor. Meanwhile, the decision module computes the possibility of an actual click action based on the cursor-gaze relationship. The preparation module activates prefetching for the selected candidates when the click possibility exceeds a certain limit in the decision module. Experimental results show a 92% hit ratio, a 0.5-s initial delay on average, and a 1.5-s worst-case initial delay, which is much less than a user's tolerable limit in web video access, demonstrating significant improvement in accuracy and advance time of intention prediction by introducing the proposed threaded interaction model. PMID:26102494
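The candidate-selection step can be sketched as a proximity filter around the cursor. The radius and the ranking-by-distance choice here are illustrative assumptions, not the paper's exact parameters:

```python
import math
from dataclasses import dataclass

@dataclass
class Link:
    url: str
    x: float  # element center, in pixels
    y: float

def select_candidates(links, cursor_x, cursor_y, radius=150.0):
    """Candidate selection sketch: keep links whose centers fall within
    a circle around the cursor, ordered nearest first. A decision module
    would then weigh these candidates by cursor-gaze agreement before
    triggering prefetch."""
    with_dist = [(math.hypot(l.x - cursor_x, l.y - cursor_y), l) for l in links]
    return [l for d, l in sorted(with_dist, key=lambda t: t[0]) if d <= radius]
```

Prefetching would then be activated only for candidates whose estimated click probability exceeds a threshold, bounding wasted bandwidth.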
Triplett, Mark B.; Seiple, Timothy E.; Watson, David J.; Charboneau, Briant L.; Morse, John G.
Data volume, complexity, and access issues pose severe challenges for analysts, regulators and stakeholders attempting to efficiently use legacy data to support decision making at the U.S. Department of Energy’s (DOE) Hanford Site. DOE has partnered with the Pacific Northwest National Laboratory (PNNL) on the PHOENIX (PNNL-Hanford Online Environmental Information System) project, which seeks to address data access, transparency, and integration challenges at Hanford to provide effective decision support. PHOENIX is a family of spatially enabled web applications providing quick access to decades of valuable scientific data and insight through intuitive query, visualization, and analysis tools. PHOENIX realizes broad, public accessibility by relying only on ubiquitous web browsers, eliminating the need for specialized software. It accommodates a wide range of users with intuitive user interfaces that require little or no training to quickly obtain and visualize data. Currently, PHOENIX is actively hosting three applications focused on groundwater monitoring, groundwater clean-up performance reporting, and in-tank monitoring. PHOENIX-based applications are being used to streamline investigative and analytical processes at Hanford, saving time and money. More importantly, by integrating previously isolated datasets and developing relevant visualization and analysis tools, PHOENIX applications are enabling DOE to discover new correlations hidden in legacy data, allowing them to more effectively address complex issues at Hanford.
Yurdakul, Bünyamin; Uslu, Öner; Çakar, Esra; Yildiz, Derya G.
The aim of this study is to evaluate the professional development program on web-based content development (WBCD) designed by the Ministry of National Education (MoNE). Based on Stufflebeam's theoretical CIPP model and Guskey's levels of evaluation, the study was carried out as a case study. The study group consisted of the courses that…
Stone, Glenn Davis
Outlines opportunities for changing and enhancing the nature of scholarly (peer-reviewed) articles on the Web. Discusses three mechanisms which can improve the form/content of scholarly articles: (1) use of hypertext structuring; (2) integration of multimedia components into articles; and (3) use of differentiated pointers. (Author/AEF)
Pirolli, Peter; Wilson, Mark
An approach to the measurement of knowledge content, knowledge access, and knowledge learning is developed. First a theoretical view of cognition is described, and then a class of measurement models, based on Rasch modeling, is presented. Knowledge access and content are viewed as determining the observable actions selected by an agent to achieve…
Umansky, Ilana M.
This study examines the characteristics and determinants of English learners' (ELs') access to academic content in middle school (Grades 6-8). Following 10 years of data from a large urban school district in California, I identify two predominant characteristics of EL access to content: leveled tracking in which ELs are overrepresented in lower…
de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena
The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to solve environmental issues and questions of sustainability in agriculture and urban contexts. A prerequisite for this challenge is the development of an infrastructure that ensures access, management, and preservation of data, along with technical support for a coordinated and harmonious management of data that, in the framework of Open Data policies, encourages reuse and collaboration. The neogeography and citizens-as-sensors approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize, and use/access these "crowdsourced" data, which integrate the data sets produced in the scientific field, thus "feeding" the overall data available for analysis and research. When the scientific community embraces the dimension of collaboration and sharing, access and re-use, in order to accept the open innovation approach, it should redesign and reshape its data management processes: the challenges of technological and cultural innovation, enabled by Web 2.0 technologies, lead to a scenario where the sharing of structured and interoperable data will constitute the unavoidable building block of a new paradigm of scientific research. In this perspective, the Institute of Biometeorology, CNR, whose aim is to contribute to the sharing and development of research data, has developed the "SensorWebHub" (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international levels. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks. The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analysis, as requested
von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.
Templeton, M. E.; Gough, C. A.
Web Seismic Un*x is a browser-based user interface for the Seismic Un*x freeware developed at the Colorado School of Mines. The interface allows users to process and display seismic reflection data from any remote platform that runs a graphical Web browser. Users access data and create processing jobs on a remote server by completing form-based Web pages whose Common Gateway Interface (CGI) scripts are written in Perl. These scripts supply parameters, manage files, call Seismic Un*x routines, and return data plots. The interface was designed for undergraduate commuter students taking geophysics courses who need to (a) process seismic data and other time series as a class, using computers in campus teaching labs, and (b) complete course assignments at home. Students from an undergraduate applied geophysics course tested the Web user interface while completing laboratory assignments in which they acquired and processed common-depth-point seismic reflection data into a subsurface image. This freeware, which will be publicly available by summer 1999, was developed and tested on a Solaris 2.5 server and will be ported to other versions of Unix, including Linux.
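The CGI scripts' core job, turning form parameters into a Seismic Un*x command pipeline, can be sketched as follows (in Python rather than the interface's Perl; the specific form fields and processing stages are illustrative, not the interface's actual ones):

```python
import shlex

def su_pipeline(datafile: str, tmin: float, tmax: float,
                f1: float, f2: float, f3: float, f4: float) -> str:
    """Compose a Seismic Un*x shell pipeline from form parameters:
    window traces in time (suwind), apply a trapezoidal band-pass
    filter (sufilter), and render a PostScript image (supsimage)."""
    stages = [
        f"suwind < {shlex.quote(datafile)} tmin={tmin} tmax={tmax}",
        f"sufilter f={f1},{f2},{f3},{f4} amps=0,1,1,0",
        "supsimage title='CDP stack'",
    ]
    return " | ".join(stages)
```

The server would run the composed string with the shell and stream the resulting plot back to the browser, which is essentially what form-driven CGI wrappers around command-line tools do.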
Horzum, Mehmet Baris; Canan Gungoren, Ozlem
One of the applications applied most nowadays is web based instruction (WBI). Although there are many studies on WBI, no study which researched the relations between beliefs for WBI, WBI tools acceptance levels and web pedagogical content knowledge (WPCK) of science and technology pre-service teachers was found among these studies. The aim of this…
the 1992 Internet, browser Mosaic [then Netscape or Internet Explorer], Web servers, and static HTML Web pages. The World Wide Web Consortium... read-only Web from 1992–1994 with static Web page content that used Mosaic or Netscape browsers to access static HTML Web pages. These pages had
Arving, Cecilia; Wadensten, Barbro; Johansson, Birgitta
The purpose of the research was to describe registered nurses' (RNs; n = 53) thoughts on the blended learning format in a 'specialist nursing programme in cancer care'. The study was conducted in autumn 2007 and 2008. A content analysis of answers to open-ended questions in a web-based questionnaire and a focus group interview was carried out. The analysis revealed that the RNs appreciated blended learning. The web lectures facilitated learning and gave the RNs access to the education at any time. However, according to the RNs, knowledge is gained through interaction between RNs and teachers, and this aspect needed to be improved. The RNs also thought that the content of the seminars on campus should focus on evidence-based nursing knowledge and practical skills, rather than being taught as stable facts and procedures. The results of the present study could help to improve the design and content of advanced nursing courses using a blended learning format.
Wright, Paul J; McKinley, Christopher J
This study content analyzed a randomly selected stratified national sample of 203 four-year United States colleges' counseling center Web sites to assess the degree to which such sites feature information and reference services for lesbian, gay, bisexual, and transgender (LGBT) collegians. Results revealed that LGBT-targeted communications were infrequent. For instance, fewer than one third of counseling center Web sites described individual counseling opportunities for LGBT students, fewer than 11% mentioned group counseling opportunities, and fewer than 6% offered a university crafted pamphlet with information about LGBT issues and resources. Findings are interpreted within the context of prior LGBT student health research.
Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C
The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers.
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared with two other scenarios, answering all items (AAI) and a randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering innovative QR code access. PMID:26935793
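The item-reduction mechanism behind CAT can be sketched with the dichotomous Rasch model (a simplification of the partial credit model the study used): at each step, administer the unanswered item carrying maximum Fisher information at the current ability estimate, so fewer items reach the same precision.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a positive response under the dichotomous Rasch
    model, given person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta: float, b: float) -> float:
    """Fisher information of a Rasch item: I(theta) = P(1 - P),
    maximized when difficulty matches ability (P = 0.5)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def next_item(theta: float, difficulties: dict) -> str:
    """CAT item selection: pick the remaining item with maximum
    information at the current ability estimate."""
    return max(difficulties,
               key=lambda item: item_information(theta, difficulties[item]))
```

With ability near 0, an item of difficulty 0.1 is chosen over items at -2.0 or 2.5; iterating selection and re-estimation until a precision target is met is what shortens the questionnaire relative to answering all 70 items.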
Baker, Stewart C.
This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…
Vultur, Sidonia Otilia; Marincas, Delia Adriana
In this paper, an evaluation of web sites regarded as projects is discussed. We give an overview of the Web Assessment Index (WAI) by presenting a case study of the web sites of Romanian Faculties of Economics. The WAI contains five categories: accessibility, access speed, navigability, content, and reliability. We analyzed and presented a detailed…
Oduwole, Adebambo Adewale; Oyewumi, Olatundun
Purpose: This study aims to examine the accessibility and use of web-based electronic databases on the Health InterNetwork Access to Research Initiative (HINARI) portal by physicians in the Neuropsychiatric Hospital, Aro, a psychiatric health institution in Nigeria. Design/methodology/approach: Collection of data was through the use of a three-part…
Price, Matthew; Yuen, Erica; Davidson, Tatiana M.; Hubel, Grace; Ruggiero, Kenneth J.
Although web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent to which such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a web-based treatment for post-disaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a web-based treatment. The modular treatment addressed symptoms of PTSD, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents with parents who used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of web-based treatments to provide clinical services in a post-disaster context. PMID:25622071
Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.
The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the HTML design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss how the new public web pages implement outreach strategy through clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.
Alsaleh, Mansour; Alarifi, Abdulrahman
Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.
PMID:27855179
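The study above argues that the most effective spam-detection features vary with the page language. A minimal sketch of that idea follows; the features (keyword repetition, language-specific stopword ratio) and the per-language thresholds are illustrative assumptions, not the authors' actual feature set or classifier.

```python
# Hedged sketch: language-dependent web spam features. Feature names and
# thresholds are hypothetical, chosen only to illustrate the idea that the
# best detection cut-offs differ by page language.

def extract_features(text, language):
    """Compute simple content features whose usefulness varies by language."""
    words = text.split()
    n = max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / n
    # Repetition ratio: spam pages often repeat keywords to inflate rank.
    repetition = 1 - len(set(words)) / n
    # Tiny illustrative per-language stopword lists (English, Arabic).
    stopwords = {"en": {"the", "and", "of"}, "ar": {"في", "من", "على"}}
    stop_ratio = sum(w in stopwords.get(language, set()) for w in words) / n
    return {"avg_word_len": avg_word_len,
            "repetition": repetition,
            "stop_ratio": stop_ratio}

def is_spam(features, language):
    """Per-language thresholds: the best cut-off differs across languages."""
    threshold = {"en": 0.5, "ar": 0.4}.get(language, 0.5)
    return features["repetition"] > threshold
```

A keyword-stuffed page such as "buy buy buy cheap buy" yields a repetition ratio of 0.6 and is flagged under the English threshold, while ordinary prose is not.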
Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter
Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual
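The gradual content zoom described above can be sketched as a client that tracks an importance threshold and, on refinement, requests only the objects in the band between the old and new thresholds. The data model below (a flat list of importance-ranked objects standing in for the server) is a toy assumption, not the actual vario-scale/tGAP structures.

```python
# Hedged sketch of "request only the missing data" in a vario-scale client.
# Field layout and API are invented for illustration.

class VarioScaleClient:
    def __init__(self, server_objects):
        # server_objects: list of (importance, geometry) pairs held server-side.
        self.server = sorted(server_objects, reverse=True)
        self.cache = []                    # objects already transferred
        self.threshold = float("inf")      # nothing loaded yet

    def zoom_to(self, new_threshold):
        """Refine the map: fetch only objects whose importance falls in the
        band between the current and the requested threshold."""
        missing = [obj for obj in self.server
                   if new_threshold <= obj[0] < self.threshold]
        self.cache.extend(missing)         # incremental transfer
        self.threshold = new_threshold
        return missing                     # what actually crossed the wire
```

Zooming from a coarse view to threshold 5 and then to 2 transfers each object exactly once, shortening each individual transmission.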
Liou, C.; Hulbert, S.
We present the architecture, design, and implementation details of the ADASS XII web site. The web site was implemented in Zope, a high-performance application server, web server, and content management system rolled into one. Zope includes a robust, scalable object database, web services architecture, and powerful programming capabilities. The web site was built to conform to HTML, CSS, and accessibility standards as adopted by the W3C. This dynamic web site also taps into a back-end Sybase database while requiring a minimal amount of coding. We offer this site as a prototype web site suitable for reuse in supporting future ADASS meetings.
Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.
Falcão-Reis, Filipa; Costa-Pereira, Altamiro; Correia, Manuel E
Electronic Health Record (EHR) systems are becoming more and more sophisticated and nowadays include numerous applications, which are accessed not only by medical professionals, but also by accounting and administrative personnel. This could represent a problem concerning basic rights such as privacy and confidentiality. The principles, guidelines and recommendations compiled by the OECD for the protection of privacy and trans-border flows of personal data are described and considered within health information system development. Granting access to an EHR should be dependent upon the owner of the record, the patient, who must be entitled to define who is allowed to access his EHRs, beyond the access control scheme each health organization may have implemented. In this way, it is not only up to health professionals to decide who has access to what, but also the patient himself. Implementing such a policy is a step towards patient empowerment, which society should encourage and governments should promote. The paper then introduces a technical solution based on web security standards. This would give patients the ability to monitor and control which entities have access to their personal EHRs, thus empowering them with the knowledge of how much of their medical history is known and by whom. It is necessary to create standard data access protocols, mechanisms and policies to protect privacy rights and, furthermore, to enable patients to automatically track the movement (flow) of their personal data and information in the context of health information systems. This solution must be functional and, above all, user-friendly, and the interface should take usability heuristics into consideration in order to provide the user with the best tools. The current official standards on confidentiality and privacy in health care, currently being developed within the EU, are explained, in order to achieve a consensual idea of the guidelines that all member states should follow to transfer
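The two-layer access policy sketched above, the organization's access-control scheme plus an explicit grant from the patient, can be illustrated as follows. The role names, record fields, and ACL shape are hypothetical, not the paper's implementation.

```python
# Hedged sketch: access to an EHR requires BOTH the organisation's scheme
# and the patient's own grant to agree. Data shapes are invented.

def may_access(user, record, org_acl, patient_grants):
    """Both layers must agree before the record is opened."""
    org_ok = user["role"] in org_acl.get(record["type"], set())
    patient_ok = user["id"] in patient_grants.get(record["patient"], set())
    return org_ok and patient_ok

org_acl = {"lab_result": {"physician", "nurse"}}   # organisation's scheme
patient_grants = {"pat1": {"dr_a"}}                # patient's explicit grants
physician = {"id": "dr_a", "role": "physician"}
clerk = {"id": "adm1", "role": "accounting"}
record = {"type": "lab_result", "patient": "pat1"}
```

Here the physician passes both checks, while administrative staff are stopped by the organisational layer even before the patient's grants are consulted.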
Background The Internet is an optimal setting to provide massive access to tobacco treatments. To evaluate open-access Web-based smoking cessation programs in a real-world setting, adherence and retention data should be taken into account as much as abstinence rate. Objective The objective was to analyze the usage and effectiveness of a fully automated, open-access, Web-based smoking cessation program by comparing interactive versus noninteractive versions. Methods Participants were randomly assigned either to the interactive or noninteractive version of the program, both with identical content divided into 4 interdependent modules. At baseline, we collected demographic, psychological, and smoking characteristics of the smokers self-enrolled in the Web-based program of Universidad Nacional de Educación a Distancia (National Distance Education University; UNED) in Madrid, Spain. The following questionnaires were administered: the anxiety and depression subscales from the Symptom Checklist-90-Revised, the 4-item Perceived Stress Scale, and the Heaviness of Smoking Index. At 3 months, we analyzed dropout rates, module completion, user satisfaction, follow-up response rate, and self-assessed smoking abstinence. Results A total of 23,213 smokers were registered, 50.06% (11,620/23,213) women and 49.94% (11,593/23,213) men, with a mean age of 39.5 years (SD 10.3). Of these, 46.10% (10,701/23,213) were married and 34.43% (7992/23,213) were single, 46.03% (10,686/23,213) had university education, and 78.73% (18,275/23,213) were employed. Participants smoked an average of 19.4 cigarettes per day (SD 10.3). Of the 11,861 smokers randomly assigned to the interactive version, 2720 (22.93%) completed the first module, 1052 (8.87%) the second, 624 (5.26%) the third, and 355 (2.99%) the fourth. Completion data was not available for the noninteractive version (no way to record it automatically). The 3-month follow-up questionnaire was completed by 1085 of 23,213 enrolled smokers
Farmer, Lesley S. J.
Discusses how librarians can help parents become more knowledgeable about the Internet so they can guide their children in Internet use and become technologically independent. Recommends that school libraries develop Web pages that parents can access and discusses Web page design, content for children, and content for parents. (LRW)
Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane
Time series information is widely used in environmental change analyses and is also essential information for stakeholders and governmental agencies. However, a challenging issue is the processing of raw data and the execution of time series analysis. In most cases, data has to be found, downloaded, processed and even converted into the correct data format prior to executing time series analysis tools. Data has to be prepared for use in different existing software packages. Several packages, such as TIMESAT (Jönnson & Eklundh, 2004) for phenological studies, BFAST (Verbesselt et al., 2010) for breakpoint detection, and GreenBrown (Forkel et al., 2013) for trend calculations, are provided as open-source software and can be executed from the command line. This is needed if data pre-processing and time series analysis are to be automated. To bring both parts, automated data access and data analysis, together, a web-based system was developed to provide access to satellite-based time series data and to the above-mentioned analysis tools. Users of the web portal are able to specify a point or a polygon and an available dataset (e.g., Vegetation Indices and Land Surface Temperature datasets from NASA MODIS). The data is then processed and provided as a time series CSV file. Afterwards the user can select an analysis tool that is executed on the server. The final data (CSV, plot images, GeoTIFFs) is visualized in the web portal and can be downloaded for further usage. As a first use case, we built up a complementary web-based system with NASA MODIS products for Germany and parts of Siberia based on the Earth Observation Monitor (www.earth-observation-monitor.net). The aim of this work is to make time series analysis with existing tools as easy as possible, so that users can focus on the interpretation of the results. References: Jönnson, P. and L. Eklundh (2004). TIMESAT - a program for analysing time-series of satellite sensor data. Computers and Geosciences 30
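The portal's pipeline (extract a time series for a user-selected point, deliver it as CSV, run an analysis tool server-side) can be sketched as below. The in-memory dataset and the least-squares trend routine are toy stand-ins for the MODIS products and tools such as GreenBrown.

```python
# Hedged sketch of the portal's pipeline: point -> time series -> CSV ->
# server-side analysis. Dataset and trend routine are illustrative stand-ins.

def extract_time_series(dataset, point):
    """Pretend data access: look up the value stack for one coordinate."""
    return dataset[point]                  # list of (date, value) pairs

def to_csv(series):
    lines = ["date,value"] + [f"{d},{v}" for d, v in series]
    return "\n".join(lines)

def linear_trend(series):
    """Least-squares slope over time steps 0..n-1 (a stand-in for the
    trend tools the portal wraps)."""
    ys = [v for _, v in series]
    n = len(ys)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

dataset = {(52.5, 13.4): [("2001", 0.30), ("2002", 0.35), ("2003", 0.40)]}
series = extract_time_series(dataset, (52.5, 13.4))
csv_text = to_csv(series)
slope = linear_trend(series)   # 0.05 index units per year in this toy data
```

The user only ever sees the CSV and the analysis result; finding, converting and preparing the raw data happens behind the same interface.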
Mascarini, C; Ratib, O; Trayser, G; Ligier, Y; Appel, R D
The development of a hospital-wide PACS is in progress at the University Hospital of Geneva, and several archive modules have been operational since 1992. This PACS is intended for wide distribution of images to clinical wards. As the PACS project and the number of archived images grew rapidly in the hospital, it was necessary to provide easy, more widely accessible and convenient access to the PACS database for the clinicians in the different wards and clinical units of the hospital. An innovative solution has been developed using tools such as Netscape Navigator and the NCSA World Wide Web server as an alternative to conventional database query and retrieval software. These tools present the advantages of providing a user interface which is the same regardless of the platform being used (e.g. Mac, Windows, UNIX), and an easy integration of different types of documents (e.g. text, images). A strict access control has been added to this interface. It allows user identification and access rights checking, as defined by the in-house hospital information system, before allowing navigation through patient data records.
James, Nathan L.; Williams, David R.
The National Space Science Data Center (NSSDC) was established by NASA to provide for the preservation and dissemination of scientific data from NASA missions. This paper describes the policies specifically related to lunar science data. NSSDC presently archives 660 lunar data collections. Most of these data (423 units) are stored offline in analog format. The remainder of this collection consists of magnetic tapes and discs containing approximately 1.7 TB of digital lunar data. The active archive for NASA lunar data is the Planetary Data System (PDS). NSSDC has an agreement with the PDS Lunar Data Node to assist in the restoration and preparation of NSSDC-resident lunar data upon request for access and distribution via the PDS archival system. Though much of NSSDC's digital store also resides in PDS, NSSDC has many analog data collections and some digital lunar data sets that are not in PDS. NSSDC stands ready to make these archived lunar data accessible to both the research community and the general public upon request as resources allow. Newly requested offline lunar data are digitized and moved to near-line storage devices called digital linear tape jukeboxes. The data are then packaged and made network-accessible via FTP for the convenience of a growing segment of the user community. This publication will 1) discuss the NSSDC processes and policies that govern how NASA lunar data is preserved, restored, and made accessible via the web and 2) highlight examples of special lunar data requests.
Badidi, E; Lang, B F; Burger, G
FLOSYS is an interactive web-accessible bioinformatics workflow system designed to assist biologists in multi-step data analyses. FLOSYS allows the user to create complex analysis pathways (protocols) graphically, similar to drawing a flowchart: icons representing particular bioinformatics tools are dragged and dropped onto a canvas, and lines connecting those icons are drawn to specify the relationships between the tools. In addition, FLOSYS permits the user to select input data, execute the protocol and store the results in a personal workspace. The three-tier architecture of FLOSYS has been implemented in Java and uses a relational database system together with new technologies for distributed and web computing such as CORBA, RMI, JSP and JDBC. The prototype of FLOSYS, which is part of the bioinformatics workbench AnaBench, is accessible on-line at http://malawimonas.bcm.umontreal.ca:8091/anabench. The entire package is available on request to academic groups who wish to have a customized local analysis environment for research or teaching.
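A FLOSYS-style protocol is essentially a directed graph of tools whose edges route one tool's output into the next. The sketch below illustrates that structure; FLOSYS itself is implemented in Java with CORBA/RMI, so the names and mechanics here are purely illustrative.

```python
# Hedged sketch of a flowchart-style "protocol": nodes are tools, edges
# feed one tool's output to the next. Tool names are invented.
from collections import defaultdict

class Protocol:
    def __init__(self):
        self.tools = {}                 # name -> callable
        self.edges = defaultdict(list)  # name -> downstream tool names

    def add_tool(self, name, func):
        self.tools[name] = func

    def connect(self, src, dst):
        self.edges[src].append(dst)

    def run(self, start, data):
        """Execute the pipeline from `start`, threading each tool's output
        into its successors; collect results at the leaves."""
        out = self.tools[start](data)
        if not self.edges[start]:
            return [out]
        results = []
        for nxt in self.edges[start]:
            results.extend(self.run(nxt, out))
        return results

p = Protocol()
p.add_tool("clean", str.strip)      # stand-in for a sequence-cleaning tool
p.add_tool("upper", str.upper)      # stand-in for a formatting tool
p.add_tool("length", len)           # stand-in for a statistics tool
p.connect("clean", "upper")
p.connect("clean", "length")
results = p.run("clean", "  acgt  ")
```

One upstream tool can fan out to several downstream tools, exactly as two lines drawn from one icon on the canvas would.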
Shahbazi, Moloud; Wiley, Matthew T; Hristidis, Vagelis
Background An increasing number of patients from diverse demographic groups share and search for health-related information on Web-based social media. However, little is known about the content of the posted information with respect to the users’ demographics. Objective The aims of this study were to analyze the content of Web-based health-related social media based on users’ demographics to identify which health topics are discussed in which social media by which demographic groups and to help guide educational and research activities. Methods We analyze 3 different types of health-related social media: (1) general Web-based social networks Twitter and Google+; (2) drug review websites; and (3) health Web forums, with a total of about 6 million users and 20 million posts. We analyzed the content of these posts based on the demographic group of their authors, in terms of sentiment and emotion, top distinctive terms, and top medical concepts. Results The results of this study are: (1) Pregnancy is the dominant topic for female users in drug review websites and health Web forums, whereas for male users, it is cardiac problems, HIV, and back pain, but this is not the case for Twitter; (2) younger users (0-17 years) mainly talk about attention-deficit hyperactivity disorder (ADHD) and depression-related drugs, users aged 35-44 years discuss about multiple sclerosis (MS) drugs, and middle-aged users (45-64 years) talk about alcohol and smoking; (3) users from the Northeast United States talk about physical disorders, whereas users from the West United States talk about mental disorders and addictive behaviors; (4) Users with higher writing level express less anger in their posts. Conclusion We studied the popular topics and the sentiment based on users' demographics in Web-based health-related social media. Our results provide valuable information, which can help create targeted and effective educational campaigns and guide experts to reach the right users on Web
Suzuki, Masakazu; Terada, Yugo; Kanahori, Toshihiro; Yamaguchi, Katsuhito
New features in our math-OCR software to convert PDF math contents into accessible e-books are shown. The method for recognizing PDFs has been thoroughly improved. In addition, content in any selected area of a PDF file, including math formulas, can be cut and pasted into a document in various accessible formats; in this process it is automatically recognized and converted into text and accessible math formulas. Combining it with our authoring tool for technical documents, one can easily produce accessible e-books in various formats such as DAISY, accessible EPUB3, DAISY-like HTML5, Microsoft Word with math objects and so on. Those contents are useful for various print-disabled students, ranging from the blind to the dyslexic.
Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.
Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.
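The key idea above, applying quality filtering and sub-setting at the archive so only reduced content crosses the network, can be sketched as a server-side filter. The pixel records and field names below are invented for illustration.

```python
# Hedged sketch of server-side sub-setting: the archive applies the spatial
# subset and quality filter, so only the reduced content is transferred.

def subset(archive, bbox, min_quality):
    """Return only pixels inside `bbox` meeting the quality threshold."""
    xmin, ymin, xmax, ymax = bbox
    return [px for px in archive
            if xmin <= px["x"] <= xmax and ymin <= px["y"] <= ymax
            and px["quality"] >= min_quality]

archive = [
    {"x": 1, "y": 1, "quality": 3, "value": 0.2},
    {"x": 5, "y": 5, "quality": 1, "value": 0.9},   # outside the bbox
    {"x": 2, "y": 2, "quality": 2, "value": 0.4},
]
result = subset(archive, (0, 0, 3, 3), min_quality=2)
```

The data consumer receives two records instead of the whole archive, which is the turnaround-time win the middleware services aim for.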
Xu, Songhua; Yoon, Hong-Jun; Tourassi, Georgia
Motivation: Life stories of diseased and healthy individuals are abundantly available on the Internet. Collecting and mining such online content can offer many valuable insights into patients’ physical and emotional states throughout the pre-diagnosis, diagnosis, treatment and post-treatment stages of the disease compared with those of healthy subjects. However, such content is widely dispersed across the web. Using traditional query-based search engines to manually collect relevant materials is rather labor intensive and often incomplete due to resource constraints in terms of human query composition and result parsing efforts. The alternative option, blindly crawling the whole web, has proven inefficient and unaffordable for e-health researchers. Results: We propose a user-oriented web crawler that adaptively acquires user-desired content on the Internet to meet the specific online data source acquisition needs of e-health researchers. Experimental results on two cancer-related case studies show that the new crawler can substantially accelerate the acquisition of highly relevant online content compared with the existing state-of-the-art adaptive web crawling technology. For the breast cancer case study using the full training set, the new method achieves a cumulative precision between 74.7 and 79.4% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 32.8 and 37.0% using the peer method for the same time period. For the lung cancer case study using the full training set, the new method achieves a cumulative precision between 56.7 and 61.2% after 5 h of execution till the end of the 20-h long crawling session as compared with the cumulative precision between 29.3 and 32.4% using the peer method. Using the reduced training set in the breast cancer case study, the cumulative precision of our method is between 44.6 and 54.9%, whereas the cumulative precision of the peer method is between 24.3 and
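Cumulative precision, the metric reported above, is simply the fraction of relevant pages among all pages fetched so far. A sketch with a fabricated crawl log (the paper's crawler and corpora are not reproduced here):

```python
# Hedged sketch: computing a cumulative precision curve from a crawl log
# of (url, is_relevant) pairs. The log below is fabricated for illustration.

def cumulative_precision(crawl_log):
    """Report, after each fetch, the fraction of relevant pages so far."""
    relevant = 0
    curve = []
    for i, (_url, is_relevant) in enumerate(crawl_log, start=1):
        relevant += is_relevant
        curve.append(relevant / i)
    return curve

log = [("a", True), ("b", True), ("c", False), ("d", True)]
curve = cumulative_precision(log)   # ends at 0.75: 3 of 4 pages relevant
```

Reporting the curve over a long crawling session, rather than a single endpoint, is what allows the comparison "74.7-79.4% vs 32.8-37.0% over the same time period".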
"Just-in-time" Job Aids. Job Aids help warfighters learn and recall how to perform their mission. DRC developed tools which leverage Semantic Web technology to: reduce the development costs of authoring Job Aids by dynamically composing Job Aids from procedural knowledge bases on the fly; and enable "just-in-time" training in real time, with content used for warfighter training while in the field. Dynamic Job Aid System (DJAS) Demonstration.
Smith, K. P.; Richmond, P.; LePoire, D. J.; Arnish, J. J.; Johnson, R.
Argonne National Laboratory has developed an Internet web site providing access to critical information needed to support decisions on the management and disposal of wastes containing naturally occurring radioactive material (NORM). The NORM Technology Connection web site provides current information on (1) service companies that provide support on NORM issues (e.g., site characterization and remediation, sample analysis, radiation safety training, disposal) and (2) existing applicable NORM regulations and guidelines. A third element of the site is an electronic mail list that allows users to post or respond to questions about the management of NORM. Development of the NORM Technology Connection web site was funded by the U.S. Department of Energy, Office of Fossil Energy. It is hosted and maintained by the Interstate Oil and Gas Compact Commission. The web site is publicly available; access is free, as is participation by any of the service companies.
Huprich, Julia; Green, Ravonne
The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
Askenazi, Manor; Webber, James T; Marto, Jarrod A
Continued progress toward systematic generation of large-scale and comprehensive proteomics data in the context of biomedical research will create project-level data sets of unprecedented size and ultimately overwhelm current practices for results validation that are based on distribution of native or surrogate mass spectrometry files. Moreover, the majority of proteomics studies leverage discovery-mode MS/MS analyses, rendering associated data-reduction efforts incomplete at best, and essentially ensuring future demand for re-analysis of data as new biological and technical information become available. Based on these observations, we propose to move beyond the sharing of interpreted spectra, or even the distribution of data at the individual file or project level, to a system much like that used in high-energy physics and astronomy, whereby raw data are made programmatically accessible at the site of acquisition. Toward this end we have developed a web-based server (mzServer), which exposes our common API (mzAPI) through very intuitive (RESTful) uniform resource locators (URL) and provides remote data access and analysis capabilities to the research community. Our prototype mzServer provides a model for lab-based and community-wide data access and analysis.
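A RESTful scheme like the one described maps runs, scans, and ranged extractions onto intuitive URLs. The composer below is a guess at what such URLs might look like; the real mzServer/mzAPI endpoints may differ, so treat this purely as an illustration of the pattern.

```python
# Hedged sketch: composing intuitive (RESTful) resource URLs for remote
# mass-spectrometry data access. The URL scheme is invented, not mzServer's.

def mz_url(base, run, scan=None, mz_range=None):
    """Compose /runs/<run>, /runs/<run>/scans/<n>, or a ranged extraction."""
    url = f"{base}/runs/{run}"
    if scan is not None:
        url += f"/scans/{scan}"
    if mz_range is not None:
        lo, hi = mz_range
        url += f"?mz={lo}-{hi}"
    return url
```

A client anywhere on the network can then fetch exactly one scan, or one m/z window, without ever downloading the native instrument file, which is the "programmatic access at the site of acquisition" idea.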
their understanding of VoI attributes (source reliability, information content, and latency). The VoI web application emulates many features of a... built to allow the tool to be accessed from the Internet via a web browser. The following sections describe the web application's user interface... based upon 3 attributes: source reliability, information content, and latency. The cards are divided into 4 decks: training, tactical, strategic, and
Das, Sudeshna; Girard, Lisa; Green, Tom; Weitzman, Louis; Lewis-Bowen, Alister; Clark, Tim
Web-based biomedical communities are becoming an increasingly popular vehicle for sharing information amongst researchers and are fast gaining an online presence. However, information organization and exchange in such communities is usually unstructured, rendering interoperability between communities difficult. Furthermore, specialized software to create such communities at low cost-targeted at the specific common information requirements of biomedical researchers-has been largely lacking. At the same time, a growing number of biological knowledge bases and biomedical resources are being structured for the Semantic Web. Several groups are creating reference ontologies for the biomedical domain, actively publishing controlled vocabularies and making data available in Resource Description Framework (RDF) language. We have developed the Science Collaboration Framework (SCF) as a reusable platform for advanced structured online collaboration in biomedical research that leverages these ontologies and RDF resources. SCF supports structured 'Web 2.0' style community discourse amongst researchers, makes heterogeneous data resources available to the collaborating scientist, captures the semantics of the relationship among the resources and structures discourse around the resources. The first instance of the SCF framework is being used to create an open-access online community for stem cell research-StemBook (http://www.stembook.org). We believe that such a framework is required to achieve optimal productivity and leveraging of resources in interdisciplinary scientific research. We expect it to be particularly beneficial in highly interdisciplinary areas, such as neurodegenerative disease and neurorepair research, as well as having broad utility across the natural sciences.
Demir, Yasemin; Gozum, Sebahat
This study was designed to evaluate the quality, content, usability, and efficacy of a Web site prepared for the purpose of improving the caregiving capability of family members who provide care for stroke survivors at home. The DISCERN score for the Web site was found to be 4.35 out of 5. The score for the first section, which assesses the reliability of the Web site, was 4.38 out of 5; the mean score of the second section, which measures the quality of the provided information on treatment/care options, was 4.30; and the mean score of the third section, which gives a general evaluation of the material, was 4.1. The Web site content achieved an average score of 3.47 out of 4 after evaluation by experts. The Web site system usability score was found to be 79.4 out of 100. The Web site was utilized mostly for exercises in bed (76.3%; n = 29), use of medications, and patient safety (68.4%; n = 26). Caregivers who were younger and employed and had no previous experience of nursing a patient made relatively greater use of the patient nutrition and oral care section, while married family caregivers made greater use of the body hygiene section. The Web site quality and content were judged to be good and reliable to use, and the Web site was used efficiently by caregivers.
Schmidt, L. J.; Smith, P. H.; Lombardi, D.
The Phoenix Mars Lander, scheduled to launch in August 2007, is the first mission in NASA's Scout Program. Phoenix has been specifically designed to measure volatiles (especially water) in the northern arctic plains of Mars, where the Mars Odyssey detected evidence of ice-rich soil near the surface. A fundamental part of the mission's goal-driven education and public outreach program is the Phoenix Mars Lander 2007 web site. Content for the site was designed not only to further the casual user's understanding of the Phoenix mission and its objectives, but also to meet the needs of the more science-attentive user who desires in-depth information. To this end, the web site's "Mars 101" module includes five distinct themes, all of which are directly connected to the mission's purpose: Mars Intro includes basic facts about Mars and how the planet differs from Earth; Polar Regions discusses the history of polar exploration on Earth and the similarities between these regions on Mars and Earth; Climate covers the effects that Earth's polar regions have on climate and how these same effects may occur on Mars; Water on Mars introduces the reader to the idea of liquid water and water ice on Mars; and Biology includes a discussion of the requirements of life and life in the universe to facilitate reader understanding of what Phoenix might find. Each of the five themes is described in simple language accompanied by relevant images and graphics, with hypertext links connecting the science-attentive user to more in-depth content. By presenting the "Mars 101" content in a manner that relates each subheading to a specific component of the mission's purpose, the Phoenix web site nurtures understanding of the mission and its relevance to NASA's Mars Exploration goals by the general lay public as well as the science-attentive user.
Eggel, Ivan; Müller, Henning
Over the past few years an increasing number of scientific journals have been created in an open access format. Particularly in the medical field, the number of openly accessible journals is enormous, making a wide body of knowledge available for analysis and retrieval. Part of the trend towards open access publications can be linked to funding bodies such as the NIH (National Institutes of Health) and the Swiss National Science Foundation (SNF) requiring funded projects to make all articles of funded research publicly available. This article describes an approach to make part of the knowledge in open access journals available for retrieval, covering both the textual information and the images contained in the articles. To this end, all articles of 24 journals related to medical informatics and medical imaging were crawled from the web pages of BioMed Central. Text and images of the PDF (Portable Document Format) files were indexed separately, and a web-based retrieval interface allows searching via keyword queries or visual similarity queries. The starting point for a visual similarity query can be an image uploaded from the local hard disk or any image found via the textual search. Searching for similar documents is also possible.
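The visual-similarity side of such a retrieval interface can be sketched as a nearest-neighbour search over image feature vectors. A minimal illustration, using placeholder 4-dimensional feature vectors and invented image ids rather than the system's actual descriptors:

```python
import math

# Minimal sketch of visual-similarity retrieval: rank indexed images by
# cosine similarity of feature vectors. The "features" below are
# placeholders; a real system would extract colour/texture descriptors
# from the images found in the crawled articles.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(index, query, k=2):
    """Return the k image ids most similar to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(item[1], query), reverse=True)
    return [image_id for image_id, _ in ranked[:k]]

index = [
    ("xray-001", [0.9, 0.1, 0.0, 0.2]),
    ("mri-014",  [0.1, 0.8, 0.6, 0.0]),
    ("ct-023",   [0.8, 0.2, 0.1, 0.3]),
]
hits = search(index, [0.85, 0.15, 0.05, 0.25])
```

The query vector can come either from an uploaded image or from an image already found via the textual search, matching the two entry points the article describes.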
Foster, Gary; Navarro-Ruan, Tamara; McEneny-King, Alanna; Edginton, Andrea N; Thabane, Lehana
Background Individual pharmacokinetic assessment is a critical component of tailored prophylaxis for hemophilia patients. Population pharmacokinetics allows the use of individual sparse data, thus simplifying individual pharmacokinetic studies. Implementing population pharmacokinetics capacity for the hemophilia community is beyond individual reach and requires a system effort. Objective The Web-Accessible Population Pharmacokinetic Service—Hemophilia (WAPPS-Hemo) project aims to assemble a database of patient pharmacokinetic data for all existing factor concentrates, develop and validate population pharmacokinetics models, and integrate these models within a Web-based calculator for individualized pharmacokinetic estimation in patients at participating treatment centers. Methods Individual pharmacokinetic studies on factor VIII and IX concentrates will be sourced from pharmaceutical companies and independent investigators. All factor concentrate manufacturers, hemophilia treatment centers (HTCs), and independent investigators (identified via a systematic review of the literature) having on file pharmacokinetic data and willing to contribute full or sparse pharmacokinetic data will be eligible for participation. Multicompartmental modeling will be performed using a mixed-model approach for derivation and Bayesian forecasting for estimation from individual sparse data. NONMEM (ICON Development Solutions) will be used as modeling software. Results The WAPPS-Hemo research network has been launched and is currently joined by 30 HTCs from across the world. We have gathered dense individual pharmacokinetic data on 878 subjects, including several replicates, on 21 different molecules from 17 different sources. We have collected sparse individual pharmacokinetic data on 289 subjects from the participating centers through the testing phase of the WAPPS-Hemo Web interface. We have developed prototypal population pharmacokinetics models for 11 molecules. The WAPPS-Hemo website
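The Bayesian forecasting step can be illustrated with a toy one-compartment model. The priors, the grid search, and the single observed factor level below are invented for this sketch; the actual WAPPS-Hemo models are multicompartmental NONMEM fits:

```python
import math

# Toy illustration of Bayesian forecasting from sparse data with a
# one-compartment model C(t) = (dose / V) * exp(-(CL / V) * t).
# All numbers (priors, variances, the single observation) are invented.

def concentration(dose, V, CL, t):
    return (dose / V) * math.exp(-(CL / V) * t)

def map_estimate(dose, observations, V_prior=40.0, CL_prior=3.0,
                 omega=0.3, sigma=0.1):
    """Grid-search MAP estimate of (V, CL) from sparse (t, conc) points,
    with lognormal priors centred on the population values."""
    best, best_obj = (V_prior, CL_prior), float("inf")
    for i in range(-20, 21):
        for j in range(-20, 21):
            V = V_prior * math.exp(i * 0.05)
            CL = CL_prior * math.exp(j * 0.05)
            # penalised least squares: data misfit + deviation from priors
            obj = sum((c - concentration(dose, V, CL, t)) ** 2 / sigma ** 2
                      for t, c in observations)
            obj += (math.log(V / V_prior) ** 2
                    + math.log(CL / CL_prior) ** 2) / omega ** 2
            if obj < best_obj:
                best, best_obj = (V, CL), obj
    return best

# A single trough level (24 h post-dose) is enough to individualize the
# parameters, which is the point of the sparse-data approach.
V, CL = map_estimate(dose=2000, observations=[(24.0, 18.0)])
```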
Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.
Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan
Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: “HIV/AIDS”, “symptoms”, and “treatment”, and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and the DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score of the LIDA tool was 63.7%; the mean scores for accessibility, usability, and reliability were 82.2%, 71.5%, and 27.3%, respectively. For the top 15 sites according to the LIDA score, the mean DISCERN score was 43.1 (95% confidence interval (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites, whereas commercial websites were more likely than noncommercial websites to be found in the first 20 links obtained from each search engine. Conclusions: In general, HIV/AIDS-related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475
Hong, Dan; Shen, Vincent Y.
The rising popularity of various social networking websites has created serious Internet privacy problems. Although it is easy to post photos, comments, opinions on some events, etc. on the Web, some of these data (such as a person’s location at a particular time, criticisms of a politician, etc.) are private and should not be accessed by unauthorized users. Although social networks facilitate sharing, the fear of sending sensitive data to a third party without the knowledge or permission of the data owners discourages people from taking full advantage of some social networking applications. We exploit the existing relationships on social networks and build a “trust network” with transitive relationships to allow controlled data sharing so that the privacy and preferences of data owners are respected. The trust network linking private data owners, private data requesters, and intermediary users is a directed weighted graph. The permission value for each private data requester can be automatically assigned in this network based on the transitive relationships. Experiments were conducted to confirm the feasibility of constructing the trust network from existing social networks, and to assess the validity of permission value assignments in the query process. Since the data owners only need to define the access rights of their closest contacts once, this privacy scheme can make private data sharing easily manageable by social network participants.
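The transitive permission assignment can be sketched as a path search over the weighted trust graph. A minimal illustration, assuming (hypothetically, since the paper's exact assignment rule is not given here) that trust decays multiplicatively along the strongest directed path from owner to requester:

```python
# Sketch of transitive permission assignment on a trust network.
# Assumption: permission = highest product of edge weights (each in
# (0, 1]) over any directed path owner -> requester; 0.0 if unreachable.

def permission(trust, owner, requester):
    """Best-first relaxation over the directed weighted trust graph."""
    best = {owner: 1.0}
    frontier = [owner]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in trust.get(node, {}).items():
            score = best[node] * weight
            if score > best.get(neighbor, 0.0):
                best[neighbor] = score
                frontier.append(neighbor)
    return best.get(requester, 0.0)

# Owner trusts Alice directly (0.9); Alice trusts Bob (0.8), so Bob
# inherits a transitive permission of 0.72 without the owner acting.
trust = {"owner": {"alice": 0.9}, "alice": {"bob": 0.8}}
```

Because each requester's value is derived automatically, the owner only ever defines weights for direct contacts, which is the manageability property the abstract emphasizes.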
..., understandable, and robust. Each guideline has testable success criteria defined at three levels (A, AA, and AAA... and Level AA success criteria. Level AAA conformance provides a very high level of accessibility and means that the Web pages satisfy all the Level A, Level AA, and Level AAA success criteria. Level...
This paper explores how a small group of associate teachers (i.e., the classroom teachers who host, supervise, and mentor teacher candidates during practicum placements) accessed and interacted with the Associate Teacher Learning Tool (ATLT), a web-based learning tool created specifically for this new group of users. The ATLT is grounded in…
Semple, Colin AM
The volume of human genome sequence and the variety of web-based tools to access it continue to grow at an impressive rate, but a working knowledge of certain key resources can be sufficient to get the most from your genome. This article provides an update to Genome Biology 2000, 1(4):reviews2001.1-2001.5. PMID:11423014
The central premise of this research is that blind and visually impaired (BVI) people cannot use the Internet effectively due to accessibility and usability problems. Use of the Internet is indispensable in today's education system that relies on Web-enhanced instruction (WEI). Therefore, BVI students cannot participate effectively in WEI. Extant…
Carter, Sunshine; Traill, Stacie
Electronic resource access troubleshooting is familiar work in most libraries. The added complexity introduced when a library implements a web-scale discovery service, however, creates a strong need for well-organized, rigorous training to enable troubleshooting staff to provide the best service possible. This article outlines strategies, tools,…
Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.
Hu, Kai; Gui, Zhipeng; Cheng, Xiaoqiang; Qi, Kunlun; Zheng, Jie; You, Lan; Wu, Huayi
Many discovery methods for geographic information services have been proposed. There are approaches for finding and matching geographic information services, methods for constructing geographic information service classification schemes, and automatic geographic information discovery. Overall, the efficiency of geographic information discovery keeps improving. There are, however, still two problems in Web Map Service (WMS) discovery that must be solved. Mismatches between the graphic contents of a WMS and the semantic descriptions in the metadata make discovery difficult for human users. End-users and computers comprehend WMSs differently, creating semantic gaps in human-computer interactions. To address these problems, we propose an improved query process for WMSs based on the graphic contents of WMS layers, combining a Support Vector Machine (SVM) with user relevance feedback. Our experiments demonstrate that the proposed method can improve the accuracy and efficiency of WMS discovery.
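The relevance-feedback loop can be sketched as follows. As a stdlib-only stand-in for the paper's SVM component, this uses a classic Rocchio-style query update over layer feature vectors, with illustrative weights:

```python
# Relevance-feedback sketch for content-based WMS layer search.
# Stand-in for the SVM: a Rocchio-style update that moves the query
# vector toward layers the user marked relevant and away from those
# marked irrelevant. Weights alpha/beta/gamma are conventional defaults.

def update_query(query, relevant, irrelevant,
                 alpha=1.0, beta=0.75, gamma=0.25):
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(query)
        return [sum(v[i] for v in vectors) / len(vectors)
                for i in range(len(query))]
    rel, irr = centroid(relevant), centroid(irrelevant)
    return [alpha * q + beta * r - gamma * s
            for q, r, s in zip(query, rel, irr)]

# One round of feedback: the updated query leans toward the layer the
# user judged relevant, narrowing the semantic gap iteratively.
q0 = [0.5, 0.5]
q1 = update_query(q0, relevant=[[1.0, 0.0]], irrelevant=[[0.0, 1.0]])
```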
Constantinescu, Dana; Stefansson, Gunnar
This paper describes some of the principles for building a freely available web-based university with open content. The "tutor-web" is an international project for web-assisted education, including such free and open access. This project was initiated by the University of Iceland in partnership with many universities around the world,…
Harris, Trevor M.; Rouse, L. Jesse; Bergeron, Susan J.
Recent innovations in the Geospatial Web represent a paradigm shift in Web mapping by enabling educators to explore geography in the classroom by dynamically using a rapidly growing suite of impressive online geospatial tools. Coupled with access to spatial data repositories and User-Generated Content, the Geospatial Web provides a powerful…
Gabruk, Michał; Habina, Iwona; Kruk, Jerzy; Dłużewska, Jolanta; Szymańska, Renata
In this study, 25 accessions of Arabidopsis thaliana originating from a variety of climate conditions were grown under controlled circumstances of different light intensity and temperature. The accessions were analyzed for prenyllipids content and composition, as well as expression of the genes involved in tocochromanol biosynthesis (vte1-5). It was found that the applied conditions did not strongly affect total tocochromanols content and there was no apparent correlation of the tocochromanol content with the origin of the accessions. However, the presented results indicate that the temperature, more than the light intensity, affects the expression of the vte1-5 genes and the content of some prenyllipids. An interesting observation was that under low growth temperature, the hydroxy-plastochromanol (PC-OH) to plastochromanol (PC) ratio was considerably increased regardless of the light intensity in most of the accessions. PC-OH is known to be formed as a result of singlet oxygen stress, therefore this observation indicates that the singlet oxygen production is enhanced under low temperature. Unexpectedly, the highest increase in the PC-OH/PC ratio was found for accessions originating from cold climate (Shigu, Krazo-1 and Lov-5), even though such plants could be expected to be more resistant to low temperature stress.
Irwin, Jeannie Y.; Thyvalikakath, Thankam; Spallek, Heiko; Wali, Teena; Kerr, Alexander Ross; Schleyer, Titus
Objective Oral and pharyngeal cancers are responsible for over 7,600 deaths each year in the United States. Given the significance of the disease and the fact that many individuals increasingly rely on health information on the Internet, it is important that patients and others can access clear and accurate oral cancer information on the Web. The objective of this study was threefold: a) develop an initial method to evaluate surface and content quality of selected English- and Spanish-language oral cancer Web sites; b) conduct a pilot evaluation; and c) discuss implications of our findings for dental public health. Methods We developed a search strategy to find oral cancer sites frequented by the public using Medline Plus, Google, and Yahoo in English and Spanish. We adapted the Information Quality Tool (IQT) to perform a surface evaluation and developed a novel tool to evaluate site content for 24 sites each in English and Spanish. Results English-language sites had an average IQT score of 76.6 (out of 100) and an average content score of 52.1 (out of 100). Spanish-language sites had an average IQT score of 50.3 and an average content score of 25.6. Conclusions The study produced a quality assessment of oral cancer Web sites useful for clinicians and patients. Sites provided more information on clinical presentation, etiology, and risk factors than on other aspects of oral cancer. The surface and content quality of Spanish-language sites was low, possibly putting Hispanic populations at a disadvantage regarding oral cancer information on the Web. PMID:21774133
Possible links of cyberbullying with suicide and psychological problems have recently received considerable attention. Suicide-related behaviors have also been linked with viewing of associated web content. Studies on traditional bullying indicate that the roles of bullying involvement (bullies, victims, and bully-victims) matter in terms of associations with specific suicide-related behaviors and psychological problems. Yet, related research in the area of cyberbullying is lacking. The current study investigates the association of cyberbullying roles with viewing of specific suicide-related web content and psychological problems. Data from N = 19,406 (50 percent girls) 11-16-year-olds (M = 13.54, SD = 1.68) of a representative sample of Internet-using children in Europe were analyzed. Self-reports were obtained for cyberbullying role, viewing of web content related to self-harm, and suicide, as well as the emotional, peer, and conduct problem subscales of the Strengths and Difficulties Questionnaire (SDQ). Multinomial logistic regression analyses revealed that compared with those not involved in cyberbullying, viewing of web content related to suicide was higher for cybervictims and cyberbully-victims, but not for cyberbullies. Viewing of web content related to self-harm was higher for all cyberbullying roles, especially for cyberbully-victims. Rates of emotional problems were higher among cybervictims and cyberbully-victims, rates of peer problems were higher for cybervictims, and rates of conduct problems were higher for all cyberbullying roles. Moreover, the links between cyberbullying role and viewing of suicide-related web content were independent of psychological problems. The results can be useful to more precisely target efforts toward the specific problems of each cyberbullying role. The outcomes on viewing of web content also indicate an opportunity to enhance the presence of health service providers on Internet platforms.
Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.
This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology and multiple whole-genome comparisons, which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.
Ferri, Delia; Giannoumis, G Anthony
Reflecting the commitments undertaken by the EU through the conclusion of the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the European Disability Strategy 2010–2020 not only gives a prominent position to accessibility, broadly interpreted, but also suggests an examination of the obligations for access to cultural goods and services. The European Disability Strategy 2010–2020 expressly acknowledges that EU action will support national activities to make sports, leisure, cultural and recreational organizations and activities accessible, and use the possibilities for copyright exceptions in the Directive 2001/29/EC (Infosoc Directive). This article discusses to what extent the EU has realized the principle of accessibility and the right to access cultural goods and services envisaged in the UNCRPD. Previous research has yet to explore how web accessibility and digitization interact with the cultural dimension of disability policy in the European Union. This examination attempts to fill this gap by discussing to what extent the European Union has put this cultural dimension into effect and how web accessibility policies and the digitization of cultural materials influence these efforts.
Pawlicki, T; Brown, D; Dunscombe, P; Mutic, S
Singh, Kulwinder; Park, Dong-Won
We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible via a toll-free DID (Direct Inward Dialing) number from a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth’s surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI), and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which supply data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user’s coordinates and thereby efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily price of gas or motels, automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
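The distance measure can be sketched directly: convert latitude/longitude to 3-D coordinates and take the straight-line (chord) distance, which preserves nearest-neighbour ordering because shorter geodesic distances correspond to shorter chords. The coordinates and business names below are illustrative:

```python
import math

# Chord (straight-line-through-the-Earth) distance between two surface
# points given latitude/longitude, as used for nearest-business ranking.

EARTH_RADIUS_KM = 6371.0

def chord_distance_km(lat1, lon1, lat2, lon2):
    """Euclidean distance between two surface points embedded in 3-D."""
    def to_xyz(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (EARTH_RADIUS_KM * math.cos(la) * math.cos(lo),
                EARTH_RADIUS_KM * math.cos(la) * math.sin(lo),
                EARTH_RADIUS_KM * math.sin(la))
    return math.dist(to_xyz(lat1, lon1), to_xyz(lat2, lon2))

def nearest(user, businesses):
    """Return the (name, lat, lon) record closest to the user position."""
    return min(businesses, key=lambda b: chord_distance_km(*user, b[1], b[2]))

user = (40.7128, -74.0060)  # illustrative caller position
places = [("clinic-A", 40.73, -73.99), ("clinic-B", 40.65, -74.10)]
```

Chord distance slightly understates surface distance, but since the mapping from geodesic to chord is monotonic, the "nearest" result is the same either way.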
Schulz, Kristin; Hunger, Sindy; Brown, George G; Tsai, Siu M; Cerri, Carlos C; Conrad, Ralf; Drake, Harold L
The anoxic saccharide-rich conditions of the earthworm gut provide an ideal transient habitat for ingested microbes capable of anaerobiosis. It was recently discovered that the earthworm Eudrilus eugeniae from Brazil can emit methane (CH4) and that ingested methanogens might be associated with this emission. The objective of this study was to resolve trophic interactions of bacteria and methanogens in the methanogenic food web in the gut contents of E. eugeniae. RNA-based stable isotope probing of bacterial 16S rRNA as well as mcrA and mrtA (the alpha subunit of methyl-CoM reductase and its isoenzyme, respectively) of methanogens was performed with [13C]-glucose as a model saccharide in the gut contents. Concomitant fermentations were augmented by the rapid consumption of glucose, yielding numerous products, including molecular hydrogen (H2), carbon dioxide (CO2), formate, acetate, ethanol, lactate, succinate and propionate. Aeromonadaceae-affiliated facultative aerobes, and obligate anaerobes affiliated to Lachnospiraceae, Veillonellaceae and Ruminococcaceae were associated with the diverse fermentations. Methanogenesis was ongoing during incubations, and 13C-labeling of CH4 verified that supplemental [13C]-glucose derived carbon was dissimilated to CH4. Hydrogenotrophic methanogens affiliated with Methanobacteriaceae and Methanoregulaceae were linked to methanogenesis, and acetogens related to Peptostreptoccocaceae were likewise found to be participants in the methanogenic food web. H2 rather than acetate stimulated methanogenesis in the methanogenic gut content enrichments, and acetogens appeared to dissimilate supplemental H2 to acetate in methanogenic enrichments. These findings provide insight on the processes and associated taxa potentially linked to methanogenesis and the turnover of organic carbon in the alimentary canal of methane-emitting E. eugeniae. PMID:25615437
Celli, Fabrizio; Malapela, Thembani; Wegner, Karna; Subirats, Imma; Kokoliou, Elena; Keizer, Johannes
AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data and, through this core data, retrieves related content across online information systems by taking advantage of Semantic Web capabilities. AGRIS is a global public good, and its vision is to be responsive to user needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS’s future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback are assimilated into the development of the AGRIS application and its content coverage. This paper outlines the current AGRIS technical set-up, its network of partners, data providers and users, as well as how AGRIS’s responsiveness to clients’ needs inspires the continuous technical development of the application. The paper concludes by providing a use case of how AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery. PMID:26339471
Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.
The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing, and area sub-setting the large matrix of real-time model data holdings. This approach ensures efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to obtain area subsets, either as repackaged GRIB2 files or as values selected by ensemble component, (forecast) time, vertical level, global horizontal location, and variable: virtually a 6-dimensional analysis service across the Internet.
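Area sub-setting of this kind is typically expressed as an OPeNDAP constraint on index ranges. A minimal sketch, with a hypothetical endpoint, dataset path, and variable name (the `[start:stop]` hyperslab syntax itself, optionally `[start:stride:stop]`, is standard OPeNDAP):

```python
# Sketch of building an OPeNDAP/GDS subset request of the kind NOMADS
# serves. The hostname, dataset path and variable name are hypothetical;
# only the constraint-expression syntax is standard OPeNDAP.

def subset_url(base, variable, **dims):
    """Build a constraint expression selecting index ranges per dimension,
    in the conventional time/lev/lat/lon order."""
    order = ("time", "lev", "lat", "lon")
    constraint = variable + "".join(
        "[%d:%d]" % dims[d] for d in order if d in dims)
    return base + ".dods?" + constraint

# Request 4 forecast times of one variable at one level over a small
# lat/lon window, so only that slab crosses the network.
url = subset_url(
    "https://example.gov/dods/gfs",   # hypothetical endpoint
    "tmpprs",
    time=(0, 3), lev=(0, 0), lat=(100, 140), lon=(200, 260))
```

Because the server evaluates the constraint before responding, the client receives only the requested slab plus metadata, which is the efficiency property the abstract describes.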
Emery, W.; Baldwin, D.
Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X-windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users who only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index Terms: data system, earth science, online access, satellite data.
Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo
In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of comet 67P Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started developing MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI format) and 3D very high-resolution files to be viewed by means of the free software Paraview. During this period, the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future, the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.
A cylindrical access-tube-mounted waveguide was designed for in-situ soil water content sensing using time-domain reflectometry (TDR). To optimize the design with respect to sampling volume and losses, we derived the electromagnetic fields produced by a TDR sensor with cylindrical geo...
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
Boersma, Christiaan; Sánchez de Armas, F.; Ricca, A.; Cami, J.; Peeters, E.; Mattioda, A. L.; Bauschlicher, C. W., Jr.; Allamandola, L. J.
The features formerly known as the Unidentified Infrared (UIR) Emission Bands are now generally attributed to polycyclic aromatic hydrocarbons (PAHs). Exploitation of these features as astrophysical and astrochemical probes requires the IR properties of PAHs under interstellar conditions. To fulfill this need, we experimentally measured and theoretically computed the 2-2000 µm spectra of many PAHs over the past 18 years at NASA's Ames Research Center. Today's collection comprises about 600 theoretically computed and 60 laboratory measured spectra of PAHs in different forms. The molecules in the collection range in size from C10H8 to C130H28. For most of these, spectra are available for PAHs in their neutral and singly charged (+/-) states. In some cases, IR spectra of multiply charged species were also computed. The database includes pure PAHs; PAHs containing nitrogen (PANHs), oxygen, and silicon; PAHs with side groups; PAHs with extra hydrogens; and PAHs complexed with iron and magnesium. This collection of PAH spectra from 2 - 2000 µm has been assembled into a uniform database, which we will make publicly available on the web in early 2009. A WebGUI interface has been developed that can effectively interrogate the database using a variety of queries, such as formula, molecular name, charge, specific number of atoms, etc. Several molecules can be selected in such a process and one can obtain their 3-D structures, plot and co-add their spectra, adjust parameters such as the bandwidth, download their data and print graphs. The database can also be downloaded as a whole and IDL-routines are provided to interrogate it. This talk will present an overview of the contents and the web-GUI tools of the NASA Ames PAH IR Spectroscopic Database. Hands-on demonstrations will be available at the SOFIA Booth.
Bakay, Marina; Zhao, Po; Chen, Josephine; Hoffman, Eric P
more specific dysregulation induced by dystrophin deficiency. We found two Y-linked genes expressed solely in male muscle (RPS4Y, DDX3Y), and two autosomal genes expressed much more highly in female muscle (GRO2, ZNF91) (all comparisons P<0.01). Finally, we present the first web-accessible expression profiling database for all data, including image files (.dat), processed image files (.cel), and complete comparison files, which are publicly available through a novel queriable web site that permits query-by-gene across all profiles (http://microarray.cnmcresearch.org/pga). These data enumerate the full range of molecular changes occurring downstream of dystrophin deficiency, and provide a web-accessible platform to study the specificity of transcriptional pathway alterations in muscle disease.
O'Neil, Daniel A.
Large-scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
Graham, J.; Kanov, K.; Givelberg, E.; Burns, R.; Eyink, G.; Szalay, A.; Meneveau, C.; Lee, M. K.; Malaya, N.; Moser, R. D.
In this presentation we describe a new public database archiving a DNS data set of the space-time evolution of fully developed channel flow at Reτ = 1000. The database will contain data from a DNS of channel flow with a domain size of 8π × 2 × 3π, at a resolution of 2048 × 512 × 1536, with 2048 time frames of velocity and pressure fields spanning about one flow-through time scale. After simulation, the data are ingested into the database cluster using a space-filling Morton curve to index the computational space uniformly, and also to organize data partition and distribution. The database system allows users to access and process the data remotely through an interface based on the Web-Service model. Users are thus able to perform numerical experiments on the high-resolution DNS data using even modest desktop computers. Test calculations are performed to illustrate the usage of the system and to verify the correctness of the data. Construction of the database also involves development of MPI-DB, a new tool to facilitate coupling of parallel simulations and databases. Support provided by, among others, the National Science Foundation CDI-II grant CMMI-0941530.
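The space-filling Morton curve mentioned above maps a 3D grid index to a single integer by interleaving the bits of the three coordinates, so that points close in space tend to be close in the 1D key, which helps partition and distribute the data. The abstract does not give the actual indexing code, so the following is only an illustrative sketch of standard Z-order (Morton) encoding for 10-bit coordinates, not the database's real implementation:

```python
def part1by2(n):
    """Spread the bits of a 10-bit integer so each lands 3 positions apart."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z):
    """Interleave the bits of (x, y, z) into one key; x occupies the lowest bit."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

Neighboring grid points then share long prefixes of the key, which is what makes range queries over a spatial region touch few partitions.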
Estrada, Jorge; Bernadó, Pau; Blackledge, Martin; Sancho, Javier
Background The stability of proteins is governed by the heat capacity, enthalpy and entropy changes of folding, which are strongly correlated to the change in solvent-accessible surface area experienced by the polypeptide. While the surface exposed in the folded state can be easily determined, accessibilities for the unfolded state at the atomic level cannot be obtained experimentally and are typically estimated using simplistic models of the unfolded ensemble. A web application providing realistic accessibilities of the unfolded ensemble of a given protein at the atomic level would therefore be useful. Results ProtSA, a web application that calculates sequence-specific solvent accessibilities of the unfolded state ensembles of proteins, has been developed and made freely available to the scientific community. The input is the amino acid sequence of the protein of interest. ProtSA follows a previously published calculation protocol which uses the Flexible-Meccano algorithm to generate unfolded conformations representative of the unfolded ensemble of the protein, and uses the exact analytical software ALPHASURF to calculate atom solvent accessibilities, which are averaged over the ensemble. Conclusion ProtSA is a novel tool for the researcher investigating protein folding energetics. The sequence-specific atom accessibilities provided by ProtSA will allow better estimates of the contribution of the hydrophobic effect to the free energy of folding, will help to refine existing parameterizations of protein folding energetics, and will be useful to understand the influence of point mutations on protein stability. PMID:19356231
Hernández, Ivet; Aguilar, Consuelo; González Sanón, Gaspar
Trophic webs of reef fishes in northwestern Cuba. I. Stomach contents. Studies on the reef fishes of Cuba are not rare, but most have two basic limitations: small sample sizes and exclusion of small species. Our study sampled more species and larger samples in the sublittoral region of Havana city (23 degrees 7.587' N, 82 degrees 25.793' W), 2-18 m deep. We collected fish weekly from October 2004 through February 2006 with traps and harpoon. Overfishing has modified the fish communities. We used the relative importance index to describe the diets of carnivore and omnivore species, and a modification of the relative abundance method for the herbivores and sponge-eating species. The main food items are benthic crustaceans (crabs, shrimp, copepods) and bony fish (mainly demersal species). Most species are euryphagous and thus less affected by anthropic disturbance than specialist feeders.
Giannoumis, G Anthony
Despite different historical traditions, previous research demonstrates a convergence between regulatory approaches in the United Kingdom and Norway. To understand this convergence, this article examines how different policy traditions influence the legal obligations of performance standards regulating web content for use by persons with disabilities. While convergence has led to similar policy approaches, I argue that national policy traditions have an impact on how governments establish legal obligations for standards compliance. The analysis reveals that national policy traditions influenced antidiscrimination legislation and the capacity and authority of regulatory agencies, which impacted the diverging legal obligations of standards in the United Kingdom and Norway. The analysis further suggests that policy actors mediate the reciprocal influence between national policy traditions and regulatory convergence mechanisms.
Bennett, W F; Spigos, D G; Vaswani, K V; Terrell, J E
This presentation describes our experiences using web-based viewing software and a browser to view our picture archiving and communication system (PACS) images at a remote site over a cable modem Internet connection. Our testing shows that using a cable modem to access our radiology web server produces acceptable transmission speeds to remote sites. The average time-to-display (TTD) for 16 computed tomography (CT) images on the web-based intranet system in our hospital was 7 to 8 seconds. Using a cable modem and comparable equipment at a remote site, the average TTD is 16 seconds over the Internet. The TTD does not significantly change during various hours of the day. Security for our hospital-based PACS is provided by a firewall. Access through the firewall is accomplished using virtual private network (VPN) software, a secure ID, and encryption. We have found that this is a viable method for after-hours subspecialty radiology consultation.
W.A.V.E.S. stands for the Web-Accessible Visualization and Extraction System. Implemented in 2007, this specialized data interface allows users to search for ocean carbon data and receive on screen tables of data, data plots, or data files to download. An interactive map assists in the search, which has many customized search and output parameters. Both discrete data and underway data from ships' cruises are available for search.
Barsoum, Emad; Kuester, Falko
The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D web browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever-increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: the Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create online data entry forms for use within and across sites, through which data are pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system, along with its documentation, is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site.
Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron
Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)
This study examines the content of and audience response to organ donation videos on YouTube, a Web 2.0 platform, with framing theory. Positive frames were identified in both video content and audience comments. Analysis revealed a reciprocity relationship between media frames and audience frames. Videos covered content categories such as kidney, liver, organ donation registration process, and youth. Videos were favorably rated. No significant differences were found between videos produced by organizations and individuals in the United States and those produced in other countries. The findings provide insight into how new communication technologies are shaping health communication in ways that differ from traditional media. The implications of Web 2.0, characterized by user-generated content and interactivity, for health communication and health campaign practice are discussed.
Murray, Kathleen R.; Hsieh, Inga K.
The Web-at-Risk project is a digital preservation project funded by the Library of Congress as part of the National Digital Information Infrastructure and Preservation Program. The project is developing a Web archiving service to enable curators to build, store, and manage archived collections of Web-published materials captured largely from U.S.…
Hadlaczky, Gergö; Westerlund, Joakim; Wasserman, Danuta; Balazs, Judit; Germanavicius, Arunas; Machín, Núria; Meszaros, Gergely; Sarchiapone, Marco; Värnik, Airi; Varnik, Peeter; Westerlund, Michael; Carli, Vladimir
Background Adolescents and young adults are among the most frequent Internet users, and accumulating evidence suggests that their Internet behaviors might affect their mental health. Internet use may impact mental health because certain Web-based content could be distressing. It is also possible that excessive use, regardless of content, produces negative consequences, such as neglect of protective offline activities. Objective The objective of this study was to assess how mental health is associated with (1) the time spent on the Internet, (2) the time spent on different Web-based activities (social media use, gaming, gambling, pornography use, school work, newsreading, and targeted information searches), and (3) the perceived consequences of engaging in those activities. Methods A random sample of 2286 adolescents was recruited from state schools in Estonia, Hungary, Italy, Lithuania, Spain, Sweden, and the United Kingdom. Questionnaire data comprising Internet behaviors and mental health variables were collected and analyzed cross-sectionally and were followed up after 4 months. Results Cross-sectionally, both the time spent on the Internet and the relative time spent on various activities predicted mental health (P<.001), explaining 1.4% and 2.8% of the variance, respectively. However, the consequences of engaging in those activities were more important predictors, explaining 11.1% of the variance. Only Web-based gaming, gambling, and targeted searches had mental health effects that were not fully accounted for by perceived consequences. The longitudinal analyses showed that sleep loss due to Internet use (β=.12, 95% CI=0.05-0.19, P=.001) and withdrawal (negative mood) when the Internet could not be accessed (β=.09, 95% CI=0.03-0.16, P<.01) were the only consequences that had a direct effect on mental health in the long term. Perceived positive consequences of Internet use did not seem to be associated with mental health at all. Conclusions The magnitude of Internet use is
Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.
Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…
Silvestre, Jason; Vargas, Christina R; Ho, Olivia; Lee, Bernard T
Microsurgery fellowship applicants utilize Internet-based resources such as the San Francisco Match (SF Match) to manage their applications. In deciding where to apply, applicants rely on advice from mentors and online resources, including microsurgery fellowship websites (MFWs). The purpose of this study was to evaluate the content and accessibility of MFWs. While microsurgery is practiced by many surgical specialties, this study focused on MFWs for programs available in the 2014 Microsurgery Fellowship Match. Program lists from the American Society for Reconstructive Microsurgery (ASRM) and the San Francisco Match (SF Match) were analyzed for the accessibility of MFW links. MFWs were evaluated for education and recruitment content, and MFW comprehensiveness was compared on the basis of program characteristics using chi-square tests. Of the 25 fellowships available, only 18 had websites (72%). SF Match and ASRM listed similar programs (96% overlap) and provided website links (89%, 76%), but only a minority connected directly to the MFW (38%, 23%). A minority of programs were responsive via email inquiry (36%). MFWs maintained minimal education and recruitment content. MFW comprehensiveness was not associated with program characteristics. MFWs are often not readily accessible and contain limited information for fellowship applicants. Given the relatively low cost of website development, MFWs may be improved to facilitate fellow recruitment.
Pliutau, Denis; Prasad, Narashimha S.
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modification, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. An additional benefit is that multiple levels of averaging are provided, since map tiles must be generated at varying resolutions for the various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
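The core idea of such a data-encoded tile is that each pixel's color channels carry a quantized physical value rather than a display color; the client decodes the channels back to the value and renders colors itself. The abstract does not specify the exact channel packing, so the following is only a hypothetical sketch of one common scheme: quantizing a value into a 24-bit integer split across the R, G, and B bytes of a PNG pixel (the range bounds vmin/vmax are assumed to be published as tile metadata):

```python
def encode_rgb(value, vmin, vmax):
    """Quantize a physical value in [vmin, vmax] to 24 bits split across R, G, B."""
    q = round((value - vmin) / (vmax - vmin) * 0xFFFFFF)
    return (q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF

def decode_rgb(r, g, b, vmin, vmax):
    """Recover the physical value from the three channel bytes."""
    q = (r << 16) | (g << 8) | b
    return vmin + q / 0xFFFFFF * (vmax - vmin)
```

Because PNG compression is lossless, the round trip loses only the quantization error (about 1 part in 16.7 million of the value range), and a browser can filter or recolor tiles per pixel without contacting the server.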
Eberle, Jonas; Urban, Marcel; Hüttich, Christian; Schmullius, Christiane
Numerous datasets providing temperature information from meteorological stations or remote sensing satellites are available. However, the challenging issue is to search the archives and process the time series information for further analysis. These steps can be automated for each individual product, provided certain preconditions are met, e.g., data access through web services (HTTP, FTP) or legal rights to redistribute the datasets. Therefore a Python-based package was developed to provide data access and data processing tools for MODIS Land Surface Temperature (LST) data, which is provided by the NASA Land Processes Distributed Active Archive Center (LP DAAC), as well as the Global Surface Summary of the Day (GSOD) and the Global Historical Climatology Network (GHCN) daily datasets provided by the NOAA National Climatic Data Center (NCDC). The package to access and process the information is available as web services used by an interactive web portal for simple data access and analysis. Tools for time series analysis were linked to the system, e.g., time series plotting, decomposition, aggregation (monthly, seasonal, etc.), trend analyses, and breakpoint detection. Especially for temperature data, a plot was integrated for the comparison of two temperature datasets based on the work by Urban et al. (2013). As a first result, a kernel density plot compares daily MODIS LST from the satellites Aqua and Terra with daily means from the GSOD and GHCN datasets. Without any data download or processing, users can analyze different time series datasets in an easy-to-use web portal. As a first use case, we built up this complementary system with remotely sensed MODIS data and in-situ measurements from meteorological stations for Siberia within the Siberian Earth System Science Cluster (www.sibessc.uni-jena.de). References: Urban, Marcel; Eberle, Jonas; Hüttich, Christian; Schmullius, Christiane; Herold, Martin. 2013. "Comparison of Satellite-Derived Land Surface Temperature and Air
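Among the processing tools mentioned, temporal aggregation is the simplest to illustrate: a daily series is bucketed by calendar month and each bucket averaged. The package's actual implementation is not shown in the abstract, so this is only a minimal stdlib sketch of monthly-mean aggregation under the assumption that dates arrive as ISO 'YYYY-MM-DD' strings:

```python
from collections import defaultdict

def monthly_means(dates, values):
    """Aggregate a daily series to monthly means.

    dates:  ISO date strings 'YYYY-MM-DD'
    values: matching numeric observations
    """
    acc = defaultdict(lambda: [0.0, 0])   # month key -> [sum, count]
    for d, v in zip(dates, values):
        key = d[:7]                       # 'YYYY-MM' bucket
        acc[key][0] += v
        acc[key][1] += 1
    return {k: s / n for k, (s, n) in sorted(acc.items())}
```

Seasonal or annual aggregation follows the same pattern with a different bucket key (e.g., the year plus a season label instead of `d[:7]`).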
Long, J. W.
Lorenz is a product of the ASC Scientific Data Management effort. Lorenz is a web-based application designed to help computer centers make information and resources more easily available to their users.
Schlabach, David M.
The increased dependency on the World Wide Web by both laboratories and their customers has led LIMS developers to take advantage of thin-client web applications that provide both remote data entry and manipulation, along with remote reporting functionality. Use of a LIMS through a web browser allows a person to interact with a distant application, providing both remote administration and real-time analytical result delivery from virtually anywhere in the world. While there are many benefits of web-based LIMS applications, some consideration must be given to these new methods of system architecture before they can be justified as a suitable replacement for traditional client-server systems. Developers and consumers alike must consider the security aspects of introducing a wide-area-network-capable system into a production environment, as well as concerns of data integrity and usability. PMID:18924736
Nevile, Liddy; Treviranus, Jutta
This paper describes the interoperability underpinning a new strategy for delivering accessible computer-based resources to individual learners based on their specified needs and preferences in the circumstances in which they are operating. The new accessibility strategy, known as "AccessForAll," augments the model of universal…
... HUMAN SERVICES Food and Drug Administration Accessible Medical Device Labeling in a Standard Content and... content and format for medical device labeling and the use of a repository containing medical device... session. Standard content and format of full labeling and a shortened version of labeling will...
Koscielny, Gautier; Yaikhom, Gagarine; Iyer, Vivek; Meehan, Terrence F.; Morgan, Hugh; Atienza-Herrero, Julian; Blake, Andrew; Chen, Chao-Kung; Easty, Richard; Di Fenza, Armida; Fiegel, Tanja; Grifiths, Mark; Horne, Alan; Karp, Natasha A.; Kurbatova, Natalja; Mason, Jeremy C.; Matthews, Peter; Oakley, Darren J.; Qazi, Asfand; Regnart, Jack; Retha, Ahmad; Santos, Luis A.; Sneddon, Duncan J.; Warren, Jonathan; Westerberg, Henrik; Wilson, Robert J.; Melvin, David G.; Smedley, Damian; Brown, Steve D. M.; Flicek, Paul; Skarnes, William C.; Mallon, Ann-Marie; Parkinson, Helen
The International Mouse Phenotyping Consortium (IMPC) web portal (http://www.mousephenotype.org) provides the biomedical community with a unified point of access to mutant mice and a rich collection of related emerging and existing mouse phenotype data. IMPC mouse clinics worldwide follow rigorous, highly structured and standardized protocols for the experimentation, collection and dissemination of data. Dedicated ‘data wranglers’ work with each phenotyping center to collate data and perform quality control of data. An automated statistical analysis pipeline has been developed to identify knockout strains with a significant change in the phenotype parameters. Annotation with biomedical ontologies allows biologists and clinicians to easily find mouse strains with phenotypic traits relevant to their research. Data integration with other resources will provide insights into mammalian gene function and human disease. As phenotype data become available for every gene in the mouse, the IMPC web portal will become an invaluable tool for researchers studying the genetic contributions of genes to human diseases. PMID:24194600
van den Hengel, Ylva KA; van Loon, A Jeanne M; Rademakers, Jany
Background On more and more websites, consumers are provided with public reports about health care. This move toward provision of more comparative information has resulted in different information types being published that often contain contradictory information. Objective The objective was to assess the current state of the art in the presentation of online comparative health care information and to compare how the integration of different information types is dealt with on websites. The content analysis was performed in order to provide website managers and Internet researchers with a resource of knowledge about presentation formats being applied internationally. Methods A Web search was used to identify websites that contained comparative health care information. The websites were systematically examined to assess how three different types of information (provider characteristics and services, performance indicators, and health care user experience) were presented to consumers. Furthermore, a short survey was disseminated to the reviewed websites to assess how the presentation formats were selected. Results We reviewed 42 websites from the following countries: Australia, Canada, Denmark, Germany, Ireland, the Netherlands, Norway, the United Kingdom, the United States, and Sweden. We found that the most common ways to integrate different information types were the two extreme options: no integration at all (on 36% of the websites) and high levels of integration in single tables (on 41% of the websites). Nearly 70% of the websites offered drill-down paths to more detailed information. Diverse presentation approaches were used to display comparative health care information on the Internet. Numbers were used on the majority of websites (88%) to display comparative information. Conclusions Currently, approaches to the presentation of comparative health care information do not seem to be systematically selected. It seems important, however, that website managers become
Hulse, Nathan C.; Long, Jie; Xu, Xiaomin; Tao, Cui
Infobuttons have proven to be an increasingly important resource in providing a standardized approach to integrating useful educational materials at the point of care in electronic health records (EHRs). They provide a simple, uniform pathway for both patients and providers to receive pertinent education materials in a quick fashion from within EHRs and Personalized Health Records (PHRs). In recent years, the international standards organization Health Level Seven has balloted and approved a standards-based pathway for requesting and receiving data for infobuttons, simplifying some of the barriers for their adoption in electronic medical records and amongst content providers. Local content, developed by the hosting organization themselves, still needs to be indexed and annotated with appropriate metadata and terminologies in order to be fully accessible via the infobutton. In this manuscript we present an approach for automating the annotation of internally-developed patient education sheets with standardized terminologies and compare and contrast the approach with manual approaches used previously. We anticipate that a combination of system-generated and human reviewed annotations will provide the most comprehensive and effective indexing strategy, thereby allowing best access to internally-created content via the infobutton. PMID:25954376
Elicker, Joelle D.; O'Malley, Alison L.; Williams, Christine M.
We examined whether students with access to a supplemental course Web site enhanced with e-mail, discussion boards, and chat room capability reacted to it more positively than students who used a Web site with the same content but no communication features. Students used the Web sites on a voluntary basis. At the end of the semester, students…
Kukafka, R; Lussier, Y A; Patel, V L; Cimino, J J
This paper describes how theory facilitated the development of educational content for the MI-HEART project, a tailored Web-based intervention designed to favorably influence the appropriateness and rapidity of decision-making in patients suffering from symptoms of acute myocardial infarction. There were five steps involved: 1) formulating the behavioral goal, 2) defining intervention objectives based on an analysis of the determinants of behavior, 3) developing an assessment tool to measure a person's status on these determinants, 4) creating tailored content that addresses individual variation on determinants of the health behavior, and 5) developing algorithms and a computer program that link responses from the assessment to specific tailored communication. The approach we describe largely distinguishes Web-based applications that are designed to change health behavior from those that simply impart information. Developers of Web-based applications that propose to improve health status by modifying health-related behaviors need to understand that although it is said that we live in an "information age", simply increasing knowledge has not been effective in changing behaviors in most instances. Furthermore, the one-size-fits-all approach to developing educational content cannot address the needs, concerns and interests of different individuals. With informatics technology, our ability to collect information from individuals and provide educational content tailored to the specific information collected is not only possible, but practical.
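The final step this abstract describes — an algorithm linking assessment responses to tailored communication — can be sketched as a lookup from behavioral determinants to message fragments. The determinant names and messages below are invented for illustration and are not taken from the MI-HEART project:

```python
# Minimal sketch of tailoring logic: each behavioral determinant assessed in
# the questionnaire maps to a message variant (all content here is invented).
MESSAGES = {
    "perceived_risk": {
        "low":  "People with your profile often underestimate heart attack risk.",
        "high": "You already recognize your risk; here is what to do about it.",
    },
    "knows_symptoms": {
        "no":  "Chest pressure, arm pain and shortness of breath are warning signs.",
        "yes": "You know the warning signs; act on them within minutes, not hours.",
    },
}

def tailor(assessment):
    """Select one message per determinant, based on the assessment responses."""
    return [MESSAGES[det][answer] for det, answer in assessment.items()]

content = tailor({"perceived_risk": "low", "knows_symptoms": "no"})
print(len(content))  # 2
```

The point of the sketch is the structure, not the content: the assessment output indexes directly into a message table, which is what distinguishes tailored delivery from a one-size-fits-all page.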
Iglesias, Ana; Moreno, Lourdes; Castro, Elena; Cuadra, Dolores
Nowadays the use of distance learning systems is widely extended in engineering education. Moreover, most of them use multimedia resources that sometimes are the only educational material available to provide certain educational knowledge to the students. Unfortunately, most of the current educational systems and their educational content present…
Leovic, Lydia K.
New telecommunications vehicles expand the possible ways that business is conducted. The hypermedia portion of the Internet, the World Wide Web, is such a telecommunications device. The Web is presently one of the most flexible and dynamic methods for electronic information dissemination. The level of technological sophistication necessary to…
The present research examined the use of Web 2.0 tools to improve students' vocabulary knowledge at the School of Foreign Languages, Gaziantep University. Current studies in literature mostly deal with descriptions of students' attitudes towards the reasons for the use of web-based platforms. However, integrating usual classroom environment with…
The World-Wide Web provides every internet citizen with access to an abundance of information, but it becomes increasingly difficult to identify the relevant pieces of information. Research in web mining tries to address this problem by applying techniques from data mining and machine learning to Web data and documents. This chapter provides a brief overview of web mining techniques and research areas, most notably hypertext classification, wrapper induction, recommender systems and web usage mining.
Bell, Hudson; Tang, Nelson K. H.
A user survey of 60 company Web sites (electronic commerce, entertainment and leisure, financial and banking services, information services, retailing and travel, and tourism) determined that 30% had facilities for conducting online transactions and only 7% charged for site access. Overall, Web sites were rated high in ease of access, content, and…
Weber, Kristi; Story, Mary; Harnack, Lisa
Americans are spending an increasing amount of time using "new media" like the Internet. There has been little research examining food and beverage Web sites' content and marketing practices, especially those that attract children and adolescents. The purpose of this study was to conduct a content analysis of food- and beverage-brand Web sites and the marketing techniques and advertising strategies present on these sites. The top five brands in eight food and beverage categories, 40 brands in total, were selected based on annual sales data from Brandweek magazine's annual "Superbrands" report. Data were collected using a standardized coding form. The results show a wide variety of Internet marketing techniques and advertising strategies targeting children and adolescents. "Advergaming" (games in which the advertised product is part of the game) was present on 63% of the Web sites. Half or more of the Web sites used cartoon characters (50%) or spokescharacters (55%), or had a specially designated children's area (58%) with a direct link from the homepage. With interactive media still in its developmental stage, there is a need to develop safeguards for children. Food and nutrition professionals need to advocate for responsible marketing techniques that will support the health of children.
Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.
Rocco, D; Liu, L; Critchlow, T
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Geiger, Brian; Evans, R. R.; Cellitti, M. A.; Smith, K. Hogan; O'Neal, Marcia R.; Firsing, S. L., III; Chandan, P.
Background: The Internet can be an invaluable resource for obtaining health information by people with disabilities. Although valid and reliable information is available, previous research revealed barriers to accessing health information online. Health education specialists have the responsibility to ensure that it is accessible to all users.…
Tandy, Cindy; Meacham, Mike
The concern of this article is the difficulties faced by disabled students as technology grows and expands in academia. Although distance learning, web-based courses, and hybrid courses, among other venues, have improved the chances for many people with disabilities of obtaining degrees, thereby increasing their life chances, we have met…
Peters, Tom; Bell, Lori
This month's column focuses on online Web conferencing software. This year promises to have the right mix of conditions to create significant growth in the use of these systems among libraries; library consortia, networks, and associations; and other library-related organizations. Travel budgets are being cut, librarian positions are being…
Zhao, Zhengmai; Hu, Jian
This paper presents an information retrieval system for delivering educational visual materials through the World Wide Web. The system is designed to meet the following user requirements: lecturers prefer direct control over their visual resources; lecturers demand a browser-based interface that will allow them to create and modify their online…
Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles
Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non-sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…
... notice of disciplinary or access denial action. 9.11 Section 9.11 Commodity and Securities Exchanges... OTHER ADVERSE ACTIONS Notice and Effective Date of Disciplinary Action or Access Denial Action § 9.11... effective by the exchange except as provided in § 9.12. (b) Contents of notice. For purposes of this...
Mitchell, Christine M.; Thurman, David A.
AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java Database Connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the
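The category-driven retrieval this abstract describes (applet to JDBC to Oracle tables) can be illustrated with a toy in-memory case base. The categories, questions, and answers below are invented placeholders, not actual DAAC help-desk data:

```python
# Toy case base: a two-level category structure mapping help topics to
# previously answered (question, answer) cases, mirroring the hierarchical
# category index AutoHelp presents to users.
CASE_BASE = {
    "data access": {
        "ftp problems": [("How do I resume a failed download?",
                          "Use a client that supports resumed transfers.")],
        "ordering": [("How do I order Level-2 granules?",
                      "Use the web order form and list the granule IDs.")],
    },
    "file formats": {
        "hdf": [("How do I read HDF files?",
                 "Use the HDF library or a tool that wraps it.")],
    },
}

def retrieve_cases(category, subcategory):
    """Return the cases filed under one category path; an empty list when
    no previous request matches, which would trigger a new help ticket."""
    return CASE_BASE.get(category, {}).get(subcategory, [])

print(len(retrieve_cases("data access", "ftp problems")))  # 1
```

The design point is that users navigate by recognizing a category rather than formulating a free-text query, so retrieval reduces to a keyed lookup.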
Kim, Sunghwan; Thiessen, Paul A; Bolton, Evan E; Bryant, Stephen H
PubChem (http://pubchem.ncbi.nlm.nih.gov) is a public repository for information on chemical substances and their biological activities, developed and maintained by the US National Institutes of Health (NIH). PubChem contains more than 180 million depositor-provided chemical substance descriptions, 60 million unique chemical structures and 225 million bioactivity assay results, covering more than 9000 unique protein target sequences. As an information resource for the chemical biology research community, it routinely receives more than 1 million requests per day from an estimated more than 1 million unique users per month. Programmatic access to this vast amount of data is provided by several different systems, including the US National Center for Biotechnology Information (NCBI)'s Entrez Utilities (E-Utilities or E-Utils) and the PubChem Power User Gateway (PUG)-a common gateway interface (CGI) that exchanges data through eXtensible Markup Language (XML). Further simplifying programmatic access, PubChem provides two additional general-purpose web services: PUG-SOAP, which uses the simple object access protocol (SOAP) and PUG-REST, which is a Representational State Transfer (REST)-style interface. These interfaces can be harnessed in combination to access the data contained in PubChem, which is integrated with the more than thirty databases available within the NCBI Entrez system.
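As a rough illustration of the PUG-REST interface mentioned above: a request is just a URL following an input/operation/output path pattern. The sketch below only assembles such a URL without contacting the service; the aspirin example follows PubChem's documented pattern, but treat the exact path as an assumption to verify against the PUG-REST documentation:

```python
def pug_rest_url(domain, namespace, identifier, operation, output="JSON"):
    """Assemble a PUG-REST request URL from its path segments:
    base / <input domain> / <namespace> / <identifier> / <operation> / <output>."""
    base = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"
    return "/".join([base, domain, namespace, identifier, operation, output])

# Example: molecular formula and weight for aspirin, looked up by compound name.
url = pug_rest_url("compound", "name", "aspirin",
                   "property/MolecularFormula,MolecularWeight")
print(url)
```

Because the whole request is encoded in the URL, the same call works from a browser, curl, or any HTTP client, which is what makes the REST-style interface the simpler of the two gateways.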
Keis, Felix; Chwala, Christian; Kunstmann, Harald
Using commercial microwave link networks for precipitation estimation has become popular in recent years. Acquiring the necessary data from the network operators is, however, still difficult. Usually, data is provided to researchers with large temporal delay and on an irregular basis. Driven by the demand to facilitate this data accessibility, a custom acquisition software for microwave links has been developed in cooperation with our industry partner Ericsson. It is capable of recording data from a large number of microwave links simultaneously and of forwarding the data instantaneously to a newly established KIT-internal database. It makes use of the Simple Network Management Protocol (SNMP) and collects the transmitter and receiver power levels via asynchronous SNMP requests. The software is currently in its first operational test phase, recording data from several hundred Ericsson microwave links in southern Germany. Furthermore, the software is used to acquire data with 1 Hz temporal resolution from four microwave links operated by the skiing resort in Garmisch-Partenkirchen. For convenient accessibility of this amount of data, we have developed a web frontend for the emerging microwave link database. It provides dynamic real-time visualization and basic processing of the recorded transmitter and receiver power levels. Here we will present details of the custom data acquisition software with focus on the design of the KIT microwave link database and on the specifically developed web frontend.
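The asynchronous polling pattern described here — issuing SNMP requests to many links concurrently rather than one after another — can be sketched with asyncio. The coroutine below merely stands in for a real SNMP GET; the link IDs and power values are placeholders, not actual OIDs or Ericsson readings:

```python
import asyncio

# Placeholder for an asynchronous SNMP GET of one link's transmit/receive
# power levels (in the real system this would query the device's OIDs).
async def poll_link(link_id):
    await asyncio.sleep(0.01)  # stands in for the SNMP round-trip latency
    return link_id, {"tx_power_dbm": -3.0, "rx_power_dbm": -47.5}

async def poll_all(link_ids):
    # Issue all requests concurrently, as the acquisition software does for
    # several hundred links; total wall time is one round-trip, not hundreds.
    return dict(await asyncio.gather(*(poll_link(i) for i in link_ids)))

readings = asyncio.run(poll_all(range(5)))
print(len(readings))  # 5
```

Concurrency is the essential choice at 1 Hz resolution: polling hundreds of links sequentially would not finish within one sampling interval.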
Morris, John M.; Smith, Steven V.; Mablekos, Carole; Fekete, John
Describes the development of a model Web-based distance-learning graduate course in Engineering Management at Drexel University (Pennsylvania). Highlights include a new pedagogical model; vendor selection; knowledge engineering; time commitment; development management; copyright issues; and costs. (LRW)
White, Ben; Willmott, Lindy; Tilse, Cheryl; Wilson, Jill; Lawson, Deborah; Pearce, Angela; Dunn, Jeffrey; Aitken, Joanne F; Feeney, Rachel; Jowett, Stephanie
Objective The aim of the present study was to identify online resources community members may access to inform themselves about their legal duties and rights in end-of-life decision making. Methods Resource mapping identified online resources that members of the public in New South Wales, Victoria and Queensland are likely to identify, and assessed the ease or difficulty in locating them. Resources were then critically analysed for accessibility of language and format using the Patient Education Materials Assessment Tool (PEMAT). Results Identified resources differed considerably based on whether search terms identified by community members or experts were used. Most resources focused on advance directives, enduring powers of attorney and substitute decision making. Relatively few provided information about legal duties (e.g. powers and responsibilities of substitute decision makers) or resolving conflict with health practitioners. Accessibility (understandability and actionability) of resource content varied. Conclusions Although numerous resources on end-of-life law are available online, community members may not be able to identify relevant resources or find resource content accessible. What is known about the topic? Research on participation by patients in decision making about their treatment has focused primarily on medical rather than legal knowledge. What does this paper add? The present study investigated which online resources community members may access to inform themselves about the law on end-of-life decision making. The resources identified were analysed for ease of location and content accessibility. What are the implications for practitioners? Authors of online resources on end-of-life decision making should consider whether their resources can be: (1) identified by search terms used by the public; (2) understood by a general audience; and (3) readily used to promote reader action.
Li, R.; Shen, Y.; Huang, W.; Wu, H.
Kraft, Angelina; Sens, Irina; Löwe, Peter; Dreyer, Britta
Globally resolvable, persistent digital identifiers have become an essential tool to enable unambiguous links between published research results and their underlying digital resources. In addition, this unambiguous identification allows citation. In an ideal research world, any scientific content should be citable and the coherent content, as well as the citation itself, should be persistent. However, today's scientists do not just produce traditional research papers - they produce comprehensive digital collections of objects which, alongside digital texts, include digital resources such as research data, audiovisual media, digital lab journals, images, statistics and software code. Researchers start to look for services which allow management of these digital resources with minimum time investment. In light of this, we show how the German National Library of Science and Technology (TIB) develops supportive frameworks to accompany the life cycle of scientific knowledge generation and transfer. This includes technical infrastructures for • indexing, cataloguing, digital preservation, DOI names and licensing for text and digital objects (the TIB DOI registration, active since 2004) and • a digital repository for the deposition and provision of accessible, traceable and citable research data (RADAR). One particular problem for the management of data originating from (collaborating) research infrastructures is their dynamic nature in terms of growth, access rights and quality. On a global scale, systems for access and preservation are in place for the big data domains (e.g. environmental sciences, space, climate). However, the stewardship for disciplines without a tradition of data sharing, including the fields of the so-called long tail, remains uncertain. The RADAR - Research Data Repository - project establishes a generic end-point data repository, which can be used in a collaborative way. RADAR enables clients to upload, edit, structure and describe their
Ames, Charles; Auernheimer, Brent; Lee, Young H.
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
Yang, Y Tony; Chen, Brian
Access to the Internet is increasingly critical for health information retrieval, access to certain government benefits and services, connectivity to friends and family members, and an array of commercial and social services that directly affect health. Yet older adults, particularly those with disabilities, are at risk of being left behind in this growing age- and disability-based digital divide. The Americans with Disabilities Act (ADA) was designed to guarantee older adults and persons with disabilities equal access to employment, retail, and other places of public accommodation. Yet older Internet users sometimes face challenges when they try to access the Internet because of disabilities associated with age. Current legal interpretations of the ADA, however, do not consider the Internet to be an entity covered by law. In this article, we examine the current state of Internet accessibility protection in the United States through the lens of the ADA, sections 504 and 508 of the Rehabilitation Act, state laws and industry guidelines. We then compare U.S. rules to those of OECD (Organisation for Economic Co-Operation and Development) countries, notably in the European Union, Canada, Japan, Australia, and the Nordic countries. Our policy recommendations follow from our analyses of these laws and guidelines, and we conclude that the biggest challenge in bridging the age- and disability-based digital divide is the need to extend accessibility requirements to private, not just governmental, entities and organizations.
Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.
The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and the OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvest from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.
Catarci, Tiziana; De Giovanni, Loredana; Gabrielli, Silvia; Kimani, Stephen; Mirabella, Valeria
There exist various guidelines for facilitating the design, preparation, and deployment of accessible eLearning applications and contents. However, such guidelines predominantly address accessibility in a rather technical sense, without giving sufficient consideration to the cognitive aspects and issues related to the use of eLearning materials by learners with disabilities. In this paper we describe how a user-centered design process was applied to develop a method and set of guidelines for didactical experts to scaffold their creation of accessible eLearning content, based on a more sound approach to accessibility. The paper also discusses possible design solutions for tools supporting eLearning content authors in the adoption and application of the proposed approach.
Comrie, A. C.; Redmond, K.; Glueck, M. F.; Reinbold, H.
The Western Climate Mapping Consortium (WestMap) has developed a prototype web-based interactive access and resource interface to optimize public dissemination and usage of fine-scale spatial climate time series for the western United States. The western U.S. focus reflects the complex climate interactions and diverse geography that make resource management, policy considerations, and climate research challenging in this region. WestMap was conceived by a consortium comprised of the University of Arizona/CLIMAS, the Western Regional Climate Center (WRCC)/Desert Research Institute, and the PRISM group at Oregon State University, along with collaborators at Scripps Institution of Oceanography/California Applications Project, NOAA Climate Diagnostics Center, and the USDA Natural Resource Conservation Service. WestMap evolved in direct response to a multitude of requests to the WRCC and the RISAs from public and private stakeholder communities for lengthy time series of fine-scale spatial climate aggregated to user-specified domains, and related user-friendly web-based access and analysis tools. The WestMap interface is designed to link three stakeholder-driven components: (1) climate data development and operations (access, maintenance); (2) error assessment, data analysis, diagnostics, and related tools; and (3) data access, visualization, and educational resources. The 100-year PRISM 4km monthly temperature and precipitation series serve as the initial data archive, updating automatically once in operational mode. Operational user components are being designed to allow direct stakeholder access to user-specified data and resources most relevant to current needs in a timely manner. Requested resources currently in development and limited testing stages include clickable maps, regional aggregate capabilities, basic statistical analysis, time series visualization, error assessment, and download/print capability. Phased prototype testing, currently underway internally, will
Introduction The aim of our study was to investigate the extent to which the Instructions to authors of Croatian open access (OA) journals address ethical issues. Do biomedical journals differ from journals in other disciplines in that respect? Our hypothesis was that biomedical journals maintain much higher publication ethics standards. Materials and methods This study examined the Instructions to authors of 197 Croatian OA journals for the following groups of ethical issues: general terms; guidelines and recommendations; research approval and registration; funding and conflict of interest; peer review; redundant publications, misconduct and retraction; copyright; timeliness; authorship; and data accessibility. We further compared a subset of 159 non-biomedical journals with a subset of 38 biomedical journals. Content analysis was used to discern the representation of ethical issues in the instructions to authors. Results The groups of biomedical and non-biomedical journals were similar in terms of originality (χ2 = 2.183, P = 0.140), peer review process (χ2 = 0.296, P = 0.586), patent/grant statement (χ2 = 2.184, P = 0.141), and timeliness of publication (χ2 = 0.369, P = 0.544). We identified significant differences among categories including ethical issues typical for the field of biomedicine, like patients (χ2 = 47.111, P < 0.001), and use of experimental animals (χ2 = 42.543, P < 0.001). Biomedical journals also rely heavily on international editorial guidelines formulated by relevant professional organizations, compared with non-biomedical journals (χ2 = 42.666, P < 0.001). Conclusion Low representation or absence of some key ethical issues in author guidelines calls for more attention to the structure and the content of Instructions to authors in Croatian OA journals. PMID:25672463
Truccolo, Ivana; Antonini, Marialuisa; Rinaldi, Fabio; Omero, Paolo; Ferrarin, Emanuela; De Paoli, Paolo; Tasso, Carlo
Background The use of complementary and alternative medicine (CAM) among cancer patients is widespread and mostly self-administered. Today, one of the most relevant topics is the nondisclosure of CAM use to doctors. This general lack of communication exposes patients to dangerous behaviors and to less reliable information channels, such as the Web. The Italian context scarcely differs from this trend. Today, we are able to mine and systematically analyze the unstructured information available on the Web, to get an insight into people’s opinions, beliefs, and rumors concerning health topics. Objective Our aim was to analyze Italian Web conversations about CAM, identify the most relevant Web sources, therapies, and diseases, and measure the related sentiment. Methods Data have been collected using the Web Intelligence tool ifMONITOR. The workflow consisted of 6 phases: (1) definition of eligibility criteria for the ifMONITOR search profile; (2) creation of a CAM terminology database; (3) generic Web search and automatic filtering; the results have been manually revised to refine the search profile, and stored in the ifMONITOR database; (4) automatic classification using the CAM database terms; (5) selection of the final sample and manual sentiment analysis using a 1-5 score range; (6) manual indexing of the Web sources and CAM therapy types retrieved. Descriptive univariate statistics were computed for each item: absolute frequency, percentage, central tendency (mean sentiment score [MSS]), and variability (standard deviation σ). Results Overall, 212 Web sources, 423 Web documents, and 868 opinions have been retrieved. The overall sentiment measured tends toward a good score (3.6 of 5). Quite a high polarization in the opinions of those partaking in the conversations emerged from the standard deviation analysis (σ≥1). In total, 126 of 212 (59.4%) Web sources retrieved were non-health-related. Facebook (89; 21%) and Yahoo Answers (41; 9.7%) were the most relevant. In total, 94 CAM
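The two aggregate statistics this abstract reports per item — mean sentiment score (MSS) and its variability σ over manual 1-5 scores — are straightforward to compute. The scores below are invented for illustration, and the abstract does not state whether σ is the population or sample deviation, so that choice is an assumption here:

```python
import statistics

# Invented manual sentiment scores on the 1-5 scale used in phase 5.
scores = [5, 4, 2, 4, 3, 5, 1, 4]

mss = statistics.mean(scores)        # central tendency: mean sentiment score
sigma = statistics.pstdev(scores)    # variability (population form assumed)
print(round(mss, 2), round(sigma, 2))
```

A σ at or above 1 on a 1-5 scale, as reported in the Results, indicates scores spread across roughly a fifth of the whole range — the polarization the authors describe.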
Elementary actions online establish an individual's existence on the web and her/his orientation toward different issues. In this sense, actions truly define a user in spaces like online forums and communities and the aggregate of elementary actions shape the atmosphere of these online spaces. This observation, coupled with the unprecedented scale…
Evitt, Marie Faust
One of the author's biggest challenges as a preschool teacher is helping children in a group see and touch and do. Hands-on explorations are important for everyone, but essential for young children. How can young children do hands-on explorations of spiders and their webs? Teachers do not want children handling all sorts of spiders. They worry…
Adcock, Amy B.; Duggan, Molly H.; Watson, Ginger S.; Belfore, Lee A.
This paper describes an assessment of a web-based interview simulation designed to teach empathetic helping skills. The system includes an animated character acting as a client and responses designed to recreate a simulated role-play, a common assessment method used for teaching these skills. The purpose of this study was to determine whether…
Charbonneau, Deborah Hile
Consumer-targeted prescription drug advertising serves as an interesting lens through which we can examine the portrayal of menopause in online drug advertisements. The aim of this study was to explore the portrayal of menopause on web sites sponsored by pharmaceutical companies for hormone therapies (HT). To unravel this question, a qualitative…
Akayuure, Peter; Apawu, Jones
The study was designed to engage prospective mathematics teachers in creating web learning modules. The aim was to examine the mathematical task and perceived pedagogical usability of the modules for mathematics instructions in Ghana. The study took place at University of Education, Winneba. Classes of 172 prospective mathematics teachers working…
Kadlec, J.; Ames, D. P.
The aim of the presented work is to create a freely accessible, dynamic and re-usable snow cover map of the world by combining snow extent and snow depth datasets from multiple sources. The examined data sources are: remote sensing datasets (MODIS, CryoLand), weather forecasting model outputs (OpenWeatherMap, forecast.io), ground observation networks (CUAHSI HIS, GSOD, GHCN, and selected national networks), and user-contributed snow reports on social networks (cross-country and backcountry skiing trip reports). For each type of dataset, an interface and an adapter are created. Each adapter supports queries by area, by time range, or by a combination of area and time range. The combined dataset is published as an online snow cover mapping service. This web service lowers the learning curve required to view, access, and analyze snow depth maps and snow time series. All data published by this service are licensed as open data, encouraging re-use of the data in customized applications in climatology, hydrology, sports and other disciplines. The initial version of the interactive snow map is on the website snow.hydrodata.org. This website supports viewing by time and viewing by site. In the view by time, the spatial distribution of snow for a selected area and time period is shown. In the view by site, the time-series charts of snow depth at a selected location are displayed. All snow extent and snow depth map layers and time series are accessible and discoverable through internationally approved protocols including WMS, WFS, WCS, WaterOneFlow and WaterML; therefore, they can also be easily added to GIS software or third-party web map applications. The central hypothesis driving this research is that the integration of user-contributed data and/or social-network derived snow data together with other open access data sources will result in more accurate and higher resolution - and hence more useful snow cover maps than satellite data or government agency produced data by
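The adapter pattern described above, with every data source answering the same queries by area, time range, or both, can be sketched as follows. This is an illustrative sketch only: the class and field names are invented, not taken from the snow.hydrodata.org implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Protocol

@dataclass(frozen=True)
class BoundingBox:
    min_lon: float
    min_lat: float
    max_lon: float
    max_lat: float

    def contains(self, lon: float, lat: float) -> bool:
        return (self.min_lon <= lon <= self.max_lon
                and self.min_lat <= lat <= self.max_lat)

class SnowAdapter(Protocol):
    """Common interface: every adapter supports area, time-range, or combined queries."""
    def query(self, area: Optional[BoundingBox] = None,
              start: Optional[date] = None,
              end: Optional[date] = None) -> list: ...

class InMemoryAdapter:
    """Toy adapter over (lon, lat, day, snow_depth_cm) records, standing in
    for a real source such as a ground observation network."""
    def __init__(self, records):
        self.records = records

    def query(self, area=None, start=None, end=None):
        out = []
        for lon, lat, day, depth in self.records:
            if area is not None and not area.contains(lon, lat):
                continue                  # outside the requested area
            if start is not None and day < start:
                continue                  # before the requested time range
            if end is not None and day > end:
                continue                  # after the requested time range
            out.append({"lon": lon, "lat": lat, "date": day, "depth_cm": depth})
        return out
```

A combined service can then merge the results of `query()` across adapters without knowing anything source-specific.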
Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES DISC) alone, on the order of hundreds of terabytes of data are available for distribution to scientists, students, and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWiFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI) and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model-data intercomparison. A full suite of data
Mackley, Rob D.; Last, George V.; Allwardt, Craig H.
The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access and exportation of available data in HBGIS. Users will benefit from enhancements such as a dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.
Fans of mobile devices are everywhere, and they are using their PDAs, smart phones, and mobile phones to access Web-based content. Chances are that they are trying to access your library's Web site or find library-based content for their devices. In this article, the author presents some tips on how to serve those who want to grab some fast info…
Sugarman, Jeremy; Lee, Linda
The primary objective of this project was to design and evaluate a series of web-based educational modules on genetics research ethics for members of Institutional Review Boards and investigators to facilitate the development and oversight of important research that is sensitive to the relevant ethical, legal and social issues. After a needs assessment was completed in March of 2003, five online educational modules on the ethics of research in genetics were developed, tested, and made available through a host website for AGREE: http://agree.mc.duke.edu/index.html. The 5 modules are: (1) Ethics and Genetics Research in Populations; (2) Ethics in Behavioral Genetics Research; (3) Ethical Issues in Research on Gene-Environment Interactions; (4) Ethical Issues in Reproductive Genetics Research; and (5) Ethical Issues in Diagnostic and Therapeutic Research. The development process adopted a tested approach used at Duke University School of Medicine in providing education for researchers and IRB members, supplementing it with expert input and a rigorous evaluation. The host website also included a description of the AGREE project; short bios of the AGREE Investigators and Expert Advisory Panel; streaming media of selected presentations from a conference, Working at the Frontiers of Law and Science: Applications of the Human Genome, held October 2-3, 2003, at the University of North Carolina at Chapel Hill; and links to online resources in genomics, research ethics, ethics in genomics research, and related organizations. The web site was active beginning with the posting of the first module and was maintained throughout the project period. We have also secured agreement to keep the site active for an additional year beyond the project period. AGREE met its primary objective of creating web-based educational modules related to the ethical issues in genetics research. The modules have been disseminated widely. While it is clearly easier to judge the quality of the educational experience
Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.; /Johns Hopkins U.
A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
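The middle tier's job, translating an abstract client request into a database-specific query and returning the rows as an XML datagram, can be sketched in Python with SQLite standing in for the experiment's database; the table, element, and attribute names here are invented for illustration, not taken from the CDF system:

```python
import sqlite3
import xml.etree.ElementTree as ET

def demo_db():
    """In-memory stand-in for the read-only calibration repository."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE calibration (channel INTEGER, gain REAL)")
    conn.executemany("INSERT INTO calibration VALUES (?, ?)",
                     [(1, 0.98), (2, 1.02)])
    return conn

def handle_request(conn, table, run_number):
    """Translate a client request into a query and encode the result set
    as an XML datagram; the client never sees SQL."""
    if table != "calibration":        # whitelist tables; never trust client input
        raise ValueError(f"unknown table: {table}")
    root = ET.Element("datagram", table=table, run=str(run_number))
    for channel, gain in conn.execute("SELECT channel, gain FROM calibration"):
        ET.SubElement(root, "row", channel=str(channel), gain=str(gain))
    return ET.tostring(root, encoding="unicode")
```

Because the reply is a plain HTTP payload, identical requests can be answered from a proxy cache such as Squid without touching the database, which is the scalability point the abstract makes.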
Vigil, Frank; Reeder, Roxana G.
The Factsheets web application was conceived out of the requirement to create, update, publish, and maintain a web site with dynamic research and development (R and D) content. Before creating the site, a requirements discovery process was done in order to accurately capture the purpose and functionality of the site. One of the high priority requirements for the site would be that no specialized training in web page authoring would be necessary. All functions of uploading, creation, and editing of factsheets needed to be accomplished by entering data directly into web form screens generated by the application. Another important requirement of the site was to allow for access to the factsheet web pages and data via the internal Sandia Restricted Network and Sandia Open Network based on the status of the input data. Important to the owners of the web site would be to allow the published factsheets to be accessible to all personnel within the department whether or not the sheets had completed the formal Review and Approval (R and A) process. Once the factsheets had gone through the formal review and approval process, they could then be published both internally and externally based on their individual publication status. An extended requirement and feature of the site would be to provide a keyword search capability to search through the factsheets. Also, since the site currently resides on both the internal and external networks, it would need to be registered with the Sandia search engines in order to allow access to the content of the site by the search engines. To date, all of the above requirements and features have been created and implemented in the Factsheet web application. These have been accomplished by the use of flat text databases, which are discussed in greater detail later in this paper.
Lewis, Barbara; Griffin, Melanie
Librarians have long struggled to find user-friendly mediums to provide meaningful information to patrons using bibliographies, pathfinders, and subject guides with varying degrees of success. Content management systems, such as Springshare's LibGuides, have recently been developed to facilitate the creation of online subject guides. Special…
Hind, Daniel; Wailoo, Allan J.; Sutcliffe, Paul
Abstract Background Sensationalized reporting styles and a distorted framing of health‐care issues in newspapers may trigger inappropriate commissioning decisions. We evaluated UK press coverage of pre‐licensing access to trastuzumab (Herceptin) for early breast cancer as a case study. Methods and findings Newspaper articles published between April 2005 and May 2006 were subjected to content analysis and coded by two researchers for interest groups represented, claims made and sensationalized reporting. Disagreements in coding were resolved by a third researcher. One thousand and ninety published articles were identified in the study period and a 20% sample (n = 218) was included in the content analysis. Most articles (76%, 95% CI 71–82) included claims about the clinical benefits of trastuzumab, and this was significantly higher than those expressing the uncertainty surrounding such benefits (6%, 95% CI 3–9) or those that discussed the potential harms (5%, 95% CI 2–8). Articles were significantly more likely to feature claims made by a breast cancer survivor or family member than any other interest group (P < 0.0001). Almost half of the articles carried some message to the effect that trastuzumab would make the difference between life and death (47%, 95% CI 40–53). Over a quarter (28%, 95% CI 22–34) suggested that trastuzumab is a ‘miracle drug’ or similar. Conclusions The benefits of drugs are highlighted, frequently using sensationalist language, without equal consideration of uncertainty or risks. Health‐care purchasers should express decisions in opportunity cost terms; journalists should give fairer coverage to such arguments. PMID:20673243
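The confidence intervals quoted above are consistent with a normal approximation for a sample proportion; the following quick check is illustrative code, not taken from the paper:

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    return p - z * se, p + z * se

# 76% of the n = 218 sampled articles made clinical-benefit claims
lo, hi = proportion_ci(0.76, 218)
```

Evaluating this gives an interval of roughly 0.70-0.82, matching the reported 71-82% to rounding.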
Blodgett, Cynthia S.
The purpose of this grounded theory study was to examine the process by which people with Mild Traumatic Brain Injury (MTBI) access information on the web. Recent estimates include amateur sports and recreation injuries, non-hospital clinics and treatment facilities, private and public emergency department visits and admissions, providing…
Freeman, Misty Danielle
The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…
Koehler, Wallace; Mincey, Danielle
Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)
Fernández-Alemán, José Luis; Toval, Ambrosio
Bakir, Sena; Toydemir, Gamze; Boyacioglu, Dilek; Beekwilder, Jules; Capanoglu, Esra
Background: Vinegars based on fruit juices could conserve part of the health-associated compounds present in the fruits. However, in general very limited knowledge exists on the consequences of vinegar-making for the different antioxidant compounds from fruit. In this study, vinegars derived from apple and grape were examined. Methods: A number of steps in an industrial vinegar process, starting from the fermentation of the fruit juices to the formation of the final vinegars, were studied. The effect of each of the vinegar processing steps on the content of antioxidants, phenolic compounds and flavonoids was studied by spectroscopic methods and by high-performance liquid chromatography (HPLC). Results: The major observation was that spectrophotometric methods indicate a strong loss of antioxidant phenolic compounds during the transition from fruit wine to fruit vinegar. A targeted HPLC analysis indicates that metabolites such as gallic acid are lost in later stages of the vinegar process. Conclusion: The major conclusion of this work is that major changes occur in phenolic compounds during vinegar making. An untargeted metabolite analysis should be used to reveal these changes in more detail. In addition, the effect of vinegar processing on the bio-accessibility of phenolic compounds was investigated by mimicking the digestive tract in an in vitro set-up. This study is meant to provide insight into the potential of vinegar as a source of health-related compounds from fruit. PMID:27690020
Normalizing 13C values of animal tissue for lipid content is necessary to accurately interpret food web relationships from stable isotope analysis. This is because lipids are 13C-depleted relative to proteins and carbohydrates, and because lipid content varies among speci...
Guo, Yinni; Salvendy, Gavriel
To better address customer satisfaction, a study of what content e-business web sites should contain was conducted. Based on background literature, a content preparation survey of 70 items was developed and completed by 428 white-collar employees of an electronics company in mainland China. The survey aimed at examining the significant content…
Kenyon, Peggy L.
The effect of content interactivity on performance outcomes and satisfaction has been studied by researchers who compared the results of Web-based and computer-based learning to classroom learning. Few scholars have compared the effects of the same content produced at different levels (low and high) of interactivity and the resulting effects. The…
Considering various themes, this study aims to examine the content of the web sites of universities that provide sports management education at the higher education level in Turkey and in England. Within this framework, the websites of the higher education institutions that provide sports management education are analyzed using content analysis…
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
Snow is a key component of the hydrologic cycle in many regions of the world. Despite recent advances in environmental monitoring that are making a wide range of data available, continuous snow monitoring systems that can collect data at high spatial and temporal resolution are not well established yet, especially in inaccessible high-latitude or mountainous regions. The unprecedented availability of user-generated data on the web is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this paper, we contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. Our procedure has the potential for complementing traditional snow-related information, minimizing costs and efforts for obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to several locations where such public images are available. The operational value of the obtained virtual snow indexes is assessed for a real-world water-management problem, the regulation of Lake Como, where we use these indexes for informing the daily operations of the lake. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance.
Omta, Wienand A; van Heesbeen, Roy G; Pagliero, Romina J; van der Velden, Lieke M; Lelieveld, Daphne; Nellen, Mehdi; Kramer, Maik; Yeong, Marley; Saeidi, Amir M; Medema, Rene H; Spruit, Marco; Brinkkemper, Sjaak; Klumperman, Judith; Egan, David A
High-content screening (HCS) can generate large multidimensional datasets and, when aligned with the appropriate data mining tools, it can yield valuable insights into the mechanism of action of bioactive molecules. However, easy-to-use data mining tools are not widely available, with the result that these datasets are frequently underutilized. Here, we present HC StratoMineR, a web-based tool for high-content data analysis. It is a decision-supportive platform that guides even non-expert users through a high-content data analysis workflow. HC StratoMineR is built using MySQL for storage and querying, PHP (Hypertext Preprocessor) as the main programming language, and jQuery for additional user interface functionality. R is used for statistical calculations, logic and data visualizations. Furthermore, C++ and graphics processing unit power are embedded in R by using the rcpp and rpud libraries for operations that are computationally highly intensive. We show that we can use HC StratoMineR for the analysis of multivariate data from a high-content siRNA knock-down screen and a small-molecule screen. It can be used to rapidly filter out undesirable data; to select relevant data; and to perform quality control, data reduction, data exploration, morphological hit picking, and data clustering. Our results demonstrate that HC StratoMineR can be used to functionally categorize HCS hits and, thus, provide valuable information for hit prioritization.
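As an illustration of the kind of filtering and hit picking such a platform automates, the sketch below applies a generic z-score approach to per-well readouts; this is a common HCS convention, not HC StratoMineR's actual code, and the well names are invented:

```python
import statistics

def z_scores(values):
    """Standardize well measurements against the plate mean and spread."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def pick_hits(well_values, threshold=2.0):
    """Flag wells whose standardized readout deviates strongly from the plate."""
    zs = z_scores(list(well_values.values()))
    return {well: z for well, z in zip(well_values, zs) if abs(z) >= threshold}
```

In practice robust statistics (median and MAD) are often preferred over mean and standard deviation, since strong hits themselves distort the plate statistics.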
Teramnus labialis and T. uncinatus are both underutilized legume species. Teramnus labialis is used as food in India while T. uncinatus has potential use in pasture mixes. Photoperiod-sensitive Teramnus accessions were grown in the greenhouse from 2010 to 2011 and evaluated for flavonol content, oil...
A waveguide-on-access-tube (WOAT) TDR sensor was invented and the design optimized through a combination of electromagnetic modeling and several rounds of prototyping and testing in air, water, mixtures of water and ethylene glycol, sand, and silty clay loam soils over a range of water contents and ...
Since the late 1980s, electromagnetic (EM) sensors for determination of soil water content from within nonmetallic access tubes have been marketed as replacements for the neutron moisture meter (NMM); however, the accuracy, variability and physical significance of EM sensor field measurements hav...
The Food and Drug Administration (FDA), Department of Health and Human Services, is correcting a document that appeared in the Federal Register of January 7, 2013 (78 FR 951-953); on page 952, the following correction is made to the notice "Accessible Medical Device Labeling in a Standard Content..."
Federal laws, including Section 508 of the Rehabilitation Act, mandate that people with disabilities have access to the same information that someone without a disability would have. 508 standards cover electronic and information technology (EIT) products.
Wang, Jin; Cao, Xianshuang; Ferchaud, Vanessa; Qi, Yadong; Jiang, Hao; Tang, Feng; Yue, Yongde; Chin, Kit L
The leaves of Hibiscus sabdariffa L. have been used as traditional folk medicines for treating high blood pressure and fever. There are many accessions of H. sabdariffa L. throughout the world. To assess the chemical variations of 31 different accessions of H. sabdariffa L., fingerprinting analysis and quantitation of major flavonoids were performed by high-performance liquid chromatography (HPLC). The HPLC method was validated for linearity, sensitivity, precision, repeatability and accuracy. A quadrupole-time-of-flight mass spectrometry (Q-TOF-MS) was applied for the characterization of major compounds. A total of 9 compounds were identified, including 6 flavonoids and 3 phenolic acids. In the fingerprint analysis, similarity analysis (SA) and principal component analysis (PCA) were used to differentiate the 31 accessions of H. sabdariffa L. Based on the results of PCA and SA, the samples No. 15 and 19 appeared much different from the main group. The total content of five flavonoids varied greatly among different accessions, ranging from 3.35 to 23.30 mg/g. Rutin was found to be the dominant compound and the content of rutin could contribute to chemical variations among different accessions. This study was helpful to understand the chemical variations between different accessions of H. sabdariffa L., which could be used for quality control. © 2015 The Authors Biomedical Chromatography Published by John Wiley & Sons Ltd.
Ajuwon, GA; Popoola, SO
Background The internet is a huge library with an avalanche of information resources, including healthcare information. There are numerous studies on the use of electronic resources by healthcare providers, including medical practitioners; however, there is a dearth of information on the patterns of use of web-based health information resources by resident doctors in Nigeria. This study therefore investigates the influence of internet accessibility and demographic factors on the utilization of web-based health information resources by resident doctors in tertiary healthcare institutions in Nigeria. Methods A descriptive survey design was adopted for this study. The population of the study consisted of medical doctors undergoing residency training in 13 tertiary healthcare institutions in South-West Nigeria. The tertiary healthcare institutions were Federal Medical Centres, University Teaching Hospitals and Specialist Hospitals (Neuropsychiatric and Orthopaedic). A pre-tested, self-administered questionnaire was used for data collection. The Statistical Package for the Social Sciences (SPSS) was used for data analysis. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression analysis. Results The mean age of the respondents was 34 years and males were in the majority (69.0%). A total of 96.1% of respondents had access to the Internet. E-mail (X̄=5.40, SD=0.91), Google (X̄=5.26, SD=1.38) and Yahoo (X̄=5.15, SD=4.44) were used weekly by the respondents. Preparation for seminar/grand round presentations (X̄=8.4, SD=1.92), research (X̄=7.8, SD=2.70) and communication (X̄=7.6, SD=2.60) were ranked high as purposes for use of web-based information resources. There is a strong, positive and significant relationship between internet accessibility and utilization of web-based health information resources (r=0.628, p<0.05). Internet accessibility (B=0.911) and demographic factors: gender (B=−2.027), designation (B=−0.343) educational
Versteeg, R.; Richardson, A.; Thomas, S.; Lu, B.; Neto, J.; Wheeler, M.; Rowe, T.; Parashar, M.; Ankeny, M.
Information on subsurface processes is required for a broad range of applications, including site remediation, groundwater management, fossil fuel production and CO2 sequestration. Data on these processes are obtained from diverse sensor networks, which include physical, hydrological and chemical sensors and semi-permanent geophysical sensors (mainly seismic and resistivity). Currently, processing is done by specialists through the use of commercial and research software packages such as numerical inverse and forward models, statistical data analysis software and visualization and data presentation packages. Information is presented to stakeholders as tables, images and reports. Processing steps, data and assumptions used for information generation are mostly opaque to end users. As data migrate between applications, the steps taken in each application (e.g. in data reduction) are often only partly documented, resulting in irreproducible results. In this approach, interactive tuning of data processing in a systematic way (e.g. changing model parameters, visualization parameters or data used) or using data processing as a discovery tool is de facto impossible. We implemented a web-accessible scientific workflow system for subsurface performance monitoring. This system integrates distributed, automated data acquisition from autonomous sensor networks with server-side data management and information visualization through flexible browser-based data access tools. Webservices are used for communication with the sensor networks and interaction with applications. This system was originally developed for a monitoring network at the Gilt Edge Mine Superfund site, but has now been implemented for a range of different sensor networks of different complexity. The workflow framework allows for rapid and easy integration, in a modular, transparent and reproducible manner, of a multitude of existing applications for data analysis and processing. By embedding applications in webservice
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
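The crawl-then-index loop described at the start of the chapter can be reduced to a few lines over an in-memory "web" (a dict mapping URL to HTML); the regex-based link extraction and tag stripping are deliberate simplifications of what a production crawler does:

```python
import re
from collections import defaultdict

LINK_RE = re.compile(r'href="([^"]+)"')   # links the crawler follows
WORD_RE = re.compile(r"[a-z0-9]+")        # keywords the indexer stores

def crawl(pages, seed):
    """Breadth-first crawl over an in-memory web, passing each fetched
    page to the indexer and each discovered link back to the crawler."""
    index = defaultdict(set)              # keyword -> set of URLs
    seen, frontier = set(), [seed]
    while frontier:
        url = frontier.pop(0)
        if url in seen or url not in pages:
            continue
        seen.add(url)
        html = pages[url]
        text = re.sub(r"<[^>]+>", " ", html)          # crude tag stripping
        for word in WORD_RE.findall(text.lower()):
            index[word].add(url)                      # indexer step
        frontier.extend(LINK_RE.findall(html))        # crawler step
    return index
```

Pages never linked from the seed, or served only through forms and scripts, are simply never visited, which is exactly how Deep Web content stays invisible to this kind of indexer.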
During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in: (1) server installation; (2) server configuration; (3) Hyrax aggregation capabilities; (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS); (5) support for DAP4; (6) use and extension of server-side computational capabilities; and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, data providers and, notably, due to the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.
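For clients that speak DAP2 directly, a data request is just a URL whose constraint expression subsets the variable on the server side. A sketch of composing such a URL follows; the host, dataset, and variable names are hypothetical:

```python
from urllib.parse import quote

def dap2_ascii_url(base, dataset, variable, slices):
    """Build a DAP2 ASCII request with a hyperslab constraint such as
    SST[0:0][0:44][0:89]; each (lo, hi) pair subsets one array dimension."""
    hyperslab = "".join(f"[{lo}:{hi}]" for lo, hi in slices)
    # keep the DAP2 constraint characters [, ], : unescaped
    constraint = quote(f"{variable}{hyperslab}", safe="[]:")
    return f"{base}/{dataset}.ascii?{constraint}"
```

Replacing the `.ascii` suffix selects other response forms (e.g. binary data or dataset metadata), which is what makes the same server usable from very different clients.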
Long, L. Rodney; Pillemer, Stanley R.; Lawrence, Reva C.; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
At the Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), we are developing a prototype multimedia database system to provide World Wide Web access to biomedical databases. WebMIRS (Web-based Medical Information Retrieval System) will allow access to databases containing text and images and will allow database query by standard SQL, by image content, or by a combination of the two. The system is being developed in the form of Java applets, which will communicate with the Informix DBMS on an NLM Sun workstation running the Solaris operating system. The system architecture will allow access from any hardware platform that supports a Java-enabled Web browser, such as Netscape or Internet Explorer. Initial databases will include data from two national health surveys conducted by the National Center for Health Statistics (NCHS), and will include x-ray images from those surveys. In addition to describing in-house research in database access systems, this paper describes ongoing work toward querying by image content. Image content searching will include the capability to search for x-ray images similar to an input image with respect to vertebral morphometry, used to characterize features such as fractures and disc space narrowing.
Bauer, R.; Scambos, T.; Haran, T.; Maurer, J.; Bohlander, J.
A prototype of the Antarctic Cryosphere Access Portal (A-CAP) has been released for public use. Developed at the National Snow and Ice Data Center (NSIDC) Antarctic Glaciological Data Center (AGDC), A-CAP aims to be a geo-visualization and data download tool for AGDC data and other Antarctic-wide parameters, including glaciology, ice core data, snow accumulation, satellite imagery, digital elevation models (DEMs), sea ice concentration, and many other cryosphere-related scientific measurements. The user can zoom in to a specific region as well as overlay coastlines, placenames, latitude/longitude, and other geographic information. In addition to providing an interactive Web interface, customizable A-CAP map images and source data are also accessible via specific Uniform Resource Locator strings (URLs) to a standard suite of Open Geospatial Consortium (OGC) services: Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS). The international specifications of these services provide an interoperable framework for sharing maps and geospatial data over the Internet, allowing A-CAP products to be easily exchanged with other data centers worldwide and enabling remote access for users through OGC-compliant software applications such as ArcGIS, Google Earth, ENVI, and many others. A-CAP is built on MapServer, an Open Source development environment for building spatially-enabled Internet applications. MapServer uses data sets that have been formatted as GeoTIFF or Shapefile to allow rapid sub-setting and over-the-Web presentation of large geospatial data files, and has no requirement for a user-installed client software package (besides a Web browser).
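The URL-based access to A-CAP map images follows the OGC WMS request conventions. The sketch below builds a WMS 1.1.1 GetMap URL; the endpoint and layer name are invented for illustration and are not the actual A-CAP service addresses.

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size, srs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.1.1 GetMap request URL for one layer and bounding box."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "SRS": srs, "BBOX": ",".join(map(str, bbox)),  # minx,miny,maxx,maxy
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

# Hypothetical A-CAP endpoint and layer name, for illustration only.
url = wms_getmap_url("http://nsidc.example/acap/wms", "sea_ice_concentration",
                     bbox=(-180, -90, 180, -60), size=(800, 400))
print(url)
```

Because the request is a plain URL, any OGC-compliant client (ArcGIS, Google Earth, ENVI, or a browser) can retrieve the rendered map without A-CAP-specific software.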
Dutra, Jayne E.
This viewgraph presentation reviews the redesign of the Daily Planet newsletter as a content management implementation project. The site is an internal news site that acts as a communication vehicle for a large volume of content. The objectives for the site redesign were: (1) clean visual design; (2) facilitation of publication processes; (3) a more efficient maintenance mode; (4) automated publishing to the internal portal; (5) better navigation through improved site IA; (6) archiving and retrieval functionality; and (7) a return to basics on fundamental business goals. Content management is a process, not a software package.
Kuna, Samuel T.; Shuttleworth, David; Chi, Luqi; Schutte-Rodin, Sharon; Friedman, Eliot; Guo, Hengyi; Dhand, Sandeep; Yang, Lin; Zhu, Jingsan; Bellamy, Scarlett L.; Volpp, Kevin G.; Asch, David A.
Study Objectives: We tested whether providing adults with obstructive sleep apnea (OSA) with daily Web-based access to their positive airway pressure (PAP) usage over 3 mo with or without a financial incentive in the first week improves adherence and functional outcomes. Setting: Academic- and community-based sleep centers. Participants: One hundred thirty-eight adults with newly diagnosed OSA starting PAP treatment. Interventions: Participants were randomized to: usual care, usual care with access to PAP usage, or usual care with access to PAP usage and a financial incentive. PAP data were transmitted daily by wireless modem from the participants' PAP unit to a website where hours of usage were displayed. Participants in the financial incentive group could earn up to $30/day in the first week for objective PAP use ≥ 4 h/day. Measurements and Results: Mean hours of daily PAP use in the two groups with access to PAP usage data did not differ from each other but was significantly greater than that in the usual care group in the first week and over 3 mo (P < 0.0001). Average daily use (mean ± standard deviation) during the first week of PAP intervention was 4.7 ± 3.3 h in the usual care group, and 5.9 ± 2.5 h and 6.3 ± 2.5 h in the Web access groups with and without financial incentive respectively. Adherence over the 3-mo intervention decreased at a relatively constant rate in all three groups. Functional Outcomes of Sleep Questionnaire change scores at 3 mo improved within each group (P < 0.0001) but change scores of the two groups with Web access to PAP data were not different than those in the control group (P > 0.124). Conclusions: Positive airway pressure adherence is significantly improved by giving patients Web access to information about their use of the treatment. Inclusion of a financial incentive in the first week had no additive effect in improving adherence. Citation: Kuna ST, Shuttleworth D, Chi L, Schutte-Rodin S, Friedman E, Guo H, Dhand S, Yang
de Alarcón, P A; Gupta, A; Carazo, J M
Nowadays we are experiencing a remarkable growth in the number of databases that have become accessible over the Web. However, in a certain number of cases, for example, in the case of BioImage, this information is not of a textual nature, thus posing new challenges in the design of tools to handle these data. In this work, we concentrate on the development of new mechanisms aimed at "querying" these databases of complex data sets by their intrinsic content, rather than by their textual annotations only. We concentrate our efforts on a subset of BioImage containing 3D images (volumes) of biological macromolecules, implementing a first prototype of a "query-by-content" system. In the context of databases of complex data types the term query-by-content makes reference to those data modeling techniques in which user-defined functions aim at "understanding" (to some extent) the informational content of the data sets. In these systems the matching criteria introduced by the user are related to intrinsic features concerning the 3D images themselves, hence, complementing traditional queries by textual key words only. Efficient computational algorithms are required in order to "extract" structural information of the 3D images prior to storing them in the database. Also, easy-to-use interfaces should be implemented in order to obtain feedback from the expert. Our query-by-content prototype is used to construct a concrete query, making use of basic structural features, which are then evaluated over a set of three-dimensional images of biological macromolecules. This experimental implementation can be accessed via the Web at the BioImage server in Madrid, at http://www.bioimage.org/qbc/index.html.
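The query-by-content idea above (matching volumes by intrinsic structural features rather than text annotations) can be sketched as follows. The two descriptors used here, occupied-voxel fraction and z-centroid, are invented stand-ins for illustration, not the actual BioImage feature set, and the toy 2x2x2 volumes stand in for real macromolecular density maps.

```python
import math

def features(volume, threshold=0.5):
    """Illustrative structural descriptors of a 3D density map:
    occupied-voxel fraction and the z-centroid of occupied voxels."""
    occupied, zsum, total = 0, 0.0, 0
    for z, plane in enumerate(volume):
        for row in plane:
            for v in row:
                total += 1
                if v > threshold:
                    occupied += 1
                    zsum += z
    frac = occupied / total
    zc = zsum / occupied if occupied else 0.0
    return (frac, zc)

def query_by_content(query_vol, database):
    """Rank database entries by Euclidean distance to the query in feature space."""
    q = features(query_vol)
    return sorted(database, key=lambda item: math.dist(q, features(item[1])))

# Tiny 2x2x2 toy volumes standing in for macromolecular density maps.
dense  = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
sparse = [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]
db = [("sparse_map", sparse), ("dense_map", dense)]
ranked = query_by_content(dense, db)
print([name for name, _ in ranked])  # most similar volume first
```

As the abstract notes, the expensive step in a real system is extracting such features once, at ingest time, so that queries reduce to cheap comparisons in feature space.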
The WWW is now in widespread use for delivering on-line learning content in many large-scale education settings. Given such widespread usage, it is feasible to accumulate data concerning the most useful learning experiences of past students and share them with future students. Browsing events that depict how past students utilized the learning…
Given the challenging economic climate in the United States, many academics are looking to open-access electronic textbooks as a way to provide students with traditional textbook content at a more financially advantageous price. Open access refers to "the free and widely available information throughout the World Wide Web. Once an article's…
.... 62.13 Section 62.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) CRITERIA AND PROCEDURES FOR EMERGENCY ACCESS TO NON-FEDERAL AND REGIONAL LOW-LEVEL WASTE DISPOSAL FACILITIES Request for a Commission... radioactive waste in a licensed storage facility; (3) Obtaining access to a disposal facility by...
Lonergan, James M.
The "digital divide," the separation between those with access to new technologies and those without, is seen by many as one of the leading equity issues in the United States. Computer and Internet access varies widely across the United States, with better educated people, those with more money, and whites more likely to have Internet…
Ruppar, Andrea L.; Allcock, Heather; Gonsier-Gerdin, Jean
In this review, we applied Bronfenbrenner's ecological systems theory to examine factors that support or restrict access to the general curriculum for students with significant disabilities. We organize the literature in relationship to factors within the micro-, meso-, macro-, exo-, and chronosystems that influence decisions about access to the…
Stark, David; Kannappan, S. J.; Wei, L. H.; Baker, A. J.; Haynes, M. P.; Giovanelli, R.; Heitsch, F.; RESOLVE Team; ALFALFA Team
The RESOLVE (REsolved Spectroscopy Of a Local VolumE) Survey is a census of gas, stars, and dark matter in 1500 galaxies down to dwarf-scale baryonic masses of 10^9 Msun, occupying a range of cluster, group, and filament environments in the local cosmic web. We discuss strategies to estimate the gas mass in HI, H2, and warmer phases. RESOLVE falls largely within the footprint of the ongoing ALFALFA survey, allowing us to acquire accurate HI data for much of the sample. Any missing HI masses will be estimated from color and environment data, based on trends calibrated using the ALFALFA data set. Initially, our constraints on the molecular gas component will be largely indirect, based on either AKARI FIR data or a new technique presented here that links CO-derived H2/HI ratios to stellar-mass normalized color gradients. We discuss additional strategies under development to better measure molecular gas and constrain the mass in warmer phases. In particular, we describe observational constraints on the nature of additional gas that is detected dynamically in a sample of very blue, gas-dominated galaxies, possibly representing a warm-hot phase or a low-metallicity molecular component. Obtaining a full gas census for the RESOLVE survey will allow us to model gas phase transitions and star formation, specifically examining how baryonic mass component ratios and conversion timescales depend on galaxy mass and environment.
Noel-Levitz, Inc, 2009
Have you updated your Web site today? Is it possible that answering "yes" to this simple question is the key to the success of your marketing and recruiting efforts? In the current recruitment arena, the ability to update and maintain this one high-value asset (your Web site) might be the key to the potency of your institutional…
Sun, Ping; Unger, Jennifer B; Palmer, Paula H; Gallaher, Peggy; Chou, Chih-Ping; Baezconde-Garbanati, Lourdes; Sussman, Steve; Johnson, C Anderson
The World Wide Web (WWW) offers a distinct capability for delivering interventions tailored to the individual's characteristics. To fine-tune the tailoring process, studies are needed to explore how Internet accessibility and usage are related to demographic, psychosocial, behavioral, and other health-related characteristics. This study was based on a cross-sectional survey of 2373 7th-grade students of various ethnic groups in Southern California. Measures of Internet use included Internet use at school or at home, email use, chat-room use, and Internet favoring. Logistic regressions were conducted to assess the associations of Internet use with selected demographic, psychosocial, and behavioral variables and self-reported health status. The proportions of students who could access the Internet at school or at home were 90% and 40%, respectively. Nearly all (99%) of the respondents could access the Internet either at school or at home. Higher SES and Asian ethnicity were associated with higher Internet use. Among those who could access the Internet, and after adjusting for the selected demographic and psychosocial variables, depression was positively related to chat-room use and to using the Internet for longer than 1 hour per day at home, and hostility was positively related to Internet favoring (all ORs = 1.2 for +1 STD, p < 0.05). Less parental monitoring and more unsupervised time were positively related to email use, chat-room use, and at-home Internet use (ORs for +1 STD ranged from 1.2 to 2.0, all p < 0.05), but not to at-school Internet use. Substance use was positively related to email use, chat-room use, and at-home Internet use (ORs for "used" vs. "not used" ranged from 1.2 to 4.0, p < 0.05). Self-reported health problems were associated with higher levels of Internet use at home but lower levels of Internet use at school. More physical activity was related to more email use (OR = 1.3 for +1 STD), chat-room use (OR = 1.2 for +1 STD), and at school
Ayabe, Yoshiko; Kanasashi, Tsutomu; Hijii, Naoki; Takenaka, Chisato
The accident at the Fukushima Dai-ichi nuclear power plant seriously contaminated a large area in northeast Japan with a large amount of radioactive material. Consequently, various organisms, including arthropods, in the ecosystem have been contaminated with radiocesium ((137)Cs) through the food chain. We previously showed that the web spider Nephila clavata was contaminated with (137)Cs and that the level of contamination, which varied among spider individuals, was independent of the amount of prey consumed. The present study aimed to clarify the mechanisms that could determine the level of (137)Cs contamination in N. clavata. We first demonstrated the patterns of contents of over 30 elements in N. clavata that were collected at two forest sites (PS and ES) in Fukushima and then focused on the relationships between the contents of the alkali metals Li, Na, K, and Rb and the accumulation of (137)Cs in the spiders; Cs is an alkali metal and is expected to act similarly to Li, Na, K, and Rb. We also focused on the content of the non-alkali element, Cu, which is an essential element for oxygen transport in spiders. We found that Na content correlated positively with (137)Cs accumulation at both sites, which suggested that (137)Cs accumulation in N. clavata was related with the dynamics of Na. The K-, Rb-, and Cu-(137)Cs relationships were site specific; the relationships were significant at site PS, but not significant at site ES. Factors causing the site specific relationships and the probable pathway for (137)Cs transfer from soil to plants and then to higher trophic levels are discussed in terms of the transfer processes of the alkali metals.
Areeda, J. S.; Smith, J. R.; Lundgren, A. P.; Maros, E.; Macleod, D. M.; Zweizig, J.
Tong, Chuan; Chen, Yaling; Tang, Fufu; Xu, Feifei; Huang, Yan; Chen, Hao; Bao, Jinsong
Starch physicochemical properties determine the eating and cooking quality of rice. The genetic diversity in the apparent amylose content (AAC) and pasting viscosity parameters of 20 geographically diverse rice accessions was investigated. AAC and pasting viscosities were found to differ widely among accessions, but each accession performed relatively stably across two environments. Analysis of variance (ANOVA) indicated that all traits were predominantly controlled by genotypic variance, but the genotype × environment interaction effects were also significant, except for AAC and PT. Significant correlations were found for each parameter between the 2 years (P < 0.001). Association mapping identified a total of 22 main-effect quantitative trait loci (QTLs) responsible for all traits except CPV. This study showed that the starch physicochemical properties of rice are highly stable and mainly controlled by genetic factors, and it gives insight into the molecular improvement of eating quality using marker-assisted breeding with the identified QTLs/genes.
Tariq, Amina; Richardson, Lauren; Byrne, Mary; Robinson, Maureen; Li, Ling; Westbrook, Johanna I; Baysari, Melissa T
Background Medication is the most common intervention in health care, and written medication information can affect consumers’ medication-related behavior. Research has shown that a large proportion of Australians search for medication information on the Internet. Objective To evaluate the medication information content, based on consumer medication information needs, and the usability of 4 Australian health websites: Better Health Channel, myDr, healthdirect, and NPS MedicineWise. Methods To assess website content, the most common consumer medication information needs were identified using (1) medication queries to the healthdirect helpline (a telephone helpline available across most of Australia) and (2) the most frequently used medications in Australia. The most frequently used medications were extracted from Australian government statistics on the use of subsidized medicines in the community and the National Census of Medicines Use. Each website was assessed to determine whether it covered or partially covered information and advice about these medications. To assess website usability, 16 consumers participated in user testing wherein they were required to locate 2 pieces of medication information on each website. Brief semistructured interviews were also conducted with participants to gauge their opinions of the websites. Results Information on prescription medication was covered more comprehensively on all websites (3 of 4 websites covered 100% of the information) than information on nonprescription medication (websites covered 0%-67% of the information). Most websites relied on consumer medicines information leaflets to convey prescription medication information to consumers. Information about prescription medication classes was less comprehensive, with no website providing all the information examined about antibiotics and antidepressants. Participants (n=16) were able to locate medication information on the websites in most cases (accuracy ranged from 84% to 91%). However, a number of
Kunicki, T.; Blodgett, D. L.; Booth, N. L.; Suftin, I.; Walker, J. I.
Environmental modelers from fields of study including climatology, hydrology, geology, and ecology need common, cross-discipline data sources and processing methods to enable working with large remote datasets. Watershed modelers, for example, need downscaled climate model data and land-cover data summaries to predict streamflow for various future climate scenarios. In turn, ecological modelers need the predicted streamflow conditions to understand how the habitat of biotic communities might be affected. The U.S. Geological Survey Geo Data Portal project addresses these needs by providing a flexible application built on open-standard Web services that integrates and streamlines data retrieval and analysis. Open Geospatial Consortium Web Processing Services (WPS) were developed to allow interoperable access to data from servers delivering both de facto standard Climate and Forecast (CF) convention datasets and OGC standard Web Coverage Services (WCS). The Geo Data Portal can create commonly needed derivatives of data in numerous formats. As an example use case, a user can upload a shapefile specifying a region of interest (e.g., a watershed), pick a climate simulation, and retrieve a spreadsheet of predicted daily maximum temperature in that region up to 2100. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data. The Geo Data Portal resulting from this project will be demonstrated accessing a range of climate and landscape data.
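The core reduction the Geo Data Portal performs, turning a gridded time series plus a polygon into one statistic per time step, can be sketched as below. The grid values, coordinates, and the rectangular "watershed" predicate are invented for illustration; a real implementation would read a shapefile and a NetCDF-CF dataset and do a true point-in-polygon (or area-weighted) test.

```python
def polygon_time_series(grid_ts, cell_coords, contains):
    """For each time step, average the grid cells whose centers fall inside the polygon.

    grid_ts:     list of 2-D grids (one per time step)
    cell_coords: parallel 2-D grid of (lon, lat) cell centers
    contains:    predicate (lon, lat) -> bool standing in for a point-in-polygon test
    """
    series = []
    for grid in grid_ts:
        vals = [grid[i][j]
                for i in range(len(grid)) for j in range(len(grid[0]))
                if contains(*cell_coords[i][j])]
        series.append(sum(vals) / len(vals))
    return series

# Invented 2x2 grid of daily maximum temperature for two time steps.
coords = [[(-90.5, 43.5), (-89.5, 43.5)], [(-90.5, 42.5), (-89.5, 42.5)]]
temps = [[[30.0, 31.0], [29.0, 28.0]], [[32.0, 33.0], [30.0, 29.0]]]
# "Watershed" stand-in: the western half of the grid.
west = lambda lon, lat: lon < -90
print(polygon_time_series(temps, coords, west))  # one areal mean per time step
```

Doing this reduction server-side is the point of the WPS design: the client uploads only the polygon and downloads only the short time series, never the full gridded dataset.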
Vines, Aleksander; Hamre, Torill; Lygre, Kjetil
The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for the Environment and Security (GMES) programme. A key feature of the system is its ability to compare data from different datasets, including an option to upload one's own netCDF files. The user can, for example, search an in situ database for different variables (such as temperature, salinity, different elements, light, specific plankton types, or rate measurements) using different criteria (bounding box, date/time, depth, Longhurst region, cruise/transect) and compare the data with model data. The user can choose model data or Earth observation data from a list, or upload his/her own netCDF files to use in the comparison. The data can be visualized on a map or as graphs and plots (e.g., time series and property-property plots), or downloaded in various formats. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error), and biological, environmental, and climatic indicators for a range of stakeholders, such as scientists, policy makers, and environmental managers. We have implemented a Web-based GIS (Geographical Information System) and want to demonstrate its use. The tool is designed for a wide range of users: novice users, who want a simple way to get basic information about the current state of the marine planktonic ecosystem by utilizing predefined queries and comparisons with models; intermediate-level users, who want to explore the database on their own and customize the predefined setups; and advanced users, who want to perform complex queries and
Johnson, G. W.; Gonzalez, J.; Brady, J. J.; Gaylord, A.; Manley, W. F.; Cody, R.; Dover, M.; Score, R.; Garcia-Lavigne, D.; Tweedie, C. E.
ARMAP 3D allows users to dynamically interact with information about U.S. federally funded research projects in the Arctic. This virtual globe allows users to explore data maintained in the Arctic Research & Logistics Support System (ARLSS) database, providing a valuable visual tool for science management and logistical planning: ascertaining who is doing what type of research, and where. Users can “fly to” study sites, view receding glaciers in 3D, and access linked reports about specific projects. Custom “Search” tasks have been developed to query by researcher name, discipline, funding program, place name, and year, and to display results on the globe with links to detailed reports. ARMAP 3D was created with ESRI’s free ArcGIS Explorer (AGX) build 900, an updated application from build 500. AGX applications allow users to integrate their own spatial data with various data layers provided by ArcGIS Online (http://resources.esri.com/arcgisonlineservices). Users can add many types of data, including OGC web services, without any special data translators or costly software. ARMAP 3D is part of the ARMAP suite (http://armap.org), a collection of applications that support Arctic science tools for users of various levels of technical ability to explore information about field-based research in the Arctic. ARMAP is funded by the National Science Foundation Office of Polar Programs Arctic Sciences Division and is a collaborative development effort between the Systems Ecology Lab at the University of Texas at El Paso, Nuna Technologies, the INSTAAR QGIS Laboratory, and CH2M HILL Polar Services.
Tosaka, Yuji; Weng, Cathy
Content-enriched metadata in bibliographic records is considered helpful to library users in identifying and selecting library materials for their needs. The paper presents a study, using circulation data from a medium-sized academic library, of the effect of content-enriched records on library materials usage. The study also examines OPAC search…
Jani, Jayshree S.; Pierce, Dean; Ortiz, Larry; Sowbel, Lynda
This article provides an assessment of the current situation in social work education regarding the teaching of content on diversity, with a focus on implications for social work theory, practice, and education. The article provides a critical analysis of the historical development of approaches to teaching diversity content in social work…
Description: Obtain Surface meteorology and Solar Energy (SSE) data Available for locations, global/regional areas, ... Provided for 1° latitude by 1° longitude grid cells over the 22-year period July 1983 through June 2005 ...
Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel
The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content, however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…
This, then, is the current status of the project: since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was targeted only at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. We therefore made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe Photoshop and Illustrator files in their native formats.
Gregory, Michelle L.; Payne, Deborah A.; McColgin, Dave; Cramer, Nick O.; Love, Douglas V.
In recent years, one of the advances of the World Wide Web is social media and one of the fastest growing aspects of social media is the blogosphere. Blogs make content creation easy and are highly accessible through web pages and syndication. With their growing influence, a need has arisen to be able to monitor the opinions and insight revealed within their content. In this paper we describe a technical approach for analyzing the content of blog data using a visual analytic tool, IN-SPIRE, developed by Pacific Northwest National Laboratory. We highlight the capabilities of this tool that are particularly useful for information gathering from blog data.
The NASA ADS Abstract Service and the Distributed Astronomy Digital Library [and] Project Soup: Comparing Evaluations of Digital Collection Efforts [and] Cross-Organizational Access Management: A Digital Library Authentication and Authorization Architecture [and] BibRelEx: Exploring Bibliographic Databases by Visualization of Annotated Content-based Relations [and] Semantics-Sensitive Retrieval for Digital Picture Libraries [and] Encoded Archival Description: An Introduction and Overview.
Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.
Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…
... within § 62.1(c) of this part; (e) The low-level waste generation facility(ies) producing the waste for... limited to— (i) Type of waste (e.g. solidified oil, scintillation fluid, failed equipment); (ii) Principal... EMERGENCY ACCESS TO NON-FEDERAL AND REGIONAL LOW-LEVEL WASTE DISPOSAL FACILITIES Request for a...
Morgan, Tannis; Carey, Stephen
Two of the major challenges to international students' right of access to higher education are geographical/economic isolation and academic literacy in English (Carey, 1999; Hamel, 2007). The authors propose that adopting open course models in traditional universities, through blended or online delivery, can offer benefits to the institutions and…
Hawley, Jesse; Simpson, Stephen J; Wilder, Shawn M
The nutritional composition of diets can vary widely in nature and have large effects on the growth, reproduction and survival of animals. Many animals, especially herbivores, will tightly regulate the nutritional composition of their body, which has been referred to as nutritional homeostasis. We tested how experimental manipulation of the lipid and protein content of live prey affected the nutrient reserves and subsequent diet regulation of web-building spiders, Argiope keyserlingi. Live locusts were injected with experimental solutions containing specific amounts of lipid and protein and then fed to spiders. The nutrient composition of the spiders' bodies was directly related to the nutrient composition of the prey on which they fed. We then conducted an experiment where spiders were fed either high lipid or high protein prey and subsequently provided with two large unmanipulated locusts. Prior diet did not affect the amount or ratio of lipid and protein ingested by spiders when feeding on unmanipulated prey. Argiope keyserlingi were flexible in the storage of lipid and protein in their bodies and did not bias their extraction of nutrients from prey to compensate for previously biased diets. Some carnivores, especially those that experience frequent food limitation, may be less likely to strictly regulate their body composition than herbivores because food limitation may encourage opportunistic ingestion and assimilation of nutrients.
Yasini, Mobin; Duclos, Catherine; Lamy, Jean-Baptiste; Venot, Alain
Laboratory tests are not always prescribed appropriately. Guidelines for some important laboratory tests have been developed by expert panels in the Parisian region to maximize the appropriateness of laboratory medicine. However, these recommendations are not frequently consulted by physicians and nurses. We developed a system facilitating consultation of these guidelines, to increase their usability. Elements of information contained in these documents were identified and included in recommendations of different categories. UML modeling was used to represent these categories and their relationships to each other in the guidelines. We used the generated model to implement a computerized interface. The prototype interface, based on web technology, was found to be rapid and easy to use. By clicking on provided keywords, information about the subject sought is highlighted while the entire text of the guideline is retained on-screen.
Blodgett, D. L.; Walker, J. I.; Read, J. S.
The USGS Geo Data Portal (GDP) project started in 2010 with the goal of providing climate and landscape model output data to hydrology and ecology modelers in model-ready form. The system takes a user-specified collection of polygons and a gridded time series dataset and returns a time series of spatial statistics for each polygon. The GDP is designed for scalability and is generalized such that any data, hosted anywhere on the Internet adhering to the NetCDF-CF conventions, can be processed. Five years into the project, over 600 unique users from more than 200 organizations have used the system's web user interface and some datasets have been accessed thousands of times. In addition to the web interface, python and R client libraries have seen steady usage growth and several third-party web applications have been developed to use the GDP for easy data access. Here, we will present lessons learned and improvements made after five years of operation of the system's user interfaces, processing server, and data holdings. A vision for the future availability and processing of massive climate and landscape data will be outlined.
The Web is growing and changing from a paradigm of static publishing to one of participation and interaction. This change has implications for people with disabilities who rely on access to the Web for employment, information, entertainment, and increased independence. The interactive and collaborative nature of Web 2.0 can present access problems for some users. There are some best practices which can be put in place today to improve access. New specifications such as Accessible Rich Internet Applications (ARIA) and IAccessible2 are opening the doors to increasing the accessibility of Web 2.0 and beyond.
Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol
Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng
AIM To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. METHODS Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a three-tier browser/server eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol. RESULTS From any client system with a web browser, clinicians could log in to the eye-PACS to view fundus images and reports. A structured report saved as a PDF/HTML multipurpose internet mail extensions (MIME) type, with a reference link to the relevant fundus image expressed in WADO syntax, could provide sufficient information for clinicians. Functions provided by the open-source Oviyam viewer could be used to query, zoom, move, measure, and view DICOM fundus images. CONCLUSION Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology. PMID:24392341
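The WADO syntax the abstract mentions is an HTTP GET scheme defined in DICOM PS3.18 (WADO-URI): a base URL plus query parameters identifying the study, series, and object. The sketch below builds such a request; the endpoint and UIDs are made-up placeholders, and only the parameter names follow the standard.

```python
# Illustrative WADO-URI request construction for a fundus image.
# Base URL and UIDs are invented; parameter names (requestType, studyUID,
# seriesUID, objectUID, contentType) follow the WADO-URI scheme.
from urllib.parse import urlencode

def wado_url(base, study_uid, series_uid, object_uid,
             content_type="image/jpeg"):
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type,
    }
    return base + "?" + urlencode(params)

url = wado_url("https://pacs.example.org/wado",
               "1.2.840.113619.2.1", "1.2.840.113619.2.1.1",
               "1.2.840.113619.2.1.1.1")
print(url)
```

Requesting `contentType=application/dicom` instead of a rendered JPEG returns the original DICOM object, which is how a report can link back to the source image.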
Newman, R. L.; Lindquist, K. G.; Clemesha, A.; Vernon, F. L.
Since April 2004 the EarthScope USArray seismic network has grown to over 400 broadband stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform-agnostic, low-latency web-based applications. We present a framework of tools that interface between the World Wide Web and Boulder Real Time Technologies' Antelope Environmental Monitoring System data acquisition and archival software. These tools provide audiences ranging from network operators and geoscience researchers to funding agencies and the general public with comprehensive information about the experiment, from network-wide to station-specific metadata, state-of-health metrics, event detection rates, archival data, and dynamic report generation over a station's two-year life span. Leveraging open-source website development frameworks for both the server side (Perl, Python, and PHP) and client side (Flickr, Google Maps/Earth, and jQuery) facilitates the development of a robust, extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web standards.
This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.
Botvin, Judith D
Allina Hospitals & Clinics, Minneapolis, receives increased visitors after improving its Web site, Medformation.com. The system is one of those named by Hospitals & Health Networks as "one of the 100 Most Wired Hospitals."
Hutchens, Chad; Clark, Jason
At their core, XML feeds are content-delivery vehicles. This fact has not always been highlighted in library conversations surrounding RSS and ATOM. The authors have looked to extend the conversation by offering a proof of concept application using RSS as a means to deliver all types of library data: PDFs, docs, images, video--to people where and…
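The delivery mechanism the authors describe relies on the RSS `<enclosure>` element, which attaches an arbitrary media file to a feed item. A minimal sketch of generating such a feed follows; the library name, URLs, and item data are hypothetical.

```python
# Minimal RSS 2.0 feed where each item carries an <enclosure> pointing
# at a library asset (PDF, image, video). All URLs/titles are invented.
import xml.etree.ElementTree as ET

def build_feed(title, link, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for it in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = it["title"]
        # enclosure requires url, length (bytes), and MIME type
        ET.SubElement(item, "enclosure", url=it["url"],
                      length=str(it["length"]), type=it["type"])
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("Library new acquisitions", "https://lib.example.edu",
                  [{"title": "Annual report",
                    "url": "https://lib.example.edu/report.pdf",
                    "length": 204800, "type": "application/pdf"}])
print(feed)
```

Any standard feed reader will offer the enclosure for download, which is what makes RSS usable as a general content-delivery vehicle rather than a headlines-only format.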
Lorenz, Martin M; Hayot Carbonero, Christine; Smith, Lydia; Udén, Peter
This study compared 38 sainfoin and 2 Lotus accessions with respect to their tannin contents, N buffer solubility, and in vitro protein degradation. Tannin contents were measured by a protein precipitation method using either bovine serum albumin or Rubisco and by the colorimetric HCl/butanol method. Precipitation of bovine serum albumin and Rubisco was highly correlated (R(2) = 0.939). Correlations between the protein precipitation variants and the HCl/butanol method were relatively low (R(2) < 0.6). Protein degradation was measured at 4 h of incubation in an inhibited in vitro system and could not be explained by any of the tannin assays (R(2) < 0.03) and only partially by N buffer solubility (R(2) ≤ 0.433). Decisive factors other than the quantity of tannins or their ability to precipitate proteins must be considered. Resistance of soluble protein toward degradation can possibly be caused by tannin protein binding.
Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo
Florczyk, A. J.; Nogueras-Iso, J.; Zarazaga-Soria, F. J.; Béjar, R.
Orthoimages are essential in many Web applications to facilitate the background context that helps to understand other georeferenced information. Catalogues and service registries of Spatial Data Infrastructures do not necessarily register all the services providing access to imagery data on the Web, and it is not easy to automatically identify whether the data offered by a Web service are directly imagery data or not. This work presents a method for an automatic detection of the orthoimage layers offered by Web Map Services. The method combines two types of heuristics. The first one consists in analysing the text in the capabilities document. The second type is content-based heuristics, which analyse the content offered by the Web Map Service layers. These heuristics gather and analyse the colour features of a sample collection of image fragments that represent the offered content. An experiment has been performed over a set of Web Map Service layers, which have been fetched from a repository of capabilities documents gathered from the Web. This has proven the efficiency of the method (precision of 87% and recall of 60%). This functionality has been offered as a Web Processing Service, and it has been integrated within the Virtual Spain project to provide a catalogue of orthoimages and build realistic 3D views.
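One way to picture the content-based heuristics described above: rendered map layers tend to use a small set of flat colors, whereas orthoimagery shows a near-continuous color distribution. The sketch below classifies a sample of pixels by color diversity; the function name, the threshold, and the sample data are invented for illustration and are not the authors' actual heuristic.

```python
# Toy content-based heuristic: a high ratio of distinct colors to sampled
# pixels suggests photographic (orthoimage) content rather than a rendered
# map. Threshold is an illustrative guess.
def looks_like_orthoimage(pixels, threshold=0.5):
    """pixels: list of (r, g, b) tuples sampled from layer fragments."""
    distinct = len(set(pixels))
    return distinct / len(pixels) >= threshold

flat_map = [(255, 255, 255)] * 90 + [(0, 0, 255)] * 10           # two flat colors
aerial = [(i, 120 + i % 50, 60 + i % 90) for i in range(100)]    # varied tones
print(looks_like_orthoimage(flat_map), looks_like_orthoimage(aerial))  # → False True
```

The paper combines such content analysis with text heuristics over the capabilities document, which is what yields the reported 87% precision and 60% recall.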
Martin-Diener, Eva; Bauer, Georg; Braun-Fahrländer, Charlotte; Martin, Brian W
Background Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. Objective The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open access context over time from 2003 to 2009, and (b) between trial participants and open access users; and (2) to analyze attrition and predictors of repeated use among participants in a randomized controlled trial compared with registered open access users. Methods Data routinely recorded in the Active-online user database were used. Adherence was defined as: the number of pages viewed, the proportion of visits during which a tailored module was begun, the proportion of visits during which tailored feedback was received, and the time spent in the tailored modules. Adherence was analyzed according to six one-year periods (2003-2009) and according to the context (trial or open access) based on first visits and longest visits. Attrition and predictors of repeated participation were compared between trial participants and open access users. Results The number of recorded visits per year on Active-online decreased from 42,626 in 2003-2004 to 8343 in 2008-2009 (each of six one-year time periods ran from April 23 to April 22 of the following year). The mean age of users was between 38.4 and 43.1 years in all time periods and both contexts. The proportion of women increased from 49.5% in 2003-2004 to 61.3% in 2008-2009 (P< .001). There were differences but no consistent time trends in adherence to Active-online. The mean age of trial participants was 43.1 years, and 74.9% were women. Comparing contexts, adherence was highest for registered open access users. For open access users, adherence
Gonçalves, Bruno; Ramasco, José J.
The increasing ubiquity of Internet access and the frequency with which people interact with it raise the possibility of using the Web to better observe, understand, and monitor several aspects of human social behavior. Web sites with large numbers of frequently returning users are ideal for this task. If these sites belong to companies or universities, their usage patterns can furnish information about the working habits of entire populations. In this work, we analyze the properly anonymized logs detailing the access history to Emory University’s Web site. Emory is a medium-sized university located in Atlanta, Georgia. We find interesting structure in the activity patterns of the domain and study in a systematic way the main forces behind the dynamics of the traffic. In particular, we find that linear preferential linking, priority-based queuing, and the decay of interest for the contents of the pages are the essential ingredients to understand the way users navigate the Web.
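The linear preferential linking identified in the traffic dynamics can be illustrated with a toy simulation: each new visit picks a page with probability proportional to its visit count so far, concentrating traffic on already-popular pages. This is a generic sketch of the mechanism, not the authors' model of the Emory logs.

```python
# Toy preferential-attachment visit model: pages that have been visited
# more often are proportionally more likely to be visited again.
import random

def simulate_visits(n_pages, n_visits, seed=42):
    random.seed(seed)
    counts = [1] * n_pages  # one seed visit each so weights are nonzero
    for _ in range(n_visits):
        page = random.choices(range(n_pages), weights=counts)[0]
        counts[page] += 1
    return counts

counts = simulate_visits(n_pages=20, n_visits=2000)
print(sorted(counts, reverse=True)[:3])  # a few pages dominate the traffic
```

Running this repeatedly shows the characteristic rich-get-richer skew: most visits accumulate on a handful of pages, a heavy-tailed pattern of the kind observed in real site logs.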
Duncan, R. G.; Saperia, D.; Dulbandzhyan, R.; Shabot, M. M.; Polaschek, J. X.; Jones, D. T.
The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems. PMID:11825172
Morrison, James; Kaufman, John
Vascular access is invaluable in the treatment of hospitalized patients. Central venous catheters provide a durable and long-term solution while saving patients from repeated needle sticks for peripheral IVs and blood draws. The initial catheter placement procedure and long-term catheter usage place patients at risk for infection. The goal of this project was to develop a system to track and evaluate central line-associated blood stream infections related to interventional radiology placement of central venous catheters. A customized web-based clinical database was developed via open-source tools to provide a dashboard for data mining and analysis of the catheter placement and infection information. Preliminary results were gathered over a 4-month period confirming the utility of the system. The tools and methodology employed to develop the vascular access tracking system could be easily tailored to other clinical scenarios to assist in quality control and improvement programs.
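The dashboard's central quality metric can be sketched as the standard CLABSI rate, confirmed infections per 1,000 catheter-days. The figures below are invented for illustration, not data from the study.

```python
# Standard central line-associated bloodstream infection (CLABSI) rate:
# infections per 1,000 catheter-days.
def clabsi_rate(infections, catheter_days):
    return 1000.0 * infections / catheter_days

# e.g. 3 infections over 2,400 catheter-days in a review period
print(clabsi_rate(3, 2400))  # → 1.25
```

Normalizing by catheter-days rather than by patient count is what lets a dashboard compare infection performance across periods with different catheter volumes.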
Karlsen, Randi; Bonander, Jason
In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049
Anderson, Jane A; Godwin, Kyler M; Saleem, Jason J; Russell, Scott; Robinson, Joshua J; Kimmel, Barbara
This article reports redesign strategies identified to create a Web-based user-interface for the Self-management TO Prevent (STOP) Stroke Tool. Members of a Stroke Quality Improvement Network (N = 12) viewed a visualization video of a proposed prototype and provided feedback on implementation barriers/facilitators. Stroke-care providers (N = 10) tested the Web-based prototype in think-aloud sessions of simulated clinic visits. Participants' dialogues were coded into themes. Access to comprehensive information and the automated features/systematized processes were the primary accessibility and usability facilitator themes. The need for training, time to complete the tool, and computer-centric care were identified as possible usability barriers. Patient accountability, reminders for best practice, goal-focused care, and communication/counseling themes indicate that the STOP Stroke Tool supports the paradigm of patient-centered care. The STOP Stroke Tool was found to prompt clinicians on secondary stroke-prevention clinical-practice guidelines, facilitate comprehensive documentation of evidence-based care, and support clinicians in providing patient-centered care through the shared decision-making process that occurred while using the action-planning/goal-setting feature of the tool.
Persin, Ronald C.
The purpose of this study was to investigate whether statistically significant differences existed between high school Honors Physics websites and those of Advanced Placement (AP) Physics in terms of Web-design, National Science Education Standards (NSES) Physics content, and NSES Science Process standards. The procedure began with the selection of 152 sites comprising two groups with equal sample sizes of 76 for Honors Physics and for Advanced Placement Physics. The websites used in the study were accumulated using the Google(TM) search engine. To find Honors Physics websites, the search words "honors physics high school" were entered as the query into the search engine. To find sites for Advanced Placement Physics, the query, "advanced placement physics high school," was entered into the search engine. The evaluation of each website was performed using an instrument developed by the researcher based on three attributes: Web-design, NSES Physics content, and NSES Science Process standards. A "1" was scored if the website was found to have each attribute, otherwise a "0" was given. This process continued until all 76 websites were evaluated for each of the two types of physics websites, Honors and Advanced Placement. Subsequently the data were processed using Excel functions and the SPSS statistical software program. The mean and standard deviation were computed individually for the three attributes under consideration. Three, 2-tailed, independent samples t tests were performed to compare the two groups of physics websites separately on the basis of Web Design, Physics Content, and Science Process. The results of the study indicated that there was only one statistically significant difference between high school Honors Physics websites and those of AP Physics. The only difference detected was in terms of National Science Education Standards Physics content. It was found that Advanced Placement Physics websites contained more NSES physics content than Honors
Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.
The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate, and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry, and private-sector individuals. The major objectives of this effort include 1) processing and reformulating current data to be consistent with ESRI and OpenGIS tools; 2) developing functions to improve capability and analysis that produce "on-the-fly" data products, extending these past the single location to regional and global scales; 3) updating the current websites to enable both web-based and mobile application displays, optimized for mobile platforms; 4) interacting with user communities in government and industry to test formats and usage; and 5) developing a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency, cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.
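Of the OGC services listed, WMS is the simplest to consume: a GetMap request is an HTTP GET with standardized query parameters. The sketch below builds such a request; the endpoint and layer name are invented placeholders, and only the parameter names follow the WMS 1.3.0 specification.

```python
# Illustrative OGC WMS 1.3.0 GetMap request. Endpoint and layer name are
# hypothetical; parameter names follow the WMS specification.
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=512, height=512,
                   crs="EPSG:4326", fmt="image/png"):
    params = {
        "service": "WMS", "version": "1.3.0", "request": "GetMap",
        "layers": layer, "styles": "",
        "crs": crs, "bbox": ",".join(map(str, bbox)),
        "width": width, "height": height, "format": fmt,
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://gis.example.org/wms", "insolation_annual",
                     (-90, -180, 90, 180))
print(url)
```

Because the interface is standardized, the same request can be issued by desktop GIS software, a mobile app, or a script, which is the interoperability benefit the pilot program is after.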
Yates, Paul C.
Discusses the importance of assessment of materials on the World Wide Web that may be freely accessible to both instructors and students. Evaluates web sites that cover the periodic table in terms of content and design. (Contains 16 references.) (Author/YDS)
Hansen, Helle Ploug; Draborg, Eva; Pedersen, Claus Duedal; Lamont, Ronald F; Jørgensen, Jan Stener
Background In Denmark, all pregnant women are offered screening in early pregnancy to estimate the risk of having a fetus with Down syndrome. Pregnant women participating in the screening program should be provided with information and support to allow them to make an informed choice. There is increasing interest in the use of Web-based technology to provide information and digital solutions for the delivery of health care. Objective The aim of this study was to develop an eHealth tool that contained accurate and relevant information to allow pregnant women to make an informed choice about whether to accept or reject participation in screening for Down syndrome. Methods The development of the eHealth tool involved the cooperation of researchers, technology experts, clinicians, and users. The underlying theoretical framework was based on participatory design, the International Patient Decision Aid Standards (IPDAS) Collaboration guide to develop a patient decision aid, and the roadmap for developing eHealth technologies from the Center for eHealth Research and Disease Management (CeHRes). The methods employed were a systematic literature search, focus group interviews with 3 care providers and 14 pregnant women, and 2 weeks of field observations. A qualitative descriptive approach was used in this study. Results Relevant themes from pregnant women and care providers with respect to information about Down syndrome screening were identified. Based on formalized processes for developing patient decision aids and eHealth technologies, an interactive website containing information about Down syndrome, methods of screening, and consequences of the test was developed. The intervention was based on user requests and needs, and reflected the current hospital practice and national guidelines. Conclusions This paper describes the development and content of an interactive website to support pregnant women in making informed choices about Down syndrome screening. To develop the
Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted into appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic-event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data, and station inventories will also be provided through a set of Web services that will wrap the middleware used by the
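A client consuming such a service would extract the event URI (the `publicID`) from a QuakeML response. The sketch below parses a simplified, hand-written fragment, not a complete QuakeML instance; the event identifier shown is invented, and the tag layout follows the public QuakeML schema.

```python
# Reading the event URI (publicID) from a simplified QuakeML-style
# document. The document and identifier below are illustrative.
import xml.etree.ElementTree as ET

doc = """<quakeml xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters>
    <event publicID="smi:emsc.eu/event/20090101_0000001">
      <description><text>Example event</text></description>
    </event>
  </eventParameters>
</quakeml>"""

root = ET.fromstring(doc)
ns = {"q": "http://quakeml.org/xmlns/bed/1.2"}
event = root.find("q:eventParameters/q:event", ns)
print(event.get("publicID"))  # → smi:emsc.eu/event/20090101_0000001
```

Because every agency publishes events under a resolvable URI scheme like this, downstream RDF resources (authorities, location methods, software) can reference events unambiguously across the federation.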
Esteve, M.; Molina, B.; Palau, C.; Fortino, G.
To date e-Learning material has usually been accessed and delivered through a central web server. As the number of users, the amount of information, the frequency of accesses and the volume of data increase, together with the introduction of multimedia streaming applications, a decentralized content distribution architecture is necessary. In this…
Klug, Hermann; Kmoch, Alexander
Transboundary and cross-catchment access to hydrological data is the key to designing successful environmental policies and activities. Electronic maps based on distributed databases are fundamental for planning and decision making in all regions and for all spatial and temporal scales. Freshwater is an essential asset in New Zealand (and globally) and the availability as well as accessibility of hydrological information held by or held for public authorities and businesses are becoming a crucial management factor. Access to and visual representation of environmental information for the public is essential for attracting greater awareness of water quality and quantity matters. Detailed interdisciplinary knowledge about the environment is required to ensure that the environmental policy-making community of New Zealand considers regional and local differences of hydrological statuses, while assessing the overall national situation. However, cross-regional and inter-agency sharing of environmental spatial data is complex and challenging. In this article, we firstly provide an overview of the state of the art standard compliant techniques and methodologies for the practical implementation of simple, measurable, achievable, repeatable, and time-based (SMART) hydrological data management principles. Secondly, we contrast international state of the art data management developments with the present status for groundwater information in New Zealand. Finally, for the topics (i) data access and harmonisation, (ii) sensor web enablement and (iii) metadata, we summarise our findings, provide recommendations on future developments and highlight the specific advantages resulting from a seamless view, discovery, access, and analysis of interoperable hydrological information and metadata for decision making.
Albarracín, Micaela; José González, Rolando; Drago, Silvina Rosa
A combination of soaking and extrusion processes of whole rice grain was studied. The effects of temperature (35-55 °C) and time (24-48 h) of soaking treatment on phytic acid (PA), protein and ashes losses using a factorial design were evaluated. Taking into account ash, protein and PA losses, whole rice was soaked 24 h at 45 °C and extruded using a Brabender single screw extruder. Effects of extrusion temperature (160-190 °C) and moisture content (14-19 g/100 g) on product characteristics were evaluated using surface response methodology. Values corresponding to the different responses were: Expansion (1.64-3.28), Specific Volume (5.68-11.06 cm(3)/g), Water absorption (3.41-4.43 mL/g) and Solubility (45.44-66.20 g/100 g). The content of PA was reduced from 740.09 to 163.47 mg/100 g (77%) after both processes, resulting in a higher mineral bio-accessibility, and a 7.3% decrease of protein digestibility. Total soluble phenolics and trolox equivalent antioxidant capacity (TEAC) were affected according to the treatment. Both treatments were important to obtain a nutritionally improved whole grain product.
Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L; Costanzo, Michael; Andrews, Brenda; Boone, Charles
Providing access to quantitative genomic data is key to ensuring large-scale data validation and promoting new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize sub-networks, using data-driven network layouts in an intuitive and interactive manner.
Cifuentes, Lauren; Sharp, Amy; Bulu, Sanser; Benz, Mike; Stough, Laura M.
We report on an investigation into the design, development, implementation, and evaluation of an informational and instructional Website in order to generate guidelines for instructional designers of read/write Web environments. We describe the process of design and development research, the problem addressed, the theory-based solution, and the…
Frisch, Jennifer K.; Jackson, Paula C.; Murray, Meg C.
WIKIed Biology is a National Science Foundation Transforming Undergraduate Education in Science, Technology, Engineering, and Mathematics interdisciplinary project in which the authors developed and implemented a model for student centered, inquiry-driven instruction using Web 2.0 technologies to increase inquiry and conceptual understanding in…
The purpose of this qualitative study was to analyze the self-initiated conversations held by school principals on Web 2.0 platforms, such as blogs, through the lens of current leadership standards. The online writings of thirteen school principals were analyzed using grounded theory techniques (Strauss and Corbin, 1998) to elucidate emerging…
Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web access can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…
The need for social inclusion, informed choice and the facilitation of independent living for people with learning disabilities (LD) is being emphasised ever more by government, professionals, academics and, indeed, by people with LD themselves, particularly in self-advocacy groups. Achieving goals around inclusion and autonomy requires access to…
Ichihara, Yasuyo G.
Internet imaging is used as interactive visual communication. It differs from other electronic imaging fields because images are transported from one client to many others. If you and I each had different color vision, we might see Internet imaging differently. So what do you see in a digital color-dot picture such as the Ishihara pseudoisochromatic plates? The Ishihara pseudoisochromatic test is the most widely used screening test for red-green color deficiency. The full version contains 38 plates. Plates 18-21 are hidden-digit designs. For example, plate 20 contains the hidden digit 45, which cannot be seen by normal trichromats but can be distinguished by most color-deficient observers. In this study, we present a new digital color palette: a web accessibility palette with which the same information in Internet imaging can be seen correctly by people of any color vision. For this study, we measured the Ishihara pseudoisochromatic test. We used the new Minolta 2D colorimeter system, CL1040i, which can define all pixels in a 4 cm x 4 cm square for measurement. From the results, color groups of 8 to 10 colors in the Ishihara plates can be seen to lie on isochromatic lines of the CIE-xy color space. On each plate, the form of a number is composed of 4 colors and the background is composed of the remaining 5 colors. For normal trichromats, it is difficult to find the difference between the 4-color group that makes up the form of the number and the 5-color group of the background. We also found that for normal trichromats, highly salient colors such as orange and red are included in the warm color group and are distinguished from the cool color group of blue, green and gray. From the results of our analysis of the Ishihara pseudoisochromatic test, we suggest a web accessibility palette consisting of 4 colors.
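The idea behind the palette, that colors lying on the same confusion line become indistinguishable to red-green deficient observers, can be caricatured in a few lines of Python. Averaging the red and green channels is a deliberately crude stand-in for a proper cone-space (LMS) simulation, and the color values are invented for illustration:

```python
def collapse_red_green(rgb):
    """Crudely merge the red and green channels to mimic the loss of
    red-green discrimination. A real simulation works in a cone space
    (e.g., LMS); this is an illustration only."""
    r, g, b = rgb
    rg = (r + g) / 2
    return (rg, rg, b)

def distance(c1, c2):
    """Plain Euclidean distance in RGB (not a perceptual metric)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

# Two colors that differ mainly along the red-green axis:
orange, olive = (230, 120, 30), (120, 230, 30)

d_normal = distance(orange, olive)  # clearly distinct for trichromats
d_deficient = distance(collapse_red_green(orange), collapse_red_green(olive))
# d_deficient → 0: the pair collapses onto the same point
```

A palette suitable for all observers would keep pairwise distances large both before and after such a projection.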
Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund
In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provision of continuous media (CM) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot-page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of the CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffer of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA) based traffic-shaping circuit before being transmitted through the network.
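The priority encoding described above can be illustrated with a small Python sketch built on a heap; the class and operation names are illustrative and are not taken from the paper's SM/IDS/ITS design:

```python
import heapq

# Priority order from the abstract (lower number = served first).
PRIORITY = {
    "admin_rw": 0,       # admin read/write
    "hot_multicast": 1,  # hot-page CM and Web multicasting
    "cm_read": 2,
    "web_read": 3,
    "cm_write": 4,
    "web_write": 5,
}

class IntegratedScheduler:
    """Serve requests strictly by priority level, FIFO within a level."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # monotone counter used as a FIFO tie-breaker

    def submit(self, op_type, payload):
        heapq.heappush(self._heap, (PRIORITY[op_type], self._seq, op_type, payload))
        self._seq += 1

    def next_request(self):
        _, _, op_type, payload = heapq.heappop(self._heap)
        return op_type, payload

sched = IntegratedScheduler()
sched.submit("web_write", "page42")
sched.submit("cm_read", "movie.mp4")
sched.submit("admin_rw", "config")
order = [sched.next_request()[0] for _ in range(3)]
# order → ["admin_rw", "cm_read", "web_write"]
```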
Campbell, Jerry D.
Considers the need for reliable, scholarly access to the Web and suggests that the Association for Research Libraries, in partnership with OCLC and the Library of Congress, develop a so-called scholar's portal. Topics include quality content; enhanced library services; and gateway functions, including access to commercial databases and focused…
Bledsoe, Johnny Mark
The content created by digital natives via collaborative Web 2.0 applications provides a rich source of unique knowledge and social capital for their virtual communities of interest. The problem addressed in this study was the limited understanding of older digital immigrants who use Web 2.0 applications to access, distribute, or enhance these…
Timberlake, Maria T.
Federal special education policy stipulates that students with disabilities receive access to the general education curriculum but does not prescribe what meaningful access entails. There is little research on how educators interpret their responsibility to create academic access. Street level bureaucracy theory was utilized to explore the…
DiLullo, Camille; Coughlin, Patrick; D'Angelo, Marina; McGuinness, Michael; Bandle, Jesse; Slotkin, Eric M; Shainker, Scott A; Wenger, Christopher; Berray, Scott J
As anatomy course hours have decreased, it has become increasingly important to provide tools that facilitate laboratory task efficiency. Digital video clips were created to present dissection guidance to medical students. The video clips communicate challenging aspects of the dissection process with succinct visual demonstrations easily accessed via an online course site. Students were asked to complete a survey designed to assess the quality and utility of the videos. Survey respondents indicated that the videos enhanced the quality of the anatomy course as well as their individual performances. This teaching tool enhances student competencies in human gross anatomy.
Irwin, Jeannie Y; Thyvalikakath, Thankam; Schleyer, Titus; Wali, Teena; Kerr, Ross
Background/Aims: In the United States, about 8,000 people a year die from oral cancer and more than 30,000 new cases are diagnosed annually. A recent study showed that 80% of American adult Internet users have searched the Web for health information and 15% of those specifically searched for dental health information. Having high-quality oral cancer information available via the Web is important given the significance of this health problem. The goal of this study was to evaluate the quality and content of multiple English and Spanish oral cancer websites. Methods: We developed a search strategy using the keywords oral cancer, mouth cancer, and tongue cancer to find oral cancer sites via Medline Plus, Google, and Yahoo. We then used the translations cancer oral, cancer de la boca, and cancer de la lengua to search Medline Plus en Español, Google Español, and Yahoo Telemundo. We added sites to the datasets based on inclusion/exclusion criteria. Two native-speaking raters evaluated each site within their set for quality using the modified Information Quality Tool (IQT). We then developed a survey tool to assess the content of the sites. Two native-speaking oral cancer experts evaluated each site within their set using this new tool. Results: Our search strategy produced 24 English-language sites and 24 Spanish-language sites for evaluation. English-language websites had an average IQT score of 74.7 (out of 100) and an average content score of 51.5 (out of 100). Spanish-language sites had an average IQT score of 48.8 and an average content score of 25.9. Conclusions: Despite higher scores for the English-language websites, our analysis showed that there was great variation in overall quality and content, with room for improvement for both language types. English sites could make the biggest improvements by providing more information about their sponsors and who controls site content, as well as updating and fixing links and author credentials. The Spanish sites should
van Beuzekom, Brigitte
This paper reviews recent measurement work on User-Created Content (UCC) undertaken in OECD countries. It shows that UCC is emerging as a significant area of economic and social activity worthy of consideration for official measurement and discusses the implications for the OECD Model Survey on ICT Access and Use by Households and Individuals.…
Cimino, J J; Sengupta, S; Clayton, P D; Patel, V L; Kushniruk, A; Huang, X
We are developing the Patient Clinical Information System (PatCIS) project at Columbia-Presbyterian Medical Center to provide patients with access to health information, including their own medical records (permitting them to contribute selected aspects to the record), educational materials and automated decision support. The architecture of the system allows for multiple, independent components which make use of central services for managing security and usage logging functions. The design accommodates a variety of data entry, data display and decision support tools and provides facilities for tracking system usage and questionnaires. The user interface minimizes hypertext-related disorientation and cognitive overload; our success in this regard is the subject of on-going evaluation.
The Internet can be an excellent tool to help people with learning disabilities access relevant and appropriately written information. However, little work has been undertaken to ascertain web design or content preferences for this cohort. This paper examines methods to address this issue. Twenty five participants were presented with three web…
Lowell, Nathan; Roberts, Stephanie
Provides design options for making Web sites more accessible to blind or visually impaired users. Highlights include valid alt-tags on graphics; helpful link text; cascading style sheets (CSSs) to separate content from page structure; descriptive links to tell what a significant graphic is about; and not using text only as a valid alternative.…
Shimazu, Keiko; Ozaki, Tomonobu; Furukawa, Koichi
KM (Knowledge Management) systems have recently been adopted within the realm of enterprise management. At the same time, data mining technology is widely acknowledged within information systems' R&D divisions. In particular, acquisition of meaningful information from Web usage data has become one of the most exciting areas. In this paper, we employ a Web-based KM system and propose a framework for applying Web usage mining technology to KM data. As it turns out, task duration varies according to different user operations such as referencing a table-of-contents page, downloading a target file, and writing to a bulletin board. This in turn makes it possible to easily predict the purpose of the user's task. Taking these observations into account, we segmented access log data manually. These results were compared with results obtained by applying the constant interval method. Next, we obtained a segmentation rule for Web access logs by applying a machine-learning algorithm to the manually segmented access logs as training data. The newly obtained segmentation rule was then compared with other known methods, including the time interval method, by evaluating their segmentation results in terms of recall and precision rates; our rule attained the best results on both measures. Furthermore, the segmented data were fed to an association rule miner and the obtained association rules were utilized to modify the Web structure.
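The constant-interval baseline that the learned segmentation rule is compared against can be sketched as follows; the timestamps and URLs are invented for illustration, and real access logs would carry user identifiers and richer fields:

```python
def segment_by_interval(events, max_gap):
    """Split a time-ordered list of (timestamp, url) log entries into
    task segments wherever the gap between consecutive entries exceeds
    max_gap, i.e., the constant (time) interval method."""
    segments, current, last_t = [], [], None
    for t, url in events:
        if last_t is not None and t - last_t > max_gap:
            segments.append(current)  # gap too large: close the segment
            current = []
        current.append((t, url))
        last_t = t
    if current:
        segments.append(current)
    return segments

log = [(0, "/toc"), (5, "/doc.pdf"), (200, "/board"), (210, "/toc")]
parts = segment_by_interval(log, max_gap=60)
# parts → [[(0, "/toc"), (5, "/doc.pdf")], [(200, "/board"), (210, "/toc")]]
```

The paper's point is that a single fixed `max_gap` ignores the operation type (browsing vs. downloading vs. posting), which is why a learned rule can segment more accurately.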
McNair, Peter D.; Fang, Jade; Schwarzwaelder, Stephan; Jackson, Terri
Background: Hospital-based clinicians have little information about the outcomes of their care, much less how those outcomes compare with those of their peers. A variety of care quality indicators have been developed, but comparisons tend to be hospitalwide, and often irrelevant to the practice and patient group of many hospital clinicians. Moreover, information is not enough to transform clinical practice, as the human response to such comparisons is, “I’m doing the best I know how.” What is needed is granular, clinically specific feedback with peer-mediated advice about how “positive deviants” achieve better results. Objective: This case study reports on the development and implementation of a web-accessible comparative outcomes tool, ExPLORE Clinical Practice, for hospitals and clinicians in California. Methods: We use iterative development and refinement of web tools to report comparative outcomes; incremental development of suites of procedure-patient outcome pairs specific to particular medical specialty groups; testing and refinement of response time metrics to reduce delays in report generation; and introduction of a comments section for each measure that assists with interpretation and ties results to strategies found to lead to better clinical outcomes. Results: To date, 76 reports, each with 115 to 251 statistically evaluated outcomes, are available electronically to compare individual hospitals in California to statewide outcomes. Discussion and Conclusions: ExPLORE Clinical Practice is one of a number of emerging systems that attempt to leverage available data to improve patient outcomes. The ExPLORE Clinical Practice system combines a clinical focus on highly specific outcome measures with attention to technical issues such as crafting an intuitive user interface and graphic presentation. This case study illustrates the important advances made in using data to support clinicians to improve care for patients. We see this information as a way to
Scorcioni, Ruggero; Polavaram, Sridevi; Ascoli, Giorgio A.
L-Measure (LM) is a freely available software tool for the quantitative characterization of neuronal morphology. LM computes a large number of neuroanatomical parameters from 3D digital reconstruction files, starting from and combining a set of core metrics. After more than six years of development and use in the neuroscience community, LM enables the execution of commonly adopted analyses as well as more advanced functions. This report illustrates several LM protocols: (a) extraction of basic morphological parameters, (b) computation of frequency distributions, (c) measurements from user-specified subregions of the neuronal arbors, (d) statistical comparison between two groups of cells, and (e) filtered selections and searches from collections of neurons based on any boolean combination of the available morphometric measures. These functionalities are easily accessed and deployed through a user-friendly graphical interface, and typically execute within a few minutes on a set of ~20 neurons. The tool is available at http://krasnow.gmu.edu/cn3 for either online use on any Java-enabled browser and platform, or download for local execution under Windows and Linux. PMID:18451794
da Silva, André Constantino; Freire, Fernanda Maria Pereira; de Arruda, Alan Victor Pereira; da Rocha, Heloísa Vieira
e-Learning environments offer content such as text, audio, video and animations using the Web infrastructure, and they are designed for users interacting with a keyboard, mouse and medium-sized screen. Mobile devices, such as smartphones and tablets, have enough computing power to render Web pages, allowing users to browse the Internet and access e-Learning…
Wijekumar, Kausalai; Meyer, Bonnie J. F.; Lei, Puiwa
Reading in the content areas of science, social studies, and current events is a difficult task that is even more elusive to Spanish speaking English language learners. There is a huge increase in children transitioning from their L1 (e.g., Spanish) to L2 (e.g., English) in classrooms across the US. These ELs face challenges due to a lack of…
Oskay, Özge Özyalçin; Odabasi, Zuhal
Technological developments have created new requirements in education. Today's teachers should know the content they teach, have pedagogical knowledge about teaching and learning methods, and should also be able to use technological tools effectively. Following from this, new concepts such as Technological Pedagogical Content…
The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application was transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also includes updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. 333 study site photo images were added to the GLOBE database and posted on the web during this same time period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
Devi, R Suganya; Manjula, D; Siddharth, R K
Web crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web search engines face new challenges due to the availability of vast numbers of web documents, which makes the retrieved results less applicable to analysers. Recently, however, web crawling has focused solely on obtaining the links of the corresponding documents. Today, there exist various algorithms and software used to crawl links from the web, which then have to be further processed for future use, thereby increasing the load on the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the depth-first search algorithm, which allows complete hierarchical scanning of the corresponding web links. The links are then accessed via the source code, and metadata such as the title, keywords, and description are extracted. This content is essential for any analysis to be carried out on the Big Data obtained as a result of web crawling.
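The two stages described above, depth-first link traversal followed by extraction of title, keywords, and description from the page source, can be sketched in Python. The page map and URLs below are illustrative stand-ins; a real crawler would fetch pages over HTTP and use a proper HTML parser rather than regular expressions:

```python
import re

def extract_meta(html):
    """Pull title and <meta> keywords/description with simple regexes
    (illustrative only; production code should use an HTML parser)."""
    def grab(pattern):
        m = re.search(pattern, html, re.I | re.S)
        return m.group(1).strip() if m else ""
    return {
        "title": grab(r"<title>(.*?)</title>"),
        "keywords": grab(r'<meta\s+name="keywords"\s+content="(.*?)"'),
        "description": grab(r'<meta\s+name="description"\s+content="(.*?)"'),
    }

def crawl(pages, start):
    """Depth-first traversal over an in-memory {url: html} map; the
    dict stands in for real HTTP fetches so the sketch runs offline."""
    seen, stack, results = set(), [start], {}
    while stack:
        url = stack.pop()
        if url in seen or url not in pages:
            continue
        seen.add(url)
        html = pages[url]
        results[url] = extract_meta(html)
        for link in re.findall(r'href="(.*?)"', html):
            stack.append(link)  # visit linked pages depth-first
    return results

site = {
    "/": '<title>Home</title><meta name="keywords" content="crawl,web"><a href="/a">A</a>',
    "/a": '<title>Page A</title><meta name="description" content="leaf page">',
}
info = crawl(site, "/")
# info["/"]["title"] → "Home"; info["/a"]["description"] → "leaf page"
```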
Menon, Kartik; Dagli, Cihan H.
The heterogeneity and the lack of structure that permeate much of the ever-expanding information sources on the WWW make it difficult for users to properly and efficiently access different web pages. Different users have different needs from the same web page. It is necessary to train the system to understand the needs and demands of its users; in other words, there is a need for efficient and proper web mining. In this paper, issues and possible ways of training the system and providing a high level of organization for the semi-structured data available on the web are discussed. Web pages can be evolved based on the history of query searches, browsing, links traversed, and observation of user behavior such as bookmarking and time spent viewing. Fuzzy clustering techniques help in grouping natural users and groups; neural networks, association rules and web traversal patterns help in efficient sequential analysis based on previous searches and queries by the user. In this paper we analyze web server logs using the above-mentioned techniques to learn more about user interactions. Analyzing these web server logs helps us closely understand user behavior and web access patterns.
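The sequential analysis of traversal patterns mentioned above can be illustrated with a minimal sketch that counts page-to-page transitions across sessions; the session data and page names are invented for illustration:

```python
from collections import Counter, defaultdict

def transition_counts(sessions):
    """Count page-to-page transitions across user sessions; the most
    frequent transitions approximate common web traversal patterns."""
    counts = defaultdict(Counter)
    for pages in sessions:
        for a, b in zip(pages, pages[1:]):  # consecutive page pairs
            counts[a][b] += 1
    return counts

sessions = [
    ["/home", "/products", "/cart"],
    ["/home", "/products", "/faq"],
    ["/home", "/faq"],
]
t = transition_counts(sessions)
# t["/home"].most_common(1) → [("/products", 2)]
```

Real systems would first reconstruct sessions from raw server logs and then feed such counts into clustering or association-rule mining.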
McEneny-King, Alanna; Foster, Gary; Edginton, Andrea N
Background Hemophilia is an inherited bleeding disorder caused by a deficiency in a specific clotting factor. This results in spontaneous bleeding episodes and eventual arthropathy. The mainstay of hemophilia treatment is prophylactic replacement of the missing factor, but an optimal regimen remains to be determined. Rather, individualized prophylaxis has been suggested to improve both patient safety and resource utilization. However, uptake of this approach has been hampered by the demanding sampling schedules and complex calculations required to obtain individual estimates of pharmacokinetic (PK) parameters. The use of population pharmacokinetics (PopPK) can alleviate this burden by reducing the number of plasma samples required for accurate estimation, but few tools incorporating this approach are readily available to clinicians. Objective The Web-accessible Population Pharmacokinetic Service - Hemophilia (WAPPS-Hemo) project aims to bridge this gap by providing a Web-accessible service for the reliable estimation of individual PK parameters from only a few patient samples. This service is predicated on the development of validated brand-specific PopPK models. Methods We describe the data analysis plan for the development and evaluation of each PopPK model to be incorporated into the WAPPS-Hemo platform. The data sources and structure of the dataset are discussed first, followed by the procedures for handling both data below limit of quantification (BLQ) and absence of such BLQ data. Next, we outline the strategies for building the appropriate structural and covariate models, including the possible need for a process algorithm when PK behavior varies between subjects or significant covariates are not provided. Prior to use in a prospective manner, the models will undergo extensive evaluation using a variety of techniques such as diagnostic plots, bootstrap analysis and cross-validation. Finally, we describe the incorporation of a validated PopPK model into the
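As a minimal illustration of the pharmacokinetics underlying such estimation, a one-compartment model with first-order elimination gives C(t) = (Dose/V) * exp(-(CL/V) * t). The sketch below uses this textbook model with invented parameter values; it is not a WAPPS-Hemo calculation, and real PopPK estimation fits such parameters across a population:

```python
import math

def concentration(dose_iu, v_dl, cl_dl_per_h, t_h):
    """Plasma factor activity after an IV bolus under a one-compartment
    model with first-order elimination. Units are illustrative:
    dose in IU, volume in dL, clearance in dL/h, time in hours."""
    return (dose_iu / v_dl) * math.exp(-(cl_dl_per_h / v_dl) * t_h)

# Invented example: 2000 IU dose, V = 30 dL, CL = 2 dL/h
c0 = concentration(2000, 30, 2, 0)    # peak, = Dose/V
c24 = concentration(2000, 30, 2, 24)  # trough after 24 h
```

Individualized prophylaxis amounts to choosing dose and interval so that the predicted trough stays above a target level for each patient's fitted V and CL.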
Chen, Hsiang; Wigand, R. T.; Nilan, M. S.
Reports on Web users' optimal flow experiences to examine positive aspects of Web experiences that could be linked to theory applied to other media and then incorporated into Web design. Discusses the use of content-analytic procedures to analyze open-ended questionnaires that examined Web users' perceived flow experiences. (Author/LRW)
Fels, Deborah I; Richards, Jan; Hardman, Jim; Lee, Daniel G
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web access can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The present article describes a system that allows sign language-only Web pages to be created and linked through a video-based technique called sign-linking. In two studies, 14 Deaf participants examined two iterations of sign-linked Web pages to gauge the usability and learnability of a signing Web page interface. The first study indicated that signing Web pages were usable by sign language users but that some interface features required improvement. The second study showed increased usability for those features; users consequently could navigate sign language information with ease and pleasure.
Flavonols including quercetin, kaempferol, myricetin, and fatty acids in plants have many useful health attributes including antioxidants, cholesterol lowering, and cancer prevention. Six accessions of roselle, Hibiscus sabdariffa calyces were evaluated for quercetin, kaempferol, and myricetin conte...
This Policy establishes that the U.S. Environmental Protection Agency will operate and maintain a public access Web site to assist in fulfilling the Agency’s mission - to protect the environment and public health.
The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…
Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon a user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast, user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced search patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE
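The dot-bracket format mentioned above can be decoded with a simple stack, as in this minimal sketch (plain single-stranded notation only; the extended multi-stranded encoding used by RNA FRABASE 2.0 is not handled here):

```python
def base_pairs(dot_bracket):
    """Recover base-pair index pairs from dot-bracket notation:
    '(' opens a pair, ')' closes the most recently opened one,
    and '.' marks an unpaired residue."""
    stack, pairs = [], []
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return sorted(pairs)

# A 4-nucleotide hairpin loop closed by a 3-bp stem:
pairs = base_pairs("(((....)))")
# pairs → [(0, 9), (1, 8), (2, 7)]
```

A secondary-structure pattern search reduces to matching such pairing topologies before any 3D coordinates are consulted.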
VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)]
Baker, M Pauline
The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics - the formalization and specialization of information related to the process and products of visualization.
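An XML-centered job description of the kind such a portal might exchange can be sketched with the Python standard library; the element and attribute names below are illustrative, not the actual VisPort schema:

```python
import xml.etree.ElementTree as ET

def build_job(dataset, service, params):
    """Compose a portal-style visualization job request as XML.
    Element names here are invented for illustration."""
    job = ET.Element("visualization-job")
    ET.SubElement(job, "dataset").text = dataset
    ET.SubElement(job, "service").text = service
    p = ET.SubElement(job, "parameters")
    for name, value in params.items():
        ET.SubElement(p, "param", name=name).text = str(value)
    return ET.tostring(job, encoding="unicode")

xml_doc = build_job("supernova_run7", "volume-render",
                    {"colormap": "hot", "frames": 120})
```

The server side would validate such a document against its schema, run the job, and return URLs for the resulting images or movies.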
Stackhouse, P. W.; Barnett, A. J.; Tisdale, M.; Tisdale, B.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.; Quam, B.
The NASA LaRC Atmospheric Science Data Center has deployed the beta version of an existing geophysical parameter website employing off-the-shelf Geographic Information System (GIS) tools. The revitalized web portal is entitled "Surface meteorological and Solar Energy" (SSE - https://eosweb.larc.nasa.gov/sse/) and has been supporting an estimated 175,000 users with baseline solar and meteorological parameters as well as calculated parameters that enable feasibility studies for a wide range of renewable energy systems, particularly those featuring solar energy technologies. The GIS tools generate and store climatological averages using spatial queries and calculations (by parameter for the globe) in a spatial database, resulting in greater accessibility for government agencies, industry and individuals. The data parameters are produced from NASA science projects and reformulated specifically for the renewable energy industry and other applications. This first version includes: 1) a processed and reformulated set of baseline data parameters that are consistent with Esri and open GIS tools, 2) a limited set of Python-based functions to compute additional parameters "on the fly" from the baseline data products, 3) updates to the current web sites to enable web-based display of these parameters for plotting and analysis, and 4) output of data parameters in GeoTIFF, ASCII and netCDF data formats. The beta version is being actively reviewed through interaction with a group of collaborators from government and industry in order to test web site usability, display tools and features, and output data formats. This presentation provides an overview of this project and the current version of the new SSE-GIS web capabilities through to end usage. This project supports cross-agency and cross-organization interoperability and access to NASA SSE data products and OGC-compliant web services, and aims also to provide mobile platform
Niger (Guizotia abyssinica, L.) is a desirable oilseed crop for birdseed, especially for finches (Spinus spp.) because of its high ratio of unsaturated to saturated fatty acids and relatively high oil content. In 2012, phenotypic traits, seed oil and fatty acid content measurements were made on 14 p...
Kain, Zeev N.; Fortier, Michelle A.; Chorney, Jill MacLaren; Mayes, Linda
Background Due to cost-containment efforts, preparation programs for outpatient surgery are currently not available to the majority of children and parents. The recent dramatic growth in the Internet presents a unique opportunity to transform how children and their parents are prepared for surgery. In this article we describe the development of a Web-based tailored preparation program for children and parents undergoing surgery (WebTIPS). Development of Program A multidisciplinary taskforce agreed that a Web-based tailored intervention comprised of intake, matrix and output modules was the preferred approach. Next, the content of the various intake variables, the matrix logic and the output content was developed. The output product has a parent component and a child component and is described in http://surgerywebtips.com/about.php. The child component makes use of preparation strategies such as information provision, modeling, play and coping skills training. The parent component of WebTIPS includes strategies such as information provision, coping skills training, relaxation and distraction techniques. A reputable animation and Web-design company developed a secured Web-based product based on the above description. Conclusions In this article we describe the development of a Web-based tailored preoperative preparation program that can be accessed by children and parents multiple times before and after surgery. A follow-up article in this issue of Anesthesia & Analgesia describes formative evaluation and preliminary efficacy testing of this Web-based tailored preoperative preparation program. PMID:25790212
Wan, Yik-Ki J.; Staes, Catherine J.
Healthcare organizations use care pathways to standardize care, but once developed, adoption rates often remain low. One challenge concerns clinicians’ difficulty in accessing guidance when it is most needed. Although the HL7 ‘Infobutton Standard’ allows clinicians easier access to external references, access to locally-developed resources often requires clinicians to deviate from their normal electronic health record (EHR) workflow to use another application. To address this gap between internal and external resources, we reviewed the literature and existing practices at the University of Utah Health Care. We identified the requirements to meet the needs of a healthcare enterprise and clinicians, described the design and development of a prototype to aggregate both internal and external resources from within or outside the EHR, and evaluated the strengths and limitations of the prototype. The system is functional but not implemented in a live EHR environment. We suggest next steps and enhancements. PMID:28269964
up in this attitude as well. Electronic information includes a variety of object types such as electronic journals, e-books, databases, data sets...firewalls, require passwords to access, are hidden within Web-accessible databases, or require payment. The major lesson from efforts to develop selection...pages or those that are created out of a database, portal system, or content management system. The American Astronomical Society (AAS) has perhaps
Seed oil and fatty acids in plants have human health implications. Oil from roselle (Hibiscus sabdariffa L.) seeds is used in Taiwan as a diuretic, laxative, and tonic. The objectives of this study were to evaluate seeds from 17 roselle accessions for oil and fatty acid variation in a greenhouse. S...
Orenstein, David I.
Hardware and software filters, which sift through keywords placed in Internet search engines and online databases, work to limit the return of information from these sources. By their very purpose, filters exist to decrease the amount of information researchers can access. The purpose of this study is to gain insight into the perceptions key…
Petrelli, Daniela; Auld, Daniel
Purpose: This paper aims to provide an initial understanding of the constraints that historical video collections pose to video retrieval technology and the potential that online access offers to both archive and users. Design/methodology/approach: A small and unique collection of videos on customs and folklore was used as a case study. Multiple…
Csir, Floyd J.
This paper applies an evaluation method for World Wide Web sites that provide access to online reference materials at academic and public libraries. The evaluation of Web sites was performed with a questionnaire form focusing on Web site currency, accuracy and relevancy; Web site organization/structure; Web site presentation; URL maintenance; and…
Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…
My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System Computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of the information was done in Excel and Word files passed between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to keep track of and update the software installed on each computer in a timely manner. We also included export features, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful web framework, and I am grateful to have had the opportunity to build this application.
Ciltas, Alper; Guler, Gursel; Sozbilir, Mustafa
In this study, a content analysis of research by Turkish researchers in the field of mathematics education is presented. To this aim, 359 articles published between 1987 and 2009 in 32 different journals in the field of mathematics education, all accessible on the web in full text, were examined. 27 of these…
The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…
Evmenova, Anya S.; Graff, Heidi J.; Behrmann, Michael M.
There has been a slight increase in the number of studies focused on the strategies used to introduce content-based instruction to students with moderate/severe disability. However, interventions for students with significant intellectual disability (ID) are lacking adapted materials to make instruction available in all major academic areas…
Vine, Elaine W.
Previous research suggests that young ESOL learners in mainstream English-medium classrooms are afforded limited opportunities to engage with curriculum content. This paper reports on a study of a five-year-old boy from Samoa who was just beginning to learn English in a mainstream New Zealand classroom. Interactions between the boy and his …
Scheschy, Virginia M.
The World Wide Web and browsers such as Netscape and Mosaic have simplified access to electronic resources. Today, technical services librarians can share in the wealth of information available on the Web. One of the premier Web sites for acquisitions librarians is AcqWeb, a cousin of the AcqNet listserv. In addition to interesting news items,…
By Robin Meckley, Contributing Writer OneSearch, an exciting new resource from the Scientific Library, is now available to the NCI at Frederick community. This new resource provides a quick and easy way to search multiple Scientific Library resources and collections using a single search box for journal articles, books, media, and more. A large central index is compiled from more than 7,000 publishers and content providers outside the library’s holdings.
Gayet, Surya; van Maanen, Leendert; Heilbron, Micha; Paffen, Chris L E; Van der Stigchel, Stefan
The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate the nature of the interaction between visual representations elicited by VWM and visual representations elicited by retinal input, we modeled the perceptual processes leading up to this difference in suppression durations. We replicated the VWM modulation of suppression durations, and fitted sequential sampling models (linear ballistic accumulators) to the response time data. Model comparisons revealed that the data was best explained by a decrease in threshold for visual input that matches the content of VWM. Converging evidence was obtained by fitting similar sequential sampling models (shifted Wald model) to published datasets. Finally, to confirm that the previously observed threshold difference reflected processes occurring before rather than after the stimuli were released from suppression, we applied the same procedure to the data of an experiment in which stimuli were not interocularly suppressed. Here, we found no decrease in threshold for stimuli that match the content of VWM. We discuss our findings in light of a preactivation hypothesis, proposing that matching visual input taps into the same neural substrate that is already activated by a representation concurrently maintained in VWM, thereby reducing its threshold for reaching visual awareness.
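The threshold account described above can be illustrated with a toy first-passage simulation: evidence accumulates noisily toward a response threshold, and lowering the threshold shortens the time to cross it. This is a minimal sketch of the general idea, not the linear ballistic accumulator or shifted Wald fits reported in the study; all parameter values are illustrative.

```python
import random

def simulate_rt(threshold, drift=1.0, noise=0.5, dt=0.005, rng=None, max_t=60.0):
    """First-passage time of a noisy accumulator (Wald-like process).

    Evidence x drifts toward `threshold`; the response is emitted when
    x first crosses it. A lower threshold yields faster responses,
    which is the effect attributed to VWM-matching stimuli.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sqrt_dt = dt ** 0.5
    while x < threshold and t < max_t:
        x += drift * dt + noise * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
    return t

def mean_rt(threshold, n_trials=400, seed=1):
    rng = random.Random(seed)
    return sum(simulate_rt(threshold, rng=rng) for _ in range(n_trials)) / n_trials

# A lower threshold (memory-matching input) yields faster mean RTs
# than a higher threshold (mismatching input).
rt_match = mean_rt(threshold=1.0)
rt_mismatch = mean_rt(threshold=1.5)
```

With positive drift, the expected first-passage time is threshold/drift, so the match/mismatch gap scales directly with the threshold difference — the signature the model comparison relied on.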
Bi, Yingdong; Li, Wei; Xiao, Jialei; Lin, Hong; Liu, Ming; Liu, Miao; Luan, Xiaoyan; Zhang, Bixian; Xie, Xuejun; Guo, Donglin; Lai, Yongcai
Isoflavone, a group of secondary metabolites in soybean, is beneficial to human health. Improving isoflavone content in soybean seeds has become one of the most important breeding objectives. However, the narrow genetic base of soybean cultivars has hampered crop improvement. Wild soybean is an extraordinarily important gene pool for soybean breeding. In order to select an optimal germplasm for breeding programs to increase isoflavone concentration, 36 F1 soybean progenies from different parental accessions (cultivar, wild, semi-wild and interspecific) with various total isoflavone (TIF) concentrations (high, middle, low) were analyzed for their isoflavone content. Results showed that male parents, except for cultivars, had positive GCA effects. In particular, wild soybean had higher positive GCA effects for TIF concentration. Both MP and BP heterosis values declined as the male parent changed from wild soybean to semi-wild soybean, interspecific offspring and cultivar, in that order. In general, combining ability and heterosis in hybrids whose parents had relatively high TIF concentrations showed better performance than in those whose parents had lower TIF concentrations. These results indicated that isoflavone content is mainly governed by additive gene action, and that wild relatives could be utilized in breeding soybean cultivars for this trait. A promising combination was identified as the best potential hybrid for isoflavone content improvement.
Even as weblogs, content management systems, and other forms of automated Web posting and journals are changing the way people create and place content on the Web, new Web pages mushroom overnight. However, many new Web designers produce Web pages that seem to ignore fundamental principles of "good design": full of colored backgrounds, animated…
Scaramozzino, Jeanine Marie
The development of an interactive web-based science information literacy tutorial that introduces undergraduate science majors to basic components of scientific literature is described. The tutorial introduces concepts, vocabulary and resources necessary for understanding and accessing information. The tutorial content is based on the Association…
This study was conducted to define how metatags are used by Ohio public library webmasters and to determine the de facto standard for metatag usage. The 106 Ohio public library World Wide Web sites accessible through the Ohio Public Libraries Information Network (OPLIN) were evaluated using a statistical analysis of the HTML code and content of…
SWMPrats.net is a web-based resource that provides accessible approaches to using SWMP data. The website includes a user forum with instructional ‘Plots of the Month’; links to workshop content; and a description of the SWMPr data analysis package for R. Interactive...
Raimond, Yves; Scott, Tom; Oliver, Silver; Sinclair, Patrick; Smethurst, Michael
The BBC publishes large amounts of content online, as text, audio and video. As the amount of content grows, we need to make it easy for users to locate items of interest and to draw coherent journeys across them. In this chapter, we describe our use of Semantic Web technologies for achieving this goal. We focus in particular on three BBC Web sites: BBC Programmes, BBC Music and BBC Wildlife Finder, and how those Web sites effectively use the wider Web as their Content Management System.
Portis, Ezio; Portis, Flavio; Valente, Luisa; Moglia, Andrea; Barchi, Lorenzo; Lanteri, Sergio; Acquadro, Alberto
The recently acquired genome sequence of globe artichoke (Cynara cardunculus var. scolymus) has been used to catalog the genome’s content of simple sequence repeat (SSR) markers. More than 177,000 perfect SSRs were revealed, equivalent to an overall density across the genome of 244.5 SSRs/Mbp, but some 224,000 imperfect SSRs were also identified. About 21% of these SSRs were complex (two stretches of repeats separated by <100 nt). Some 73% of the SSRs were composed of dinucleotide motifs. The SSRs were categorized for the numbers of repeats present, their overall length and were allocated to their linkage group. A total of 4,761 perfect and 6,583 imperfect SSRs were present in 3,781 genes (14.11% of the total), corresponding to an overall density across the gene space of 32.5 and 44.9 SSRs/Mbp for perfect and imperfect motifs, respectively. A putative function has been assigned, using the gene ontology approach, to the set of genes harboring at least one SSR. The same search parameters were applied to reveal the SSR content of 14 other plant species for which genome sequence is available. Certain species-specific SSR motifs were identified, along with a hexa-nucleotide motif shared only with the other two Compositae species (sunflower (Helianthus annuus) and horseweed (Conyza canadensis)) included in the study. Finally, a database, called “Cynara cardunculus MicroSatellite DataBase” (CyMSatDB) was developed to provide a searchable interface to the SSR data. CyMSatDB facilitates the retrieval of SSR markers, as well as suggested forward and reverse primers, on the basis of genomic location, genomic vs genic context, perfect vs imperfect repeat, motif type, motif sequence and repeat number. The SSR markers were validated via an in silico based PCR analysis adopting two available assembled transcriptomes, derived from contrasting globe artichoke accessions, as templates. PMID:27648830
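The cataloguing step can be illustrated with a toy scanner for perfect SSRs. This is only a sketch: the repeat threshold and motif lengths below are illustrative assumptions, not the search parameters used in the study, and imperfect or complex SSRs are not handled.

```python
def find_ssrs(seq, min_repeats=3, motif_lens=(1, 2, 3, 4, 5, 6)):
    """Return (start, motif, repeats) for perfect tandem repeats.

    Scans the sequence for each motif length and counts how many
    exact back-to-back copies of the motif occur; tracts with at
    least `min_repeats` copies are reported. Thresholds here are
    illustrative, not those of the published SSR catalog.
    """
    hits = []
    for mlen in motif_lens:
        i = 0
        while i + mlen * min_repeats <= len(seq):
            motif = seq[i:i + mlen]
            reps = 1
            while seq[i + reps * mlen:i + (reps + 1) * mlen] == motif:
                reps += 1
            if reps >= min_repeats:
                hits.append((i, motif, reps))
                i += reps * mlen  # skip past the repeat tract
            else:
                i += 1
    return hits

# e.g. a dinucleotide (AT)4 tract followed by G and C homopolymers
hits = find_ssrs("ATATATATGGGCCC")
```

A real pipeline would additionally tolerate mismatches (imperfect SSRs), merge nearby tracts into complex SSRs, and record overall tract length and genomic position, as the database described above does.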
Namkung, Young; Almanza, Barbara A
Despite a growing concern over food safety issues, as well as a growing dependence on the Internet as a source of information, little research has been done to examine the presence and relevance of food safety-related information on Web sites. The study reported here conducted Web site analysis in order to examine the current operational status of governmental Web sites on food safety issues. The study also evaluated Web site usability, especially information dimensionalities such as utility, currency, and relevance of content, from the perspective of the English-speaking consumer. Results showed that out of 192 World Health Organization members, 111 countries operated governmental Web sites that provide information about food safety issues. Among 171 searchable Web sites from the 111 countries, 123 Web sites (71.9 percent) were accessible, and 81 of those 123 (65.9 percent) were available in English. The majority of Web sites offered search engine tools and related links for more information, but their availability and utility were limited. In terms of content, 69.9 percent of Web sites offered information on foodborne-disease outbreaks, compared with 31.5 percent that had travel- and health-related information.
Clark, G T
This article looks at six problems that vex educators and how web-based teaching might help solve them. These problems include: (1) limited access to educational content, (2) need for asynchronous access to educational content, (3) depth and diversity of educational content, (4) training in complex problem solving, (5) promotion of lifelong learning behaviors and (6) achieving excellence in education. The advantages and disadvantages of web-based educational content for each problem are discussed. The article suggests that when a poorly organized course with inaccurate and irrelevant content is placed online, it solves no problems. However, some of the above issues can be partially or fully solved by hosting well-constructed teaching modules on the web. This article also reviews the literature investigating the efficacy of off-site education as compared to that provided on-site. The conclusion of this review is that teleconference-based and web-based delivery of educational content can be as effective as traditional classroom-based teaching, assuming the technologic problems sometimes associated with delivering teaching content to off-site locations do not interfere in the learning process. A suggested hierarchy for rating and comparing e-learning concepts and methods is presented for consideration.
Wild, Emily C.
On-line geoscience bibliographic citations and access points to citations are exponentially increasing as commercial, non-profit, and government agencies worldwide publish materials electronically. On-line bibliographic tools capture cited works, and open access content allows for freely obtained citations and documents. For this newsletter, citations from the numerous journals and books listed in the "Recent Papers" section of the EXPLORE newsletters from 2008-2011 were used to provide freely-accessible web sites to determine the availability of bibliographic information.
SRD 69 NIST Chemistry WebBook (Web, free access) The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules (spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.
Beslay, Nathalie; Jeunehomme, Marie
Web 2.0 sites are considered to be hosting providers and not publishers of user-generated content. Hosting providers' liability is defined by the law enacted on June 21, 2004, on confidence in the digital economy. Hosting providers must promptly remove the information they host or make its access impossible once they are informed of its illegality. They are required to obtain and retain data to enable identification of any person who has contributed to content hosted by them. The liability of hosting providers has arisen in numerous disputes about user-produced content in various situations (discussion lists, blogs, etc.). The National Board of Physicians has developed specific ethical guidelines for web sites devoted to health issues and specifically for physician-authored content. The National Board of Physicians acknowledges that physicians can present themselves, their office, and their specific practice on their web site, notwithstanding any restrictions otherwise applicable to advertising.
Urua, Ikootobong Sunday; Uyoh, Edak Aniedi; Ntui, Valentine Otang; Okpako, Elza Cletus
Proximate composition, amino acid levels and anti-nutrient factors (polyphenols, phytic acid and oxalate) in the seeds of Parkia biglobosa were determined at three stages: raw, boiled and fermented. The highest anti-nutrient factor present in the raw state was oxalate, while phytic acid was the lowest. The amino acid profile of the raw seeds compared favourably with the World Health Organization reference standard. After processing, boiling increased fat, crude fibre and protein, while it reduced moisture, ash and the anti-nutrient content in 64% of the cases examined. Fermentation reduced ash, crude fibre and carbohydrate in all the accessions. It increased the moisture, fat and protein, while reducing the anti-nutrient factors in 78% of the cases. The high levels of protein, fat and amino acids coupled with the low levels of the anti-nutrients in the boiled and fermented seeds make Parkia a good source of nutrients for humans and livestock.
Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li
Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data, enhancing server security and preventing damage from illegal access. First, a system was proposed for discovering patterns of information leakage in CGI scripts from Web log data. Second, these patterns were provided to system administrators so they could modify their code and enhance Web site security. Two aspects are described: combining Web application logs with Web logs extracts more information, so Web data mining can uncover information that firewalls and intrusion detection systems cannot find; and an operational module for Web sites is proposed to enhance site security. For clustered server sessions, a density-based clustering technique is used to reduce resource cost and improve efficiency.
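The density-based clustering step mentioned above can be sketched with a minimal DBSCAN-style algorithm over per-session feature vectors. This is an illustrative toy, not the paper's system: the session features, eps and min_pts values are assumptions chosen for the example.

```python
def dbscan(points, eps, min_pts):
    """Minimal density-based clustering (DBSCAN-style), pure Python.

    Sessions whose feature vectors sit in dense regions are grouped;
    sparse points are labelled -1 (noise). In a web-log setting,
    noise points can flag unusual access patterns worth review.
    """
    def neighbors(i):
        xi = points[i]
        return [j for j, xj in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(xi, xj)) <= eps * eps]

    labels = [None] * len(points)   # None = unvisited, -1 = noise
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # not dense enough: noise/outlier
            continue
        labels[i] = cluster_id
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id   # noise re-claimed as border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:   # core point: keep expanding
                queue.extend(k for k in j_nbrs
                             if labels[k] is None or labels[k] == -1)
        cluster_id += 1
    return labels

# Toy session features: (requests per minute, error rate); the last
# session is an outlier that would merit closer inspection.
sessions = [(1.0, 0.1), (1.1, 0.1), (0.9, 0.2), (1.0, 0.2),
            (5.0, 5.0), (5.1, 5.1), (4.9, 5.0), (5.0, 4.9),
            (20.0, 0.0)]
labels = dbscan(sessions, eps=0.5, min_pts=3)
```

The appeal of density-based clustering here is that it needs no preset cluster count and leaves anomalous sessions unassigned rather than forcing them into a group.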
Pardun, Carol J.; Lamb, Larry
Describes the Web presence in print advertisements to determine how marketers are creating bridges between traditional advertising and the Internet. Content analysis showed Web addresses in print ads; categories of advertisers most likely to link print ads with Web sites; and whether the Web site attempts to develop a database of potential…
Czerkawski, Betül Özkan
The Semantic Web enables increased collaboration among computers and people by organizing unstructured data on the World Wide Web. Rather than a separate body, the Semantic Web is a functional extension of the current Web made possible by defining relationships among websites and other online content. When explicitly defined, these relationships…
Tuyub-Che, Jemina; Moo-Mukul, Angel; Vazquez-Flota, Felipe A.; Miranda-Ham, Maria L.
In the past few years, there has been a renewed interest in studying a wide variety of food products that show beneficial effects on human health. Capsicum is an important agricultural crop, not only because of its economic importance, but also for the nutritional value of its pods, mainly due to the fact that they are an excellent source of antioxidant compounds, and also of specific constituents such as the pungent capsaicinoids localized in the placental tissue. This study was designed to evaluate the antioxidant capacity and total phenolic contents of fruit tissues of two Capsicum chinense accessions, namely, Chak k'an-iik (orange) and MR8H (red), at contrasting maturation stages. Results showed that red immature placental tissue, with a Trolox equivalent antioxidant capacity (TEAC) value of 55.59 μmols TE g−1 FW, exhibited the strongest total antioxidant capacity using both the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and the CUPRAC methods. Placental tissue also had the highest total phenolic content (27 g GAE 100 g−1 FW). The antioxidant capacity of Capsicum was directly related to the total amount of phenolic compounds detected. In particular, placentas had high levels of capsaicinoids, which may be principally responsible for their strong antioxidant activities. PMID:24683361
WebTheme is a system designed to facilitate world wide web information access and retrieval through visualization. It consists of two principal pieces: a WebTheme Server, which allows users to enter a query and automatically harvest and process information of interest, and a WebTheme browser, which allows users to work with both Galaxies and Themescape visualizations of their data within a JAVA-capable world wide web browser. WebTheme is an Internet solution, meaning that access to the server and the resulting visualizations can all be performed through the use of a WWW browser. This allows users to access and interact with SPIRE (Spatial Paradigm for Information Retrieval and Exploration) based visualizations through a web browser regardless of what computer platforms they are running on. WebTheme is specifically designed to create databases by harvesting and processing WWW home pages available on the Internet.
Millard, W. David; Stoops, LaMar R.; Dorow, Kevin E.
Web Operational Status Boards (WebOSB) is a web-based application designed to acquire, display, and update highly dynamic status information between multiple users and jurisdictions. WebOSB supports the timely sharing of real-time status information, with constant, dynamic updates via personal computers and the Internet, between emergency operations centers (EOCs), incident command centers, and users outside the EOC who need the information (hospitals, shelters, schools). The WebOSB application far exceeds outdated information-sharing methods used by emergency workers: whiteboards, Word and Excel documents, or even locality-specific Web sites. WebOSB's capabilities include the following elements: - Secure access. Multiple users can access information on WebOSB from any personal computer with Internet access and a secure ID. Privileges are used to control access to and distribution of status information and to identify users who are authorized to add or edit information. - Simultaneous update. WebOSB provides options for users to add, display, and update dynamic information simultaneously at all locations involved in the emergency management effort. A single status board can be updated from multiple locations, enabling shelters and hospitals to post bed availability or list decontamination capability. - On-the-fly modification. Allowing the definition of an existing status board to be modified on the fly can be an asset during an emergency, where information requirements can change quickly. The status board designer feature allows an administrator to quickly define, modify, add to, and implement new status boards in minutes without needing the help of Web designers and computer programmers. - Publisher/subscriber notification. As a subscriber, each user automatically receives notification of any new information relating to specific status boards. The publisher/subscriber feature automatically notifies each user of any new
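The publisher/subscriber pattern the abstract describes can be sketched in a few lines: subscribers register a callback on a board, and every published update notifies them. This mirrors the pattern only; it is not WebOSB's actual implementation, and the field and user names are made up for the example.

```python
class StatusBoard:
    """Minimal publisher/subscriber status board (illustrative sketch).

    Each board keeps the latest value per field and a list of
    subscriber callbacks; publishing an update stores the value and
    notifies every subscriber immediately.
    """
    def __init__(self):
        self.entries = {}          # field -> latest value
        self.subscribers = []      # callbacks fired on every update

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, field, value, user):
        self.entries[field] = value
        for notify in self.subscribers:
            notify(field, value, user)

# e.g. a hospital posts bed availability; a subscribed EOC display
# receives the update without polling.
received = []
beds = StatusBoard()
beds.subscribe(lambda f, v, u: received.append((f, v, u)))
beds.publish("available_beds", 12, user="mercy_hospital")
```

A production system would add the access-control and persistence layers described above (privileges per user, audit of who edited what) on top of this core notification loop.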
Thackeray, Rosemary; Neiger, Brad L; Hanson, Carl L; McKenzie, James F
The second generation of Internet-based applications (i.e., Web 2.0), in which users control communication, holds promise to significantly enhance promotional efforts within social marketing campaigns. Web 2.0 applications can directly engage consumers in the creative process by both producing and distributing information through collaborative writing, content sharing, social networking, social bookmarking, and syndication. Web 2.0 can also enhance the power of viral marketing by increasing the speed at which consumers share experiences and opinions with progressively larger audiences. Because of the novelty and potential effectiveness of Web 2.0, social marketers may be enticed to prematurely incorporate related applications into promotional plans. However, as strategic issues such as priority audience preferences, selection of appropriate applications, tracking and evaluation, and related costs are carefully considered, Web 2.0 will expand to allow health promotion practitioners more direct access to consumers with less dependency on traditional communication channels.
Whitehead, E. James, Jr.
Discusses the WebDAV Distributed Authoring Protocol which provides standards that allow easier collaborative authoring over the World Wide Web. Topics include Hypertext Transfer Protocol (HTTP), overwrite prevention, access control, searching, metadata, XML (Extensible Markup Language), and Uniform Resource Identifier (URI). (LRW)
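WebDAV (RFC 4918) extends HTTP with methods such as PROPFIND (read properties), PROPPATCH (write properties), LOCK/UNLOCK (overwrite prevention), MKCOL, COPY and MOVE. A minimal sketch of building a PROPFIND request body with the standard library, requesting three standard DAV: properties:

```python
import xml.etree.ElementTree as ET

# A PROPFIND request carries an XML body in the DAV: namespace
# naming the properties the client wants for a resource.
ET.register_namespace("D", "DAV:")
propfind = ET.Element("{DAV:}propfind")
prop = ET.SubElement(propfind, "{DAV:}prop")
for name in ("getlastmodified", "getcontenttype", "lockdiscovery"):
    ET.SubElement(prop, "{DAV:}" + name)
body = ET.tostring(propfind, encoding="unicode")
```

The body would be sent with the PROPFIND method and a Depth header; the server answers with a 207 Multi-Status response listing the requested properties per resource.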
Rodriguez, Jose Manuel; Carro, Angel; Valencia, Alfonso; Tress, Michael L
This paper introduces the APPRIS WebServer (http://appris.bioinfo.cnio.es) and WebServices (http://apprisws.bioinfo.cnio.es). Both the web server and the web services are based around the APPRIS Database, a database that presently houses annotations of splice isoforms for five different vertebrate genomes. The APPRIS WebServer and WebServices provide access to the computational methods implemented in the APPRIS Database, while the APPRIS WebServices also allow retrieval of the annotations. The APPRIS WebServer and WebServices annotate splice isoforms with protein structural and functional features, and with data from cross-species alignments. In addition, they can use the annotations of structure, function and conservation to select a single reference isoform for each protein-coding gene (the principal protein isoform). APPRIS principal isoforms have been shown to agree overwhelmingly with the main protein isoform detected in proteomics experiments. The APPRIS WebServer allows for the annotation of splice isoforms for individual genes, and provides a range of visual representations and tools to allow researchers to identify the likely effect of splicing events. The APPRIS WebServices permit users to generate annotations automatically in high-throughput mode and to interrogate the annotations in the APPRIS Database. The APPRIS WebServices have been implemented using REST architecture to be flexible, modular and automatic.
Sharma, M. K.; Kumar, Rajeev
WebOS (Web-based operating system) is a new form of operating system. You can use your desktop as a virtual desktop on the web, accessible via a browser, with multiple integrated built-in applications that allow the user to easily manage and organize data from any location. A desktop on the web can be called a WEBtop. This paper starts with an introduction to WebOS and its benefits. We reviewed some of the most interesting WebOS offerings available today and provide a detailed description of their features, identifying a set of parameters as comparison criteria among them. A technical review is given, and a research design with future goals for building better web-based operating systems is part of this study. The findings of the study conclude the paper.
Bull, Glen; Hammond, Thomas; Ferster, Bill
Web 2.0 tools offer new possibilities for teaching and learning. PrimaryAccess is a Web 2.0 tool designed for K-12 history education. PrimaryAccess shares many of the characteristics of other Web 2.0 applications, but its educational focus makes it different from generic Web applications. Our work developing and researching PrimaryAccess has…
Virtanen, Jaana; Rasi, Päivi
In this article we present and discuss the process of developing and implementing a PBL-based course entitled Moving Images in Teaching and Learning that was held at the University of Lapland, Finland. In the course of the project, this fairly traditional face-to-face course was redesigned into a blended PBL course by integrating Web 2.0…