Earthdata Search: How Usability Drives Innovation To Enable A Broad User Base
NASA Astrophysics Data System (ADS)
Reese, M.; Siarto, J.; Lynnes, C.; Shum, D.
2017-12-01
Earthdata Search (https://search.earthdata.nasa.gov) is a modern web application that allows users to search, discover, visualize, refine, and access NASA Earth observation data using a wide array of service offerings. Its goal is to ease the technical burden on data users by providing a high-quality application that makes it simple to interact with NASA Earth observation data, freeing them to spend more effort on innovative endeavors. This talk details how we put end users first in our design and development process, focusing on usability and letting usability needs drive requirements for the underlying technology. As one practical example, Earthdata Search pairs with a lightning-fast metadata repository, allowing its UI to be extremely responsive, updating as the user changes criteria not only at the dataset level but also at the file level. This results in a better exploration experience because the time penalty of each refinement is greatly reduced. Also, since Earthdata Search uses metadata from over 35,000 datasets managed by different data providers, metadata standards, quality, and consistency vary. We found that this was negatively impacting users' search and exploration experience. We resolved this problem by introducing "humanizers": a community-driven process that both "smooths out" metadata values and provides non-jargonistic representations of some content within the Earthdata Search UI. This helps both the experienced data scientist and users who are brand new to the discipline.
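The "humanizer" idea above amounts to a community-maintained mapping from raw metadata values to friendly display labels. A minimal sketch, with all field names and sample values invented for illustration (not Earthdata Search's actual rules):

```python
# Hypothetical humanizer table: community-curated rules that smooth raw
# metadata values into non-jargonistic UI labels. Values are illustrative.
HUMANIZERS = {
    "platform": {"AQUA": "Aqua", "NOAA-20 (JPSS-1)": "NOAA-20"},
    "instrument": {"MODIS/Aqua": "MODIS"},
}

def humanize(field: str, raw_value: str) -> str:
    """Return the preferred display label for a raw metadata value,
    falling back to the value itself when no rule exists."""
    return HUMANIZERS.get(field, {}).get(raw_value, raw_value)

label = humanize("platform", "AQUA")       # smoothed label
passthrough = humanize("platform", "Terra")  # unknown values pass through
```

The fallback branch matters: unmapped values must still display, so an incomplete community table never hides data.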
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
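The stepwise procedure described above can be sketched as a loop in which each step tunes one subset of parameters against its own objective, then freezes the result before the next step. This is only an illustration of the *stepwise* structure: a simple seeded random search stands in for the Shuffled Complex Evolution algorithm, and the toy model and step definitions are invented.

```python
import random

# Toy "hydrologic model": two parameters, one output (invented for the sketch).
def model(params):
    return params["a"] * 2 + params["b"]

def stepwise_calibrate(steps, params, n_iter=2000, seed=1):
    """Calibrate parameter subsets one step at a time, holding earlier
    steps' calibrated values fixed (random search stands in for SCE)."""
    rng = random.Random(seed)
    for names, objective in steps:               # one calibration step at a time
        best = {n: params[n] for n in names}
        best_err = objective(model(params))
        for _ in range(n_iter):
            for n in names:                      # perturb only this step's params
                params[n] = rng.uniform(0.0, 10.0)
            err = objective(model(params))
            if err < best_err:
                best_err, best = err, {n: params[n] for n in names}
        params.update(best)                      # freeze this step's best values
    return params

steps = [
    (["a"], lambda out: abs(out - 8)),   # step 1: fit one "measured" value
    (["b"], lambda out: abs(out - 11)),  # step 2: refine against a second one
]
calibrated = stepwise_calibrate(steps, {"a": 0.0, "b": 0.0})
```

The key property mirrored from the report is that intermediate model states are constrained step by step, so the final state is consistent with all the measured values used along the way.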
Triggers and monitoring in intelligent personal health record.
Luo, Gang
2012-10-01
Although Web-based personal health records (PHRs) have been widely deployed, existing ones have limited intelligence. Previously, we introduced expert system technology and Web search technology into the PHR domain and proposed the concept of an intelligent PHR (iPHR). iPHR provides personalized healthcare information to facilitate users' activities of daily living. The current iPHR is passive and follows the pull model of information distribution. This paper introduces triggers and monitoring into iPHR to make it active. Our idea is to let medical professionals pre-compile triggers and store them in iPHR's knowledge base. Each trigger corresponds to an abnormal event that may have potential medical impact. iPHR keeps collecting, processing, and analyzing the user's medical data from various sources such as wearable sensors. Whenever an abnormal event is detected in the user's medical data, the corresponding trigger fires and the related personalized healthcare information is pushed to the user using natural language generation technology, expert system technology, and Web search technology.
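The trigger mechanism can be sketched as a small rule engine: pre-compiled rules live in a knowledge base, each incoming reading is checked against them, and a firing rule pushes its message. Thresholds, field names, and messages below are invented for illustration, not clinical guidance.

```python
# Hypothetical pre-compiled triggers: each pairs an abnormal-event test
# with the personalized information to push when it fires.
TRIGGERS = [
    {"field": "heart_rate", "test": lambda v: v > 120,
     "message": "Resting heart rate is unusually high; consider contacting your clinician."},
    {"field": "glucose", "test": lambda v: v < 60,
     "message": "Blood glucose is low; consider a fast-acting carbohydrate."},
]

def process_reading(reading):
    """Check one sensor reading against every trigger; return fired messages."""
    fired = []
    for trig in TRIGGERS:
        value = reading.get(trig["field"])
        if value is not None and trig["test"](value):
            fired.append(trig["message"])
    return fired

alerts = process_reading({"heart_rate": 135, "glucose": 80})  # one trigger fires
```

In the full system the pushed text would then be elaborated by the natural language generation and Web search components rather than delivered as a canned string.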
Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)
NASA Astrophysics Data System (ADS)
Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.
2015-12-01
The DIAS/CEOS Water Portal is part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers, and officers such as river administrators. One function of this portal is one-stop search and access to various water-related data archived at multiple data centers located all over the world. The portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on-the-fly and lets users download the data and view rendered images/plots. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as OGC CSW, OpenSearch, and the OPeNDAP protocol to enable these functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, the portal's brokering function enables users to search across various data centers at one time. This portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.
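The brokering pattern described here can be sketched as fanning one query out to heterogeneous centers, adapting each native response to a common record shape, and merging. The two in-memory "centers" below stand in for real OpenSearch/CSW endpoints, and every dataset name is invented:

```python
# Stand-in for an OpenSearch-style endpoint (invented catalog).
def center_a(keyword):
    data = [{"title": "River discharge, Mekong", "url": "http://a/1"}]
    return [d for d in data if keyword.lower() in d["title"].lower()]

# Stand-in for a CSW-style endpoint with a different native record shape;
# the broker adapts it to the common {title, url} form.
def center_b(keyword):
    data = [{"name": "Mekong precipitation grid", "link": "http://b/9"}]
    return [{"title": d["name"], "url": d["link"]}
            for d in data if keyword.lower() in d["name"].lower()]

def broker_search(keyword, centers=(center_a, center_b)):
    """One-stop search: query every center, return one merged result list."""
    results = []
    for center in centers:
        results.extend(center(keyword))
    return results

hits = broker_search("mekong")  # results from both centers, one shape
```

The adapter-per-center design is what lets new archives join without changing the portal's search interface.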
Planetary Data Systems (PDS) Imaging Node Atlas II
NASA Technical Reports Server (NTRS)
Stanboli, Alice; McAuley, James M.
2013-01-01
The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through more than 8 million planetary image files. This software is a three-tier Web application comprising a search engine backend (MySQL, Java), a Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. The application allows for the search, retrieval, and download of planetary images and associated metadata from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions with similar targets. If desired, the end user can also use a mission-specific view of the Atlas; the mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas: it is a multi-mission search engine. The tool includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. It lets the end user query information about each image while ignoring data of no interest, and users can reduce the number of images to examine by defining an area of interest with latitude and longitude ranges.
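The area-of-interest filter at the end of this description reduces to a latitude/longitude range test over image footprints. A minimal sketch, with invented image records and the simplifying assumption that each image is indexed by its footprint center:

```python
# Invented image records; real PIA entries carry full footprint geometry.
IMAGES = [
    {"id": "PIA001", "mission": "2001 Mars Odyssey", "lat": -14.6, "lon": 175.5},
    {"id": "PIA002", "mission": "Viking Orbiter",    "lat": 48.0,  "lon": 226.0},
]

def filter_by_area(images, lat_range, lon_range):
    """Keep only images whose center falls inside the user's lat/lon box."""
    lat_min, lat_max = lat_range
    lon_min, lon_max = lon_range
    return [img for img in images
            if lat_min <= img["lat"] <= lat_max
            and lon_min <= img["lon"] <= lon_max]

subset = filter_by_area(IMAGES, (-30, 0), (170, 180))  # one image survives
```

A real implementation would also handle longitude wrap-around at 0/360 degrees and footprints that straddle the box edge; those cases are omitted here.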
Space Communications Artificial Intelligence for Link Evaluation Terminal (SCAILET)
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh
1991-01-01
A software application to assist end-users of the Link Evaluation Terminal (LET) for satellite communication is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving, 220/110 Mbps capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET and ACTS are being developed at the NASA Lewis Research Center. The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit pattern as a modulated signal to the satellite. By comparing the transmitted bit pattern with the received bit pattern, HBR LET can determine the bit error rate (BER) under various atmospheric conditions. An algorithm for power augmentation is applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions. Programming scripts, defined by the design engineer, set up the HBR LET terminal by programming subsystem devices through IEEE-488 interfaces. However, the scripts are cryptic, difficult to use, hard to maintain, and require a steep learning curve. The combination of the learning curve and the complexities involved in editing the script files may discourage end-users from utilizing the full capabilities of the HBR LET system. An intelligent assistant component of SCAILET that addresses critical end-user needs in the programming of the HBR LET system, as anticipated by its developers, is described. A close look is taken at the various steps involved in writing EC&M software for a C&PM computer and at how the intelligent assistant improves the HBR LET system and enhances the end-user's ability to perform experiments.
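The BER measurement described above reduces to comparing the transmitted bit pattern with the received one, position by position. A minimal sketch (the sample patterns are invented):

```python
def bit_error_rate(sent, received):
    """Fraction of bit positions that differ between two equal-length patterns."""
    if len(sent) != len(received):
        raise ValueError("patterns must be the same length")
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 0, 1, 1]   # two flipped bits, e.g. under rain fade
ber = bit_error_rate(sent, received)   # 2 errors out of 8 bits -> 0.25
```

In the terminal, a rising BER under adverse conditions is what drives the power-augmentation algorithm to raise uplink signal strength.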
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and compensation for anomalous data. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system that automatically generates co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image from an entire 90-GB, 32-MB/s SAR raw data tape overnight without operator intervention. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates products according to each data processing request stored in the database via a queue management system. Users obtain automatically generated, coregistered multi-frequency images as the software processes polarimetric and/or interferometric SAR data in ground and/or slant projection, according to user processing requests, for one of the 12 radar modes.
Space Communication Artificial Intelligence for Link Evaluation Terminal (SCAILET)
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.
1992-01-01
A software application to assist end-users of the high burst rate (HBR) link evaluation terminal (LET) for satellite communications is being developed. The HBR LET system, developed at NASA Lewis Research Center, is an element of the Advanced Communications Technology Satellite (ACTS) Project. The HBR LET is divided into seven major subsystems, each with its own expert. Programming scripts, test procedures defined by design engineers, set up the HBR LET system. These programming scripts are cryptic, hard to maintain, and require a steep learning curve. The scripts were developed by system engineers who will not be available to the end-users of the system. To increase end-user productivity, a friendly interface needs to be added to the system. One possible solution is to provide the user with adequate documentation to perform the needed tasks. Given the complexity of this system, however, the vast amount of documentation needed would be overwhelming and the information would be hard to retrieve. With limited resources, maintenance is another reason for not using this form of documentation. An advanced form of interaction is therefore being explored using current computer techniques. This application, which incorporates a combination of multimedia and artificial intelligence (AI) techniques to provide end-users with an intelligent interface to the HBR LET system, comprises an intelligent assistant, an intelligent tutoring system, and hypermedia documentation. The intelligent assistant and tutoring systems address the critical programming needs of the end-user.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1992-01-01
The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.
Environmental factor(tm) system: RCRA hazardous waste handler information (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
Environmental Factor(trademark) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management, and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status, and more. (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action. (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery. (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting. (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases and search and retrieval software on two CD-ROMs, an installation diskette, and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting.
16 CFR 1212.3 - Requirements for multi-purpose lighters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... reset when or before the user lets go of the lighter. (5) The child-resistant mechanism of a multi... operation can occur; (ii) Have a manual mechanism for turning off the flame when the hands-free function is used; and either (iii) Automatically reset when or before the user lets go of the lighter when the...
Environmental Factor(tm) system: RCRA hazardous waste handler information (on cd-rom). Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-04-01
Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management, and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status, and more; (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery; (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases and search and retrieval software on two CD-ROMs, an installation diskette, and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting. Hotline support is also available at no additional charge.
Environmental Factor{trademark} system: RCRA hazardous waste handler information
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-03-01
Environmental Factor{trademark} RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management, and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status, and more; (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery; (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases and search and retrieval software on two CD-ROMs, an installation diskette, and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting. Hotline support is also available at no additional charge.
NASA Astrophysics Data System (ADS)
Zaslavsky, I.; Valentine, D.; Richard, S. M.; Gupta, A.; Meier, O.; Peucker-Ehrenbrink, B.; Hudman, G.; Stocks, K. I.; Hsu, L.; Whitenack, T.; Grethe, J. S.; Ozyurt, I. B.
2017-12-01
EarthCube Data Discovery Hub (DDH) is an EarthCube Building Block project using technologies developed in CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) to enable geoscience users to explore a growing portfolio of EarthCube-created and other geoscience-related resources. Over 1 million metadata records are available for discovery through the project portal (cinergi.sdsc.edu). These records are retrieved from data facilities, including federal, state and academic sources, or contributed by geoscientists through workshops, surveys, or other channels. CINERGI metadata augmentation pipeline components 1) provide semantic enhancement based on a large ontology of geoscience terms, using text analytics to generate keywords with references to ontology classes, 2) add spatial extents based on place names found in the metadata record, and 3) add organization identifiers to the metadata. The records are indexed and can be searched via a web portal and standard search APIs. The added metadata content improves discoverability and interoperability of the registered resources. Specifically, the addition of ontology-anchored keywords enables faceted browsing and lets users navigate to datasets related by variables measured, equipment used, science domain, processes described, geospatial features studied, and other dataset characteristics that are generated by the pipeline. DDH also lets data curators access and edit the automatically generated metadata records using the CINERGI metadata editor, accept or reject the enhanced metadata content, and consider it in updating their metadata descriptions. 
We consider several complex data discovery workflows, in environmental seismology (quantifying sediment and water fluxes using seismic data), marine biology (determining available temperature, location, weather and bleaching characteristics of coral reefs related to measurements in a given coral reef survey), and river geochemistry (discovering observations relevant to geochemical measurements outside the tidal zone, given specific discharge conditions).
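The semantic-enhancement step of the pipeline can be sketched as scanning a record's free text for terms found in an ontology and attaching them as keywords with class references. The tiny ontology, the IRIs, and the sample record below are all invented for illustration, not CINERGI's actual vocabulary:

```python
# Hypothetical ontology fragment: surface term -> ontology class IRI.
ONTOLOGY = {
    "seismometer": "http://example.org/geo#Seismometer",
    "sediment": "http://example.org/geo#Sediment",
}

def augment(record):
    """Attach ontology-anchored keywords found in a record's abstract text."""
    text = record["abstract"].lower()
    keywords = [{"term": term, "class": iri}
                for term, iri in ONTOLOGY.items() if term in text]
    return {**record, "keywords": keywords}

rec = augment({"title": "Flux study",
               "abstract": "Sediment fluxes estimated from seismometer noise."})
```

It is these ontology-anchored keywords, rather than the raw text, that make faceted browsing by variable, equipment, or process possible downstream.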
The DIAS/CEOS Water Portal, distributed system using brokering architecture
NASA Astrophysics Data System (ADS)
Miura, Satoko; Sekioka, Shinichi; Kuroiwa, Kaori; Kudo, Yoshiyuki
2015-04-01
The DIAS/CEOS Water Portal is one of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers, and officers such as river administrators. This portal has two main functions: one is to search and access data, and the other is to register and share use cases that draw on datasets provided via this portal. This presentation focuses on the first function, searching and accessing data. The portal system is distributed in the sense that, while the portal itself is located in Tokyo, the data reside at archive centers that are globally distributed. For example, some in-situ data are archived at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory in Boulder, Colorado, USA. The NWP station time series and global gridded model output data are archived at the Max Planck Institute for Meteorology (MPIM) in cooperation with the World Data Center for Climate in Hamburg, Germany. Part of the satellite data is archived in DIAS storage at the University of Tokyo, Japan. The portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from the distributed data centers on-the-fly and lets users download the data and view rendered images/plots. Although some data centers have unique metadata formats and/or data search protocols, the portal's brokering function enables users to search across various data centers at one time, like one-stop shopping. The portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat), the OpenSearch protocol, and the OPeNDAP protocol to enable the above functions. 
Details on how it works will be introduced during the presentation. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.
User-centered design and the development of patient decision aids: protocol for a systematic review.
Witteman, Holly O; Dansokho, Selma Chipenda; Colquhoun, Heather; Coulter, Angela; Dugas, Michèle; Fagerlin, Angela; Giguere, Anik Mc; Glouberman, Sholom; Haslett, Lynne; Hoffman, Aubri; Ivers, Noah; Légaré, France; Légaré, Jean; Levin, Carrie; Lopez, Karli; Montori, Victor M; Provencher, Thierry; Renaud, Jean-Sébastien; Sparling, Kerri; Stacey, Dawn; Vaisson, Gratianne; Volk, Robert J; Witteman, William
2015-01-26
Providing patient-centered care requires that patients partner in their personal health-care decisions to the full extent desired. Patient decision aids facilitate processes of shared decision-making between patients and their clinicians by presenting relevant scientific information in balanced, understandable ways, helping clarify patients' goals, and guiding decision-making processes. Although international standards stipulate that patients and clinicians should be involved in decision aid development, little is known about how such involvement currently occurs, let alone best practices. This systematic review consisting of three interlinked subreviews seeks to describe current practices of user involvement in the development of patient decision aids, compare these to practices of user-centered design, and identify promising strategies. A research team that includes patient and clinician representatives, decision aid developers, and systematic review method experts will guide this review according to the Cochrane Handbook and PRISMA reporting guidelines. A medical librarian will hand search key references and use a peer-reviewed search strategy to search MEDLINE, EMBASE, PubMed, Web of Science, the Cochrane Library, the ACM library, IEEE Xplore, and Google Scholar. We will identify articles across all languages and years describing the development or evaluation of a patient decision aid, or the application of user-centered design or human-centered design to tools intended for patient use. Two independent reviewers will assess article eligibility and extract data into a matrix using a structured pilot-tested form based on a conceptual framework of user-centered design. We will synthesize evidence to describe how research teams have included users in their development process and compare these practices to user-centered design methods. 
If data permit, we will develop a measure of the user-centeredness of development processes and identify practices that are likely to be optimal. This systematic review will provide evidence of current practices to inform approaches for involving patients and other stakeholders in the development of patient decision aids. We anticipate that the results will help move towards the establishment of best practices for the development of patient-centered tools and, in turn, help improve the experiences of people who face difficult health decisions. PROSPERO CRD42014013241.
NASA Astrophysics Data System (ADS)
Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.
2003-12-01
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF.
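The grid search DIRT performs can be sketched as scoring every pre-calculated model against the observed fluxes and keeping the best fit. The chi-square scoring and the tiny model grid below are simplified illustrations, not DIRT's actual database or fitting code:

```python
def chi_square(observed, model):
    """Sum of squared residuals between observed and model fluxes
    (uniform weighting; real fits would weight by measurement error)."""
    return sum((o - m) ** 2 for o, m in zip(observed, model))

# Invented stand-in for the pre-calculated model database:
# (physical parameters) -> model fluxes at the observed wavelengths.
MODEL_GRID = {
    ("Tdust=100K", "silicate"): [1.0, 2.0, 3.0],
    ("Tdust=300K", "graphite"): [2.5, 2.0, 1.0],
}

def best_fit(observed, grid=MODEL_GRID):
    """Search the whole grid; return the best-fitting (params, fluxes) pair."""
    return min(grid.items(), key=lambda kv: chi_square(observed, kv[1]))

params, fluxes = best_fit([2.4, 2.1, 1.1])
```

The instrument module described above would slot in just before scoring, convolving each model's fluxes with the selected instrument's spatial and spectral responses so the comparison matches what the instrument actually measures.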
Human interface to large multimedia databases
NASA Astrophysics Data System (ADS)
Davis, Ben; Marks, Linn; Collins, Dave; Mack, Robert; Malkin, Peter; Nguyen, Tam
1994-04-01
The emergence of high-speed networking for multimedia will have the effect of turning the computer screen into a window on a very large information space. As this information space increases in size and complexity, providing users with easy and intuitive means of accessing information will become increasingly important. Providing access to large amounts of text has been the focus of work for hundreds of years and has resulted in the evolution of a set of standards, from the Dewey Decimal System for libraries to the recently proposed ANSI standards for representing information on-line: KIF (Knowledge Interchange Format) and CGs (Conceptual Graphs). Certain problems remain unsolved by these efforts, though: how to let users know the contents of the information space, so that they know whether or not they want to search it in the first place; how to facilitate browsing; and, more specifically, how to facilitate visual browsing. These issues are particularly important for users in educational contexts and have been the focus of much of our recent work. In this paper we discuss some of the solutions we have prototyped: specifically, visual means, visual browsers, and visual definitional sequences.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1992-01-01
The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.
User Authentication and Authorization Challenges in a Networked Library Environment.
ERIC Educational Resources Information Center
Machovec, George S.
1997-01-01
Discusses computer user authentication and authorization issues when libraries need to let valid users access databases and information services without making the process too difficult for either party. Common solutions are explained, including filtering, passwords, and kerberos (cryptographic authentication scheme for secure use over public…
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various experiments by government agencies, universities, and industry. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included are an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan are command transmission, telemetry reception, error detection, and error recovery procedures.
Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.
Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen
2012-02-01
Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our first goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet required response times. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
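The multi-prefix idea can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the function names are our own, and the ranking and response-time optimizations the authors describe are omitted.

```python
def baseline_suggest(query, terms):
    """Baseline: suggest only terms that extend the typed string to the right."""
    return [t for t in terms if t.startswith(query)]

def multi_prefix_suggest(query, terms):
    """Multi-prefix matching: every typed token must be a prefix of
    some word in the candidate term."""
    tokens = query.split()
    matches = []
    for term in terms:
        words = term.split()
        if all(any(w.startswith(tok) for w in words) for tok in tokens):
            matches.append(term)
    return matches

# Hypothetical mini-vocabulary standing in for SNOMED CT terms
terms = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumor"]
print(multi_prefix_suggest("opt ner me", terms))  # ['optic nerve meningioma']
print(baseline_suggest("opt ner me", terms))      # []
```

As in the abstract's example, "opt ner me" retrieves "optic nerve meningioma" under multi-prefix matching but yields nothing under the rightward-extension baseline.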
Deploying the ODISEES Ontology-guided Search in the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Huffer, E.; Gleason, J. L.; Cotnoir, M.; Spaulding, R.; Deardorff, G.
2016-12-01
Robust, semantically rich metadata can support data discovery and access, and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Despite this, for users not already familiar with the data in a given archive, most metadata is insufficient to help them find appropriate data for their projects. With this in mind, the Ontology-driven Interactive Search Environment (ODISEES) Data Discovery Portal was developed to enable users to find and download data variables that satisfy precise, parameter-level criteria, even when they know little or nothing about the naming conventions employed by data providers, or where suitable data might be archived. ODISEES relies on an Earth science ontology and metadata repository that provide an ontological framework for describing NASA data holdings with enough detail and fidelity to enable researchers to find, compare and evaluate individual data variables. Users can search for data by indicating the specific parameters desired, and comparing the results in a table that lets them quickly determine which data is most suitable. ODISEES and OLYMPUS, a tool for generating the semantically enhanced metadata used by ODISEES, are being developed in collaboration with the NASA Earth Exchange (NEX) project at the NASA Ames Research Center to prototype a robust data discovery and access service that could be made available to NEX users. NEX is a collaborative platform that provides researchers with access to TB to PB-scale datasets and analysis tools to operate on those data. By integrating ODISEES into the NEX Web Portal we hope to enable NEX users to locate datasets relevant to their research and download them directly into the NAS environment, where they can run applications using those datasets on the NAS supercomputers. 
This poster will describe the prototype integration of ODISEES into the NEX portal development environment, the mechanism implemented to use NASA APIs to retrieve data, and the approach to transfer data into the NAS supercomputing environment. Finally, we will describe the end-to-end demonstration of the capabilities implemented. This work was funded by the Advanced Information Systems Technology Program of NASA's Research Opportunities in Space and Earth Science.
SETI group led by Barney Oliver, John Wolfe and John Billingham (in middle standing) lead a 1976
NASA Technical Reports Server (NTRS)
1976-01-01
A SETI group led by Barney Oliver, John Wolfe, and John Billingham (standing in the middle) leads a 1976 discussion on the best strategies in the Search for Extraterrestrial Intelligence. Joining the discussion are, L-R: Charles Seeger, Dario Black, Mary Connors, (Oliver, Wolfe, Billingham), and Larry Lesyna; seated, Mark Stull.
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.
1991-01-01
A software application to assist end-users of the link evaluation terminal (LET) for satellite communications is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving (220/110 Mbps) capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit-pattern as a modulated signal to the satellite. The HBR LET can determine the bit error rate (BER) under various atmospheric conditions by comparing the transmitted bit pattern with the received bit pattern. An algorithm for power augmentation will be applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions.
On the Lawfulness of the Decision to Terminate Memory Search
ERIC Educational Resources Information Center
Harbison, J. Isaiah; Dougherty, Michael R.; Davelaar, Eddy J.; Fayyad, Basma
2009-01-01
Nearly every memory retrieval episode ends with a decision to terminate memory search. Yet, no research has investigated whether these search termination decisions are systematic, let alone whether they are made consistent with a particular rule. In the present paper, we used a modified free-recall paradigm to examine the decision to terminate…
SCAILET - An intelligent assistant for satellite ground terminal operations
NASA Technical Reports Server (NTRS)
Shahidi, A. K.; Crapo, J. A.; Schlegelmilch, R. F.; Reinhart, R. C.; Petrik, E. J.; Walters, J. L.; Jones, R. E.
1992-01-01
Space communication artificial intelligence for the link evaluation terminal (SCAILET) is an experimenter interface to the link evaluation terminal (LET) developed by NASA through the application of artificial intelligence to an advanced ground terminal. The high-burst-rate (HBR) LET provides the required capabilities for wideband communications experiments with the advanced communications technology satellite (ACTS). The HBR-LET terminal consists of seven major subsystems and is controlled and monitored by a minicomputer through an IEEE-488 or RS-232 interface. Programming scripts configure the HBR-LET and allow data acquisition, but they are difficult to use, and therefore the full capabilities of the system are not utilized. An intelligent assistant module was developed as part of the SCAILET software and solves problems encountered during configuration of the HBR-LET system. This assistant is a graphical interface with an expert system running in the background; it lets users configure instrumentation, run programming scripts, and consult reference documentation. Its simplicity of use makes SCAILET a superior interface to the ASCII terminal, and continuous monitoring allows nearly flawless configuration and execution of HBR-LET experiments.
Larry J. Gangi
2006-01-01
The FIREMON Analysis Tools program is designed to let the user perform grouped or ungrouped summary calculations of single measurement plot data, or statistical comparisons of grouped or ungrouped plot data taken at different sampling periods. The program allows the user to create reports and graphs, save and print them, or cut and paste them into a word processor....
Developing Formal Object-oriented Requirements Specifications: A Model, Tool and Technique.
ERIC Educational Resources Information Center
Jackson, Robert B.; And Others
1995-01-01
Presents a formal object-oriented specification model (OSS) for computer software system development that is supported by a tool that automatically generates a prototype from an object-oriented analysis model (OSA) instance, lets the user examine the prototype, and permits the user to refine the OSA model instance to generate a requirements…
Design and Development of a User Interface for the Dynamic Model of Software Project Management.
1988-03-01
Describes the user interface for the Dynamica model of software project management, which was designed through an iterative process of prototyping. The interface lets the user save output to a directory of the user's choice for future use, select among eight choices (including estimated and actual project size), and manipulate variables in the Dynamica model.
A Compositional Relevance Model for Adaptive Information Retrieval
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)
1994-01-01
There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However, most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and does not require a priori statistical computation or an extended training period. It is currently being implemented into the Computer Integrated Documentation system, which enables integration of various technical documents in a hypertext framework.
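The record-feedback-then-generalize behavior described above can be sketched with a simple additive scoring scheme. The class name, weights, and generalization levels below are our own illustrative choices, not the model defined in the paper:

```python
from collections import defaultdict

class RelevanceNetwork:
    """Sketch of feedback-driven contextual relevance (hypothetical design):
    evidence is accumulated per (query, context) pair and also at two
    generalized levels, so similar queries and contexts can reuse it."""

    def __init__(self):
        self.scores = defaultdict(float)

    def feedback(self, query, context, reference, weight=1.0):
        # Record relevance at the specific level and at generalized levels.
        for key in ((query, context), (query, None), (None, context)):
            self.scores[(key, reference)] += weight

    def rank(self, query, context, references):
        # Specific (query, context) evidence counts double in this sketch.
        def score(ref):
            return (2.0 * self.scores[((query, context), ref)]
                    + self.scores[((query, None), ref)]
                    + self.scores[((None, context), ref)])
        return sorted(references, key=score, reverse=True)

net = RelevanceNetwork()
net.feedback("engine start", "launch procedures", "doc-12")
net.feedback("engine start", "orbit operations", "doc-7")
print(net.rank("engine start", "launch procedures", ["doc-7", "doc-12"]))
# ['doc-12', 'doc-7']
```

Because feedback is also stored at the generalized (query, None) level, doc-12 would still receive some score for the same query in an unseen context, which is the generalization behavior the abstract describes.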
Weber, Alexander E; Zuke, William; Mayer, Erik N; Forsythe, Brian; Getgood, Alan; Verma, Nikhil N; Bach, Bernard R; Bedi, Asheesh; Cole, Brian J
2018-02-01
There has been an increasing interest in lateral-based soft tissue reconstructive techniques as augments to anterior cruciate ligament reconstruction (ACLR). The objective of these procedures is to minimize anterolateral rotational instability of the knee after surgery. Despite the relatively rapid increase in surgical application of these techniques, many clinical questions remain. To provide a comprehensive update on the current state of these lateral-based augmentation procedures by reviewing the origins of the surgical techniques, the biomechanical data to support their use, and the clinical results to date. Systematic review. A systematic search of the literature was conducted via the Medline, EMBASE, Scopus, SportDiscus, and CINAHL databases. The search was designed to encompass the literature on lateral extra-articular tenodesis (LET) procedures and the anterolateral ligament (ALL) reconstruction. Titles and abstracts were reviewed for relevance and sorted into the following categories: anatomy, biomechanics, imaging/diagnostics, surgical techniques, and clinical outcomes. The search identified 4016 articles. After review for relevance, 31, 53, 27, 35, 45, and 78 articles described the anatomy, biomechanics, imaging/diagnostics, surgical techniques, and clinical outcomes of either LET procedures or the ALL reconstruction, respectively. A multitude of investigations were available, revealing controversy in addition to consensus in several categories. The level of evidence obtained from this search was not adequate for systematic review or meta-analysis; thus, a current concepts review of the anatomy, biomechanics, imaging, surgical techniques, and clinical outcomes was performed. Histologically, the ALL appears to be a distinct structure that can be identified with advanced imaging techniques. Biomechanical evidence suggests that the anterolateral structures of the knee, including the ALL, contribute to minimizing anterolateral rotational instability. 
Cadaveric studies of combined ACLR-LET procedures demonstrated overconstraint of the knee; however, these findings have yet to be reproduced in the clinical literature. The current indications for LET augmentation in the setting of ACLR and the effect on knee kinematic and joint preservation should be the subject of future research.
SAFOD Brittle Microstructure and Mechanics Knowledge Base (SAFOD BM2KB)
NASA Astrophysics Data System (ADS)
Babaie, H. A.; Hadizadeh, J.; di Toro, G.; Mair, K.; Kumar, A.
2008-12-01
We have developed a knowledge base to store and present the data collected by a group of investigators studying the microstructures and mechanics of brittle faulting using core samples from the SAFOD (San Andreas Fault Observatory at Depth) project. The investigations are carried out with a variety of analytical and experimental methods primarily to better understand the physics of strain localization in fault gouge. The knowledge base instantiates a specially designed brittle rock deformation ontology developed at Georgia State University. The inference rules embedded in the semantic web languages used in our ontology, such as OWL, RDF, and RDFS, allow the Pellet reasoner used in this application to derive additional truths about the ontology and knowledge of this domain. Access to the knowledge base is via a public website, which is designed to provide the knowledge acquired by all the investigators involved in the project. The stored data will be products of studies such as: experiments (e.g., high-velocity friction experiments), analyses (e.g., microstructural, chemical, mass transfer, mineralogical, surface, image, texture), microscopy (optical, HRSEM, FESEM, HRTEM), tomography, porosity measurement, microprobe, and cathodoluminescence. Data about laboratories, experimental conditions, methods, assumptions, equipment, and the mechanical properties and lithology of the studied samples will also be presented on the website per investigation. The ontology was modeled applying the UML (Unified Modeling Language) in Rational Rose, and implemented in OWL-DL (Ontology Web Language) using the Protégé ontology editor. The UML model was converted to OWL-DL by first mapping it to Ecore (.ecore) and Generator model (.genmodel) with the help of the EMF (Eclipse Modeling Framework) plugin in Eclipse. The Ecore model was then mapped to a .uml file, which later was converted into an .owl file and subsequently imported into the Protégé ontology editing environment.
The web interface was developed in Java using Eclipse as the IDE. The web interfaces to query and submit data were implemented applying JSP, servlets, JavaScript, and AJAX. The Jena API, a Java framework for building Semantic Web applications, was used to develop the web interface. Jena provided a programmatic environment for RDF, RDFS, OWL, and a SPARQL query engine. Building web applications with AJAX helps retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. The application was deployed on an Apache Tomcat server at GSU. The SAFOD BM2KB website provides user-friendly search, submit, feedback, and other services. The General Search option allows users to search the knowledge base by selecting the classes (e.g., Experiment, Surface Analysis), their respective attributes (e.g., apparatus, date performed), and the relationships to other classes (e.g., Sample, Laboratory). The Search by Sample option allows users to search the knowledge base based on sample number. The Search by Investigator option lets users search the knowledge base by choosing an investigator who is involved in this project. The website also allows users to submit new data. The Submit Data option opens a page where users can submit SAFOD data to our knowledge base by selecting specific classes and attributes. The submitted data then become available for query as part of the knowledge base. The SAFOD BM2KB can be accessed from the main SAFOD website.
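The deployed interface is built on Jena and SPARQL in Java; as a self-contained illustration of the kind of triple-pattern query behind an option such as Search by Sample, here is a minimal pure-Python sketch. All identifiers and data are hypothetical, not the actual SAFOD ontology terms:

```python
# A tiny in-memory triple store standing in for the RDF knowledge base.
triples = {
    ("exp1", "type", "Experiment"),
    ("exp1", "sampleNumber", "G27"),
    ("exp1", "apparatus", "rotary shear"),
    ("exp2", "type", "Experiment"),
    ("exp2", "sampleNumber", "G30"),
}

def match(pattern, store):
    """Return triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard, like a variable in a SPARQL basic graph pattern."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Search by Sample": find the experiments performed on sample G27
hits = match((None, "sampleNumber", "G27"), triples)
print([s for s, _, _ in hits])  # ['exp1']
```

A SPARQL engine such as Jena's generalizes this by joining several patterns that share variables, which is how the General Search option can combine a class, its attributes, and its relationships to other classes in one query.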
Software Architecture for a Virtual Environment for Nano Scale Assembly (VENSA).
Lee, Yong-Gu; Lyons, Kevin W; Feng, Shaw C
2004-01-01
A Virtual Environment (VE) uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. The information flow between the user and the VE is bidirectional and the user can influence the environment. The software development of a VE requires orchestrating multiple peripherals and computers in a synchronized way in real time. Although a multitude of useful software components for VEs exists, many of these are packaged within a complex framework and can not be used separately. In this paper, an architecture is presented which is designed to let multiple frameworks work together while being shielded from the application program. This architecture, which is called the Virtual Environment for Nano Scale Assembly (VENSA), has been constructed for interfacing with an optical tweezers instrument for nanotechnology development. However, this approach can be generalized for most virtual environments. Through the use of VENSA, the programmer can rely on existing solutions and concentrate more on the application software design.
Single event upset susceptibility testing of the Xilinx Virtex II FPGA
NASA Technical Reports Server (NTRS)
Carmichael, C.; Swift, C.; Yui, G.
2002-01-01
Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM, and user flip-flop cells to determine their static single-event upset susceptibility using LETs of 1.2 to 60 MeV·cm2/mg. A software program specifically designed to count errors in the FPGA was used to reveal L(1/e) values (the LET at which the cross section is 1/e times the saturation cross section) and single-event functional-interrupt failures.
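To illustrate the L(1/e) definition, the following sketch solves for the LET at which a fitted cross-section curve equals 1/e of its saturation value. The Weibull form is a conventional choice for SEU cross-section fits, and the parameter values below are illustrative assumptions, not measurements from this test.

```python
import math

def weibull_xsec(let, sigma_sat, l0, w, s):
    """Weibull fit to a measured SEU cross-section curve
    (conventional form; parameters here are illustrative only)."""
    if let <= l0:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((let - l0) / w) ** s)))

def let_1_over_e(sigma_sat, l0, w, s, lo=0.0, hi=200.0):
    """Bisect for the LET where the cross section equals sigma_sat / e.
    Valid because the Weibull curve is monotone increasing above onset."""
    target = sigma_sat / math.e
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if weibull_xsec(mid, sigma_sat, l0, w, s) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical fit with onset at 1.2 MeV·cm2/mg, the lowest LET tested
print(round(let_1_over_e(1.0, 1.2, 20.0, 2.0), 2))  # 14.75
```

The same bisection works for any monotone fit, so the choice of Weibull parameters only moves the answer, not the method.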
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
User-Centered Indexing for Adaptive Information Access
NASA Technical Reports Server (NTRS)
Chen, James R.; Mathe, Nathalie
1996-01-01
We are focusing on information access tasks characterized by large volumes of hypermedia-connected technical documents, a need for rapid and effective access to familiar information, and long-term interaction with evolving information. The problem for technical users is to build and maintain a personalized task-oriented model of the information to quickly access relevant information. We propose a solution which provides user-centered adaptive information retrieval and navigation. This solution supports users in customizing information access over time. It is complementary to information discovery methods which provide access to new information, since it lets users customize future access to previously found information. It relies on a technique, called the Adaptive Relevance Network, which creates and maintains a complex indexing structure to represent each user's personal information access maps, organized by concepts. This technique is integrated within the Adaptive HyperMan system, which helps NASA Space Shuttle flight controllers organize and access large amounts of information. It allows users to select and mark any part of a document as interesting, and to index that part with user-defined concepts. Users can then do subsequent retrieval of marked portions of documents. This functionality allows users to define and access personal collections of information, which are dynamically computed. The system also supports collaborative review by letting users share group access maps. The adaptive relevance network provides long-term adaptation based both on usage and on explicit user input. The indexing structure is dynamic and evolves over time. Learning and generalization support flexible retrieval of information under similar concepts. The network is geared towards more recent information access, and automatically manages its size in order to maintain rapid access when scaling up to large hypermedia spaces. We present results of simulated learning experiments.
Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel
2012-01-01
Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753
Guide star catalogue data retrieval software 2
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Malkov, O. YU.
1992-01-01
The Guide Star Catalog (GSC), being the largest astronomical catalog to date, is widely used by the astronomical community for all sorts of applications, such as statistical studies of certain sky regions, searches for counterparts to observational phenomena, and generation of finder charts. Its format (two CD-ROMs) requires minimal hardware and is ideally suited for all sorts of conditions, especially observations. Unfortunately, the actual GSC data is not easily accessible. It takes the form of FITS tables, and the coordinates of the objects are given in one coordinate system (equinox 2000). The included reading software is rudimentary at best. Thus, even generation of a simple finder chart is not a trivial undertaking. To solve this problem, at least for PC users, GUIDARES was created. GUIDARES is a user-friendly program that lets you look directly at the data in the GSC, either as a graphical sky map or as a text table. GUIDARES can read a sampling of GSC data from a given sky region, store this sampling in a text file, and display a graphical map of the sampled region in projected celestial coordinates (perfect for finder charts). GUIDARES supports rectangular and circular regions defined by coordinates in the equatorial, ecliptic (any equinox), or galactic systems.
New SECAA/ NSSDC Capabilities for Accessing ITM Data
NASA Astrophysics Data System (ADS)
Bilitza, D.; Papitashvili, N.; McGuire, R.
NASA's National Space Science Data Center (NSSDC) archives a large volume of data and models that are of relevance to the International Living with a Star (ILWS) project. Working with NSSDC, its sister organization, the Sun-Earth Connection Active Archive (SECAA), has developed a number of data access and browse tools to facilitate user access to this important data source. For the most widely used empirical models (IRI, IGRF, MSIS/CIRA, AE/AP-8), Java-based web interfaces let users compute, list, plot, and download model parameters. We will report on recent enhancements and extensions of these data and model services in the area of ionospheric-thermospheric-mesospheric (ITM) physics. The ATMOWeb system (http://nssdc.gsfc.nasa.gov/atmoweb/) includes data from many of the ITM satellite missions of the sixties, seventies, and eighties (BE-B, DME-A, Alouette 2, AE-B, OGO-6, ISIS-1, ISIS-2, AEROS-A, AE-C, AE-D, AE-E, DE-2, and Hinotori). In addition to time series plots and data retrievals, ATMOWeb now lets users generate scatter plots and linear regression fits for any pair of parameters. Optional upper and lower boundaries let users filter out specific segments of the data and/or certain ranges of orbit parameters (altitude, longitude, local time, etc.). Data from TIMED is being added to the CDAWeb system, including new web service capabilities, to be available jointly with the broad scope of space physics data already served by CDAWeb. We will also present the newest version of the NSSDC/SECAA models web pages, whose layout and entry-page sequence for the models catalog, archive, and web interfaces have been greatly simplified and brought up to date.
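The scatter-plot capability described above reduces to a range filter followed by an ordinary least-squares fit. A minimal sketch follows; the field names and data are hypothetical, not ATMOWeb's actual schema:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def filter_range(records, key, lower=None, upper=None):
    """Keep records whose value for `key` lies within optional bounds,
    mimicking ATMOWeb's upper/lower boundary filters on orbit parameters."""
    return [r for r in records
            if (lower is None or r[key] >= lower)
            and (upper is None or r[key] <= upper)]

# Hypothetical records: electron density (ne) versus altitude (alt, km)
data = [{"alt": 300, "ne": 1.0e5}, {"alt": 400, "ne": 8.0e4},
        {"alt": 500, "ne": 6.0e4}, {"alt": 900, "ne": 1.0e4}]
subset = filter_range(data, "alt", lower=250, upper=600)
slope, intercept = linear_fit([r["alt"] for r in subset],
                              [r["ne"] for r in subset])
print(slope)  # -200.0
```

Applying the altitude bounds first means the fit describes only the selected orbit segment, which is the point of combining the boundary filters with the regression.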
Self-enforcing Private Inference Control
NASA Astrophysics Data System (ADS)
Yang, Yanjiang; Li, Yingjiu; Weng, Jian; Zhou, Jianying; Bao, Feng
Private inference control enables simultaneous enforcement of inference control and protection of users' query privacy. It is a useful tool for database applications, especially now that users are increasingly concerned about individual privacy. However, protection of query privacy on top of inference control is a double-edged sword: without letting the database server know the content of user queries, users can easily launch DoS attacks. To assuage DoS attacks in private inference control, we propose the concept of self-enforcing private inference control, whose intuition is to force users to make only inference-free queries by enforcing inference control themselves; otherwise, penalties are inflicted upon the violating users.
ERIC Educational Resources Information Center
Bush, Jonathan, Ed.; Zuidema, Leah, Ed.
2012-01-01
In professional writing, usability testing investigates how well users can find, understand, and use a document (Lannon). Usability testing involves gathering feedback from users, analyzing how successfully they used the document, and drawing on the test results to change or refine the design. From the authors' experiences as writers and teachers,…
SCAILET: An intelligent assistant for satellite ground terminal operations
NASA Technical Reports Server (NTRS)
Shahidi, A. K.; Crapo, J. A.; Schlegelmilch, R. F.; Reinhart, R. C.; Petrik, E. J.; Walters, J. L.; Jones, R. E.
1993-01-01
NASA Lewis Research Center has applied artificial intelligence to an advanced ground terminal. This software application is being deployed as an experimenter interface to the link evaluation terminal (LET) and was named Space Communication Artificial Intelligence for the Link Evaluation Terminal (SCAILET). The high-burst-rate (HBR) LET provides 30-GHz transmit and 20-GHz receive capability at 220 Mbps for wideband communications technology experiments with the Advanced Communication Technology Satellite (ACTS). The HBR-LET terminal consists of seven major subsystems. A minicomputer controls and monitors these subsystems through an IEEE-488 or RS-232 protocol interface. Programming scripts (test procedures defined by design engineers) configure the HBR-LET and permit data acquisition. However, the scripts are cryptic, difficult to use and maintain, and impose a steep learning curve, which discourages experimenters from utilizing the full capabilities of the HBR-LET system. An intelligent assistant module was developed as part of the SCAILET software. The intelligent assistant addresses critical experimenter needs by solving and resolving problems that are encountered during configuration of the HBR-LET system. It is a graphical user interface with an expert system running in the background. To further assist and familiarize experimenters, an on-line hypertext documentation module was developed and included in the SCAILET software.
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that automates the process of designing a magnetorheological (MR)-brake while still keeping the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is complex and time-consuming due to its many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and choose to investigate only their interactions with the design output. The new method was applied to redesigning MR-brakes. It reduced the design time from a week or two down to a few minutes, and usability experiments indicated significantly better brake designs by novice users.
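The core of a Taguchi-style analysis is identifying dominant parameters from a small orthogonal-array experiment. The toy sketch below runs an L4(2^3) array over three hypothetical two-level design factors and ranks them by main effect; the response values are invented for illustration and have nothing to do with the paper's actual MR-brake model:

```python
# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
responses = [10.0, 12.0, 20.0, 22.0]  # hypothetical measured outputs

def main_effects(array, responses, n_factors):
    """For each factor, average the responses at each level;
    the factor's effect is the gap between the two level means."""
    effects = {}
    for f in range(n_factors):
        levels = {0: [], 1: []}
        for run, y in zip(array, responses):
            levels[run[f]].append(y)
        m0 = sum(levels[0]) / len(levels[0])
        m1 = sum(levels[1]) / len(levels[1])
        effects[f] = abs(m1 - m0)
    return effects

effects = main_effects(L4, responses, 3)
dominant = max(effects, key=effects.get)
print(dominant, effects[dominant])  # → 0 10.0
```

With the dominant factor identified (factor 0 here, by construction of the toy data), the search can be restricted to that factor's interactions with the output, which is what shrinks the design space.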
100 Colleges Sign Up with Google to Speed Access to Library Resources
ERIC Educational Resources Information Center
Young, Jeffrey R.
2005-01-01
More than 100 colleges and universities have arranged to give people using the Google Scholar search engine on their campuses more-direct access to library materials. Google Scholar is a free tool that searches scholarly materials on the Web and in academic databases. The new arrangements essentially let Google know which online databases the…
The U. S. Geological Survey, Digital Spectral Library: Version 1 (0.2 to 3.0um)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea J.; King, Trude V.V.; Calvin, Wendy M.
1993-01-01
We have developed a digital reflectance spectral library, with management and spectral analysis software. The library includes 498 spectra of 444 samples (some samples include a series of grain sizes) measured from approximately 0.2 to 3.0 um. The spectral resolution (Full Width Half Maximum) of the reflectance data is <= 4 nm in the visible (0.2-0.8 um) and <= 10 nm in the NIR (0.8-2.35 um). All spectra were corrected to absolute reflectance using an NIST Halon standard. Library management software lets users search on parameters (e.g. chemical formulae, chemical analyses, purity of samples, mineral groups, etc.) as well as spectral features. Minerals from borate, carbonate, chloride, element, halide, hydroxide, nitrate, oxide, phosphate, sulfate, sulfide, sulfosalt, and the silicate (cyclosilicate, inosilicate, nesosilicate, phyllosilicate, sorosilicate, and tectosilicate) classes are represented. X-Ray and chemical analyses are tabulated for many of the entries, and all samples have been evaluated for spectral purity. The library also contains end and intermediate members for the olivine, garnet, scapolite, montmorillonite, muscovite, jarosite, and alunite solid-solution series. We have included representative spectra of H2O ice, kerogen, ammonium-bearing minerals, rare-earth oxides, desert varnish coatings, kaolinite crystallinity series, kaolinite-smectite series, zeolite series, and an extensive evaporite series. Because of the importance of vegetation to climate-change studies, we have included 17 spectra of tree leaves, bushes, and grasses. The library and software are available as a series of U.S.G.S. Open File reports. PC software is available to convert the binary data to ASCII files (a separate U.S.G.S. open file report). Additionally, the binary data files are online at the U.S.G.S. in Denver for anonymous ftp access by users on the Internet. The library search software enables a user to search on documentation parameters as well as spectral features.
The analysis system includes general spectral analysis routines, plotting packages, radiative transfer software for computing intimate mixtures, routines to derive optical constants from reflectance spectra, tools to analyze spectral features, and the capability to access imaging spectrometer data cubes for spectral analysis. Users may build customized libraries (at specific wavelengths and spectral resolution) for their own instruments using the library software. We are currently extending spectral coverage to 150 um. The libraries (original and convolved) will be made available in the future on a CD-ROM.
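Searching library entries by documentation parameters and spectral features, as the management software described above does, reduces to filtering on per-sample metadata. A minimal sketch; the field names (`group`, `features_um`) and entries are hypothetical, not the library's actual record format:

```python
def search_library(entries, group=None, feature_near=None, tol=0.02):
    """Filter spectral-library entries by mineral group and/or by the
    presence of an absorption feature near a wavelength (um)."""
    out = []
    for e in entries:
        if group and e["group"] != group:
            continue
        if feature_near is not None and not any(
                abs(w - feature_near) <= tol for w in e["features_um"]):
            continue
        out.append(e)
    return out

# Invented two-entry library with feature wavelengths in micrometers.
library = [
    {"name": "kaolinite", "group": "phyllosilicate", "features_um": [1.4, 2.2]},
    {"name": "calcite", "group": "carbonate", "features_um": [2.34]},
]
print([e["name"] for e in search_library(library, feature_near=2.2)])  # → ['kaolinite']
```

Chemical formulae, purity grades, and grain sizes would be additional keyword filters of the same shape.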
Adaptive User Profiles in Pervasive Advertising Environments
NASA Astrophysics Data System (ADS)
Alt, Florian; Balz, Moritz; Kristes, Stefanie; Shirazi, Alireza Sahami; Mennenöh, Julian; Schmidt, Albrecht; Schröder, Hendrik; Goedicke, Michael
Modern advertising environments try to serve more effective ads by targeting customers based on their interests. Various approaches exist today for gathering information about users' interests: users can deliberately and explicitly provide this information, or their shopping behavior can be analyzed implicitly. We implemented an advertising platform to simulate an advertising environment and present adaptive profiles, which let users set up profiles based on a self-assessment and enhance those profiles with information about their real shopping behavior as well as their activity intensity. Additionally, we explain how pervasive technologies such as Bluetooth can be used to create a profile anonymously and unobtrusively.
Your Teeth (KidsHealth / For Kids): … help you talk. So let's talk teeth! Tiny Teeth: Unlike your heart or brain, your teeth weren' …
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2011-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. At such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and impractical to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.
Generating Personalized Web Search Using Semantic Context
Xu, Zheng; Chen, Hai-Yan; Yu, Jie
2015-01-01
The “one size fits all” criticism of search engines is that when the same query is submitted by different users, the same results are returned. Personalized search addresses this problem by returning different results based upon the preferences of each user. However, existing methods concentrate on long-term, independent user profiles, which reduces the effectiveness of personalized search. In this paper, our method captures the user context to provide accurate user preferences for effective personalized search. First, a short-term query context is generated to identify related concepts of the query. Second, the user context is generated based on users' click-through data. Finally, a forgetting factor is introduced to merge the independent user contexts within a user session, which tracks the evolution of user preferences. Experimental results confirm that our approach can successfully represent user context according to individual user information needs. PMID:26000335
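A forgetting factor that merges per-query contexts within a session, as described above, might look like the following sketch. The concept names and weights are invented, and the decay form is one plausible reading, not necessarily the paper's exact model:

```python
def merge_context(session_contexts, gamma=0.5):
    """Merge a session's per-query concept-weight dicts with exponential
    forgetting: the most recent context gets weight 1, the one before
    gets gamma, the one before that gamma**2, and so on."""
    merged = {}
    for age, ctx in enumerate(reversed(session_contexts)):
        decay = gamma ** age
        for concept, w in ctx.items():
            merged[concept] = merged.get(concept, 0.0) + decay * w
    return merged

# Two invented query contexts in one session, oldest first.
session = [{"jaguar_car": 0.8, "jaguar_cat": 0.2},
           {"jaguar_car": 0.9}]
print(merge_context(session, gamma=0.5))
```

Older evidence still contributes but is discounted, so a drifting interest (here toward the "car" sense) dominates the merged context.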
Lin, Yi-Jung; Speedie, Stuart
2003-01-01
User interface design is one of the most important parts of developing applications. A quality user interface must not only accommodate interaction between machines and users, but also recognize differences among users and provide functionality tailored to each role or even each individual. With the web-based application of our Teledermatology consult system, the development environment gives us highly useful opportunities to create dynamic user interfaces, which let us gain finer access control and have the potential to increase the efficiency of the system. We describe the two models of user interfaces in our system: role-based and adaptive. PMID:14728419
Do Allergies Cause Asthma? (KidsHealth / For Kids): … confusing, so let's find out more. How Do Allergies Happen? Most of the time, your immune (say: …
A rank-based Prediction Algorithm of Learning User's Intention
NASA Astrophysics Data System (ADS)
Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing
Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines. First, users must predetermine the type of information they want and then switch to the appropriate type of search engine interface. Second, most search engines support multiple kinds of search functions, each with its own separate search interface, so when users need different types of information, they must switch between different interfaces. In practice, most queries correspond to several types of information results and retrieve relevant results from various search engines; for example, the query "Palace" returns websites introducing the National Palace Museum, blogs, Wikipedia entries, pictures, and videos. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources (the query words, the search results, and search history logs) to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective. It meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the user's search experience.
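The aggregation idea (scoring results by combining query-word relevance, evidence from the results themselves, and search-history signals) can be sketched as a weighted sum. The weights, fields, and scores below are illustrative placeholders, not the paper's trained model:

```python
def aggregate_rank(results, type_prior, history_boost, w=(0.5, 0.3, 0.2)):
    """Score each result as a weighted sum of its query relevance, a prior
    for its result type, and a boost learned from the user's history."""
    def score(r):
        return (w[0] * r["relevance"]
                + w[1] * type_prior.get(r["type"], 0.0)
                + w[2] * history_boost.get(r["type"], 0.0))
    return sorted(results, key=score, reverse=True)

# Invented multi-type results for one query.
results = [
    {"id": "wiki", "type": "encyclopedia", "relevance": 0.7},
    {"id": "img1", "type": "image", "relevance": 0.9},
    {"id": "blog", "type": "blog", "relevance": 0.6},
]
type_prior = {"encyclopedia": 0.8, "image": 0.3, "blog": 0.4}
history_boost = {"image": 1.0}  # this user clicked images in past sessions
ranked = aggregate_rank(results, type_prior, history_boost)
print([r["id"] for r in ranked])  # → ['img1', 'wiki', 'blog']
```

The history boost is what lets the same query rank image results higher for one user and encyclopedia results higher for another.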
How Users Search the Library from a Single Search Box
ERIC Educational Resources Information Center
Lown, Cory; Sierra, Tito; Boyer, Josh
2013-01-01
Academic libraries are turning increasingly to unified search solutions to simplify search and discovery of library resources. Unfortunately, very little research has been published on library user search behavior in single search box environments. This study examines how users search a large public university library using a prominent, single…
Strains and Sprains Are a Pain
KidsHealth / For Kids: … sports. Let's find out more about them. What Are Strains and Sprains? Muscles contract and relax (almost …
An Organizational Effectiveness Officer Tackles a Management Job: A follow-Up OE Case Study
1981-06-01
time, the COPPER "users manual" had not been updated to reflect the current methods and procedures of the PPSD. RAJ Johnson felt that documentation of...Users' Manual, now known as PPSD Users' Manual, to document every action and show document flow with a flow chart. Also prior to implementing the change...comments is a document to work from. I'll use it from my level, but let's push it down in the organization." When interviewed in December, RAJ Johnson was
Investigating User Search Tactic Patterns and System Support in Using Digital Libraries
ERIC Educational Resources Information Center
Joo, Soohyung
2013-01-01
This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different…
NASA Astrophysics Data System (ADS)
Rahnamay Naeini, M.; Sadegh, M.; AghaKouchak, A.; Hsu, K. L.; Sorooshian, S.; Yang, T.
2017-12-01
Meta-heuristic optimization algorithms have gained a great deal of attention in a wide variety of fields. Simplicity and flexibility of these algorithms, along with their robustness, make them attractive tools for solving optimization problems. Different optimization methods, however, hold algorithm-specific strengths and limitations. Performance of each individual algorithm obeys the "No-Free-Lunch" theorem, which means that no single algorithm can consistently outperform all others across all possible optimization problems. From a user's perspective, it is a tedious process to compare, validate, and select the best-performing algorithm for a specific problem or a set of test cases. In this study, we introduce a new hybrid optimization framework, entitled Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL), which combines the strengths of different evolutionary algorithms (EAs) in a parallel computing scheme and allows users to select the most suitable algorithm tailored to the problem at hand. The concept of SC-SAHEL is to execute different EAs as separate parallel search cores and let all participating EAs compete during the course of the search. The newly developed SC-SAHEL algorithm is designed to automatically select the best-performing algorithm for the given optimization problem. It is highly effective in finding the global optimum for several strenuous benchmark test functions, and computationally efficient compared to individual EAs. We benchmark the proposed SC-SAHEL algorithm over 29 conceptual test functions and two real-world case studies: one hydropower reservoir model and one hydrological model (SAC-SMA). Results show that the proposed framework outperforms individual EAs in an absolute majority of the test problems, and can provide results competitive with the fittest EA algorithm while yielding more comprehensive information during the search.
The proposed framework is also flexible for merging additional EAs, boundary-handling techniques, and sampling schemes, and has good potential to be used in Water-Energy system optimal operation and management.
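The competitive scheme can be illustrated with a toy version: two simple search "cores" (coarse and fine Gaussian mutation, standing in for full EAs) share an evaluation budget, and the core that finds the better solution each round earns a larger share of the next round's budget. This is a sketch of the competition idea only, not the SC-SAHEL implementation, and the objective is a plain sphere function rather than a hydrologic model:

```python
import random

def sphere(x):
    """Benchmark objective with its global minimum (0) at the origin."""
    return sum(v * v for v in x)

def make_mutator(scale):
    def mutate(x):
        return [v + random.gauss(0.0, scale) for v in x]
    return mutate

def competitive_search(f, dim=3, rounds=20, evals_per_round=50, seed=1):
    """Two cores mutate the shared best solution; each round the winning
    core receives 150% of the base budget, the loser 50%."""
    random.seed(seed)
    cores = {"coarse": make_mutator(0.5), "fine": make_mutator(0.05)}
    best = [random.uniform(-5, 5) for _ in range(dim)]
    best_f = f(best)
    budget = {name: evals_per_round for name in cores}
    for _ in range(rounds):
        round_best = {}
        for name, mutate in cores.items():
            local_best = best_f
            for _ in range(budget[name]):
                cand = mutate(best)
                fc = f(cand)
                if fc < best_f:           # greedy acceptance of improvements
                    best, best_f = cand, fc
                    local_best = fc
            round_best[name] = local_best
        winner = min(round_best, key=round_best.get)
        for name in cores:
            budget[name] = int(evals_per_round * (1.5 if name == winner else 0.5))
    return best_f

print(competitive_search(sphere))
```

Early on the coarse core tends to win and gets the budget; near the optimum the fine core takes over, which is the kind of adaptive reallocation the framework exploits.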
Multitasking Web Searching and Implications for Design.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda
2003-01-01
Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…
Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection
2010-05-12
for matching biological spectra across a data base of hyperspectral pathology slides acquired with different instruments in different conditions, as...generalizing wavelets and similar scaling mechanisms. To be specific, let the bi-Markov...remarkably well. Conventional nearest neighbor search, compared with a diffusion search. The data is a pathology slide; each pixel is a digital
Library Instruction and Online Database Searching.
ERIC Educational Resources Information Center
Mercado, Heidi
1999-01-01
Reviews changes in online database searching in academic libraries. Topics include librarians conducting all searches; the advent of end-user searching and the need for user instruction; compact disk technology; online public catalogs; the Internet; full text databases; electronic information literacy; user education and the remote library user;…
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej
The ability to let a mobile device determine its location in an indoor environment supports the creation of a new range of mobile information system applications. The goal of my project is to complement the data networking capabilities of RF wireless LANs with accurate user location and tracking capabilities for prebuffering data the user needs. I created a location-based enhancement for locating and tracking users of an indoor information system. User position is used for prebuffering data and pushing information from a server to the user's mobile client. All server data are saved as artifacts together with their indoor position information. The area definition for artifact selection is described for current and predicted user positions, along with evaluation options for artifact ranking. Future trends are also discussed.
Ballistic Missile Defense: Let's Look Again before We Leap into Star Wars.
ERIC Educational Resources Information Center
Morrison, David C.
1985-01-01
Americans must look beyond the superficial allure of President Reagan's Strategic Defense Initiative; they must search out the facts. Five pernicious myths on which this ill-considered proposal is founded are discussed. (RM)
MAPPER: A personal computer map projection tool
NASA Technical Reports Server (NTRS)
Bailey, Steven A.
1993-01-01
MAPPER is a set of software tools designed to let users create and manipulate map projections on a personal computer (PC). The capability exists to generate five popular map projections: azimuthal, cylindrical, Mercator, Lambert, and sinusoidal. Data for projections are contained in five coordinate databases at various resolutions. MAPPER is managed by a system of pull-down windows. This interface allows the user to intuitively create, view, and export maps to other platforms.
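Of the five projections named, the sinusoidal is the simplest to state in code. A minimal sketch on a unit sphere (degree-valued inputs, central meridian `lon0_deg` as an assumed parameter; this is the textbook formula, not MAPPER's source):

```python
import math

def sinusoidal(lon_deg, lat_deg, lon0_deg=0.0, R=1.0):
    """Sinusoidal (equal-area) projection:
    x = R * (lon - lon0) * cos(lat),  y = R * lat, angles in radians."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - lon0_deg)
    return R * dlon * math.cos(lat), R * lat

x, y = sinusoidal(90.0, 0.0)   # on the equator: no horizontal compression
print(round(x, 4), round(y, 4))  # → 1.5708 0.0  (i.e. pi/2, 0)
```

At latitude 60 degrees the same longitude span projects to half the width, which is how the projection preserves area while distorting shape toward the poles.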
Edible Oil Barriers for Treatment of Perchlorate Contaminated Groundwater
2006-02-16
perchlorate is relatively recent. Work performed in soil at Longhorn Army Ammunition Plant in Texas identified chicken manure, cow manure, and...Missile Plant, NC Pilot July-Aug. 2004 Recirculation of emulsion through source area Other DoD Facilities Confidential Site, MD Pilot Oct...G.M. Birk, 2004. A Dash of Oil and Let Marinate. Pollution Engineering, May 2004, pages 30-34. 6.3 End-User Issues Potential end users of the
Kumar, Rajendra; Sobhy, Haitham
2017-01-01
Abstract Hi-C experiments generate data in the form of large genome contact maps (Hi-C maps). These show that chromosomes are arranged in a hierarchy of three-dimensional compartments. But to understand how these compartments form and how much they affect genetic processes such as gene regulation, biologists and bioinformaticians need efficient tools to visualize and analyze Hi-C data. This is technically challenging because these maps are big. In this paper, we address this problem by implementing an efficient file format and developing the genome contact map explorer platform. Apart from tools to process Hi-C data, such as normalization methods and a programmable interface, we made a graphical interface that lets users browse, scroll, and zoom Hi-C maps to visually search for patterns in the Hi-C data. The software also makes it possible to browse several maps simultaneously and plot related genomic data. The software is openly accessible to the scientific community. PMID:28973466
Re-ranking via User Feedback: Georgetown University at TREC 2015 DD Track
2015-11-20
Re-ranking via User Feedback: Georgetown University at TREC 2015 DD Track. Jiyun Luo and Hui Yang, Department of Computer Science, Georgetown...involved in a search process, the user and the search engine. In TREC DD, the user is modeled by a simulator, called "jig". The jig and the search engine...simulating user is provided by the TREC 2015 DD Track organizer, and is called "jig". There are 118 search topics in total. For each search topic, a short
Content-based Music Search and Recommendation System
NASA Astrophysics Data System (ADS)
Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo
Recently, the volume of music data on the Internet has increased rapidly. This has increased users' cost of finding music data suiting their preferences in such a large data set. We propose a content-based music search and recommendation system. The system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information which can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of music when editing the user profile.
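Matching a user-profile vector against music feature vectors, the step underlying content-based recommendation like this, can be sketched with cosine similarity in a toy 2-D feature space. The axes and values below are invented, not the system's actual audio features:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend(profile, tracks, k=2):
    """Return the k tracks whose feature vectors best match the profile."""
    return sorted(tracks, key=lambda t: cosine(profile, t["features"]),
                  reverse=True)[:k]

profile = [0.9, 0.1]          # hypothetical 2-D preference vector
tracks = [{"id": "t1", "features": [0.8, 0.2]},
          {"id": "t2", "features": [0.1, 0.9]},
          {"id": "t3", "features": [0.7, 0.1]}]
print([t["id"] for t in recommend(profile, tracks)])  # → ['t3', 't1']
```

Editing the user profile in such a system amounts to moving the `profile` vector in this feature space, which is why visualizing both the profile and the music features side by side helps.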
What Medicines Are and What They Do (For Kids)
KidsHealth / For Kids: … of others? Let's find out. A Rainbow of Medicine: One medicine might be a pink liquid, another …
Visualizing Rank Time Series of Wikipedia Top-Viewed Pages.
Xia, Jing; Hou, Yumeng; Chen, Yingjie Victor; Qian, Zhenyu Cheryl; Ebert, David S; Chen, Wei
2017-01-01
Visual clutter is a common challenge when visualizing large rank time series data. WikiTopReader, a reader of Wikipedia page rank, lets users explore connections among top-viewed pages by connecting page-rank behaviors with page-link relations. Such a combination enhances the unweighted Wikipedia page-link network and focuses attention on the page of interest. A set of user evaluations shows that the system effectively represents evolving ranking patterns and page-wise correlation.
Glassman, Nancy R.; Habousha, Racheline G.; Minuti, Aurelia; Schwartz, Rachel; Sorensen, Karen
2009-01-01
Due to the proliferation of electronic resources, fewer users visit the library. Traditional classroom instruction and in-person consultations are no longer sufficient in assisting library users. Librarians are constantly seeking new ways to interact with patrons and facilitate efficient use of electronic resources. This article describes the development, implementation, and evaluation of a project in which desktop-sharing software was used to reach out to users at remote locations. Various ways of using this tool are described, and challenges and implications for future expansion are discussed. PMID:20183031
Understanding PubMed user search behavior through log analysis.
Islamaj Dogan, Rezarta; Murray, G Craig; Névéol, Aurélie; Lu, Zhiyong
2009-01-01
This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed provides free access to more than 19 million citations for biomedical articles from MEDLINE and life science journals, and is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the biomedical literature relating to their own research. This study provides insight into PubMed users' needs and their behavior. This investigation was conducted through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease. Use of abbreviations in queries is very frequent. Factors such as result set size influence users' decisions. Analysis of characteristics such as these plays a critical role in identifying users' information needs and their search habits. In turn, such an analysis also provides useful insight for improving biomedical information retrieval. Database URL: http://www.ncbi.nlm.nih.gov/PubMed.
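The kind of query-type breakdown reported here starts from classifying raw query strings in the logs. A deliberately crude heuristic sketch; the patterns are invented for illustration and are nothing like the study's actual methodology:

```python
import re

def classify_query(q):
    """Toy heuristic for two of the frequent PubMed query types named in
    the study. Hypothetical patterns: 'Surname I' looks like an author
    search; a short all-caps alphanumeric token looks like a gene symbol."""
    if re.fullmatch(r"[A-Z][a-z]+ [A-Z]{1,2}", q):   # e.g. "Smith J"
        return "author"
    if re.fullmatch(r"[A-Z0-9]{2,8}", q):            # e.g. "TP53"
        return "gene/protein"
    return "other"

queries = ["Smith J", "TP53", "lung cancer treatment"]
print([classify_query(q) for q in queries])  # → ['author', 'gene/protein', 'other']
```

Aggregating such labels over the 58 million logged queries is what yields the frequency ranking of author, gene/protein, and disease searches.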
Ståhl, Sara; Fung, Eva; Adams, Christopher; Lengqvist, Johan; Mörk, Birgitta; Stenerlöw, Bo; Lewensohn, Rolf; Lehtiö, Janne; Zubarev, Roman; Viktorsson, Kristina
2009-01-01
During the past decade, we have witnessed an explosive increase in generation of large proteomics data sets, not least in cancer research. There is a growing need to extract and correctly interpret information from such data sets to generate biologically relevant hypotheses. A pathway search engine (PSE) has recently been developed as a novel tool intended to meet these requirements. Ionizing radiation (IR) is an anticancer treatment modality that triggers multiple signal transduction networks. In this work, we show that high linear energy transfer (LET) IR induces apoptosis in a non-small cell lung cancer cell line, U-1810, whereas low LET IR does not. PSE was applied to study changes in pathway status between high and low LET IR to find pathway candidates of importance for high LET-induced apoptosis. Such pathways are potential clinical targets, and they were further validated in vitro. We used an unsupervised shotgun proteomics approach where high resolution mass spectrometry coupled to nanoflow liquid chromatography determined the identity and relative abundance of expressed proteins. Based on the proteomics data, PSE suggested the JNK pathway (p = 6·10^-6) as a key event in response to high LET IR. In addition, the Fas pathway was found to be activated (p = 3·10^-5) and the p38 pathway was found to be deactivated (p = 0.001) compared with untreated cells. Antibody-based analyses confirmed that high LET IR caused an increase in phosphorylation of JNK. Moreover pharmacological inhibition of JNK blocked high LET-induced apoptotic signaling. In contrast, neither an activation of p38 nor a role for p38 in high LET IR-induced apoptotic signaling was found. We conclude that, in contrast to conventional low LET IR, high LET IR can trigger activation of the JNK pathway, which in turn is critical for induction of apoptosis in these cells. Thus PSE predictions were largely confirmed, and PSE was proven to be a useful hypothesis-generating tool. PMID:19168796
Finding and Exploring Health Information with a Slider-Based User Interface.
Pang, Patrick Cheong-Iao; Verspoor, Karin; Pearce, Jon; Chang, Shanton
2016-01-01
Despite the fact that search engines are the primary channel to access online health information, there are better ways to find and explore health information on the web. Search engines are prone to problems when they are used to find health information. For instance, users have difficulties in expressing health scenarios with appropriate search keywords, search results are not optimised for medical queries, and the search process does not account for users' literacy levels and reading preferences. In this paper, we describe our approach to addressing these problems by introducing a novel design using a slider-based user interface for discovering health information without the need for precise search keywords. The user evaluation suggests that the interface is easy to use and able to assist users in the process of discovering new information. This study demonstrates the potential value of adopting slider controls in the user interface of health websites for navigation and information discovery.
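The paper's slider interface is not specified in code, but the core idea of replacing keyword entry with continuous controls can be sketched as facet weighting (facet names and scores below are invented for illustration):

```python
# Sketch of slider-driven retrieval: each slider position becomes a weight,
# and documents are ranked by a weighted sum of per-facet relevance scores
# instead of by keyword matching.
def score(doc_facets, slider_weights):
    """doc_facets and slider_weights both map facet name -> value in [0, 1]."""
    return sum(slider_weights.get(f, 0.0) * v for f, v in doc_facets.items())

docs = {
    "plain-language overview": {"readability": 0.9, "depth": 0.2},
    "clinical guideline":      {"readability": 0.3, "depth": 0.9},
}
# The user drags the "readability" slider high and the "depth" slider low.
weights = {"readability": 0.8, "depth": 0.2}
ranked = sorted(docs, key=lambda d: score(docs[d], weights), reverse=True)
```

Moving a slider re-runs the ranking immediately, which is what lets users explore without formulating precise queries.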
The underlying philosophy of Unmix is to let the data speak for itself. Unmix seeks to solve the general mixture problem where the data are assumed to be a linear combination of an unknown number of sources of unknown composition, which contribute an unknown amount to each sample...
Klein, M S; Ross, F
1997-01-01
Using the results of the 1993 Medical Library Association (MLA) Hospital Libraries Section survey of hospital-based end-user search services, this article describes how end-user search services can become an impetus for an expanded information management and technology role for the hospital librarian. An end-user services implementation plan is presented that focuses on software, hardware, finances, policies, staff allocations and responsibilities, educational program design, and program evaluation. Possibilities for extending end-user search services into information technology and informatics, specialized end-user search systems, and Internet access are described. Future opportunities are identified for expanding the hospital librarian's role in the face of changing health care management, advances in information technology, and increasing end-user expectations. PMID:9285126
Collaborative search in electronic health records.
Zheng, Kai; Mei, Qiaozhu; Hanauer, David A
2011-05-01
A full-text search engine can be a useful tool for augmenting the reuse value of unstructured narrative data stored in electronic health records (EHR). A prominent barrier to the effective utilization of such tools originates from users' lack of search expertise and/or medical-domain knowledge. To mitigate the issue, the authors experimented with a 'collaborative search' feature through a homegrown EHR search engine that allows users to preserve their search knowledge and share it with others. This feature was inspired by the success of many social information-foraging techniques used on the web that leverage users' collective wisdom to improve the quality and efficiency of information retrieval. The authors conducted an empirical evaluation study over a 4-year period. The user sample consisted of 451 academic researchers, medical practitioners, and hospital administrators. The data were analyzed using a social-network analysis to delineate the structure of the user collaboration networks that mediated the diffusion of knowledge of search. The users embraced the concept with considerable enthusiasm. About half of the EHR searches processed by the system (0.44 million) were based on stored search knowledge; 0.16 million utilized shared knowledge made available by other users. The social-network analysis results also suggest that the user-collaboration networks engendered by the collaborative search feature played an instrumental role in enabling the transfer of search knowledge across people and domains. Applying collaborative search, a social information-foraging technique popularly used on the web, may provide the potential to improve the quality and efficiency of information retrieval in healthcare.
Finding My Needle in the Haystack: Effective Personalized Re-ranking of Search Results in Prospector
NASA Astrophysics Data System (ADS)
König, Florian; van Velsen, Lex; Paramythis, Alexandros
This paper provides an overview of Prospector, a personalized Internet meta-search engine, which utilizes a combination of ontological information, ratings-based models of user interests, and complementary theme-oriented group models to recommend (through re-ranking) search results obtained from an underlying search engine. Re-ranking brings “closer to the top” those items that are of particular interest to a user or have high relevance to a given theme. A user-based, real-world evaluation has shown that the system is effective in promoting results of interest, but lags behind Google in user acceptance, possibly due to the absence of features popularized by said search engine. Overall, users would consider employing a personalized search engine to perform searches with terms that require disambiguation and/or contextualization.
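A hedged sketch of this style of re-ranking (the scoring formula and topic labels are assumptions for illustration, not Prospector's actual model): blend the underlying engine's rank with a per-user interest score, so interesting items move "closer to the top".

```python
# Combine engine order with a user-interest model and re-rank.
def rerank(results, interest, alpha=0.5):
    """results: list of (title, topic) in engine order; interest: topic -> [0, 1]."""
    n = len(results)
    def combined(item):
        idx, (_title, topic) = item
        engine_score = 1.0 - idx / n          # earlier engine results score higher
        return alpha * engine_score + (1 - alpha) * interest.get(topic, 0.0)
    ordered = sorted(enumerate(results), key=combined, reverse=True)
    return [title for _, (title, _) in ordered]

# A user whose model strongly favors wildlife disambiguates "jaguar".
results = [("Jaguar cars", "automotive"), ("Jaguar habitat", "wildlife")]
out = rerank(results, {"wildlife": 1.0, "automotive": 0.0})
```

The alpha parameter controls how far personalization is allowed to override the engine's own ranking.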
ERIC Educational Resources Information Center
Hamilton, Jack A.; Wheeler, Jeanette D.
1979-01-01
A literature search indicates that many adult consumers of postsecondary education need protection from abusive practices by some schools, such as misleading advertising, incomplete information, inferior facilities, false promises, etc. The article provides a checklist for potential adult students to use in making decisions about choosing a…
GI-axe: an access broker framework for the geosciences
NASA Astrophysics Data System (ADS)
Boldrini, E.; Nativi, S.; Santoro, M.; Papeschi, F.; Mazzetti, P.
2012-12-01
The efficient and effective discovery of heterogeneous geospatial resources (e.g. data and services) is currently addressed by implementing "Discovery Brokering components", such as GI-cat, which is successfully used by the GEO brokering framework. A related (and subsequent) problem is the access of discovered resources. As in the discovery case, there is a clear challenge: the geospatial Community makes use of heterogeneous access protocols and data models. In fact, different standards (and best practices) are defined and used by the diverse Geoscience domains and Communities of practice. Besides, through a client application, Users want to access diverse data to be jointly used in a common Geospatial Environment (CGE): a geospatial environment characterized by a spatio-temporal CRS (Coordinate Reference System), resolution, and extension. Users want to define a CGE and get the selected data ready to be used in such an environment. Finally, they want to download data according to a common encoding (either binary or textual). Therefore, it is possible to introduce the concept of an "Access Brokering component" which addresses all these intermediation needs in a transparent way for both clients (i.e. Users) and access servers (i.e. Data Providers). This work presents GI-axe: a flexible Access Broker capable of intermediating the different access standards and of delivering data according to a CGE previously specified by the User. In doing so, GI-axe complements the capabilities of the brokered access servers, in keeping with the brokering principles. Consider a sample use case of a User needing to access a global temperature dataset available online on a THREDDS Data Server and a rainfall dataset accessible through a WFS; she/he may have obtained the datasets as a search result from a discovery broker. Distribution metadata accompanying the temperature dataset further indicate that a given OPeNDAP service has to be accessed to retrieve it.
At this point, the User would be in charge of finding an existing OPeNDAP client and retrieving the desired data with the desired CGE; worse, he/she might need to write his/her own OPeNDAP client. Meanwhile, the User has to use a GIS to access the rainfall data and perform all the necessary transformations to obtain the same CGE. The GI-axe access broker takes this interoperability burden off the User by bearing the charge of accessing the available services and performing the needed adaptations to deliver both datasets according to the same CGE. GI-axe can also expose both the TDS and the WFS as (for example) a WMS, allowing the User to utilize a single and (perhaps) more familiar client. In this way, the User can concentrate on the less technological aspects more inherent to his/her scientific field. GI-axe was first developed and tested in the multidisciplinary interoperability framework of the European Community funded EuroGEOSS project. Presently, it is utilized in the GEOSS Discovery & Access Brokering framework.
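The broker pattern described above can be sketched in miniature (this is an assumed design for illustration, not GI-axe's implementation; real datasets are multidimensional, whereas the "grid" here is one-dimensional): heterogeneous accessors each return data on their native grid, and the broker resamples everything onto the user-defined CGE so the client sees one uniform interface.

```python
# Access-broker sketch: adapt two services with different native grids
# onto a single common grid by linear interpolation.
def linear_interp(x, xs, ys):
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

def broker(accessors, cge_grid):
    """Return each dataset resampled onto the common grid."""
    out = {}
    for name, fetch in accessors.items():
        xs, ys = fetch()                       # native (coords, values)
        out[name] = [linear_interp(x, xs, ys) for x in cge_grid]
    return out

# Hypothetical services: a coarse temperature grid and a fine rainfall grid.
accessors = {
    "temperature": lambda: ([0.0, 2.0, 4.0], [10.0, 14.0, 18.0]),
    "rainfall":    lambda: ([0.0, 1.0, 2.0, 3.0, 4.0], [5.0, 4.0, 3.0, 2.0, 1.0]),
}
cge = [0.0, 1.0, 2.0, 3.0, 4.0]
common = broker(accessors, cge)
```

After brokering, both datasets share the same grid, which is the precondition for joint use in a common Geospatial Environment.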
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
scripting that lets users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP, Java and...transfer, DIS usually broadcasts or multicasts its PDUs based on UDP sockets. 3. JavaScript JavaScript is the scripting language of the web, and all...IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework The DIS implementation of
The sweet-home project: audio technology in smart homes to improve well-being and reliance.
Vacher, Michel; Istrate, Dan; Portet, François; Joubert, Thierry; Chevalier, Thierry; Smidtas, Serge; Meillon, Brigitte; Lecouteux, Benjamin; Sehili, Mohamed; Chahuara, Pedro; Méniard, Sylvain
2011-01-01
The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project focusing on the multimodal sound corpus acquisition and labelling and on the investigated techniques for speech and sound recognition. The user study and the recognition performances show the interest of this audio technology.
Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.
2006-12-01
The goal for search engines is to return results that are both accurate and complete. A search engine should find only what you really want, and find everything you really want. Search engines (even meta search engines) lack semantics. Search is simply based on string matching between the user's query term and the resource database; the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query and ensure that the search results are both accurate and complete. The domain ontologies guide the user to refine their search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator. It categorizes the search results from different online resources, such as education materials, publications, datasets, and web search engines, that might be of interest to the user.
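The expansion step described above can be sketched as a lookup over ontology relations (the mini-ontology below is invented for illustration; Noesis draws on real atmospheric-science ontologies):

```python
# Ontology-scoped query expansion: a term is expanded with its synonyms,
# specializations, generalizations and related concepts, so downstream
# string matching operates on the scoped vocabulary rather than the bare term.
ONTOLOGY = {
    "pressure": {
        "synonyms": ["atmospheric pressure", "barometric pressure"],
        "specializations": ["sea level pressure", "vapor pressure"],
        "generalizations": ["atmospheric state variable"],
        "related": ["geopotential height"],
    }
}

def expand(term):
    entry = ONTOLOGY.get(term, {})
    expanded = [term]
    for relation in ("synonyms", "specializations", "generalizations", "related"):
        expanded.extend(entry.get(relation, []))
    return expanded

terms = expand("pressure")
```

Because the scoped vocabulary contains only domain senses of the term, off-domain matches such as "blood pressure" never enter the candidate set.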
Uniform Interfaces for Distributed Systems.
1980-05-01
in data structures on stable storage (such as disk). The Virtual Terminals associated with a particular user (i.e., the display terminal) are all...vec MESSAGESIZE let error = nil [S ReceiveAny (msg) // The copy is made so that lower-level routines may // munge the message template without losing
Let's Not Forget: Learning Analytics Are about Learning
ERIC Educational Resources Information Center
Gaševic, Dragan; Dawson, Shane; Siemens, George
2015-01-01
The analysis of data collected from the interaction of users with educational and information technology has attracted much attention as a promising approach for advancing our understanding of the learning process. This promise motivated the emergence of the new research field, learning analytics, and its closely related discipline, educational…
Character Sets for PLATO/NovaNET: An Expository Catalog.
ERIC Educational Resources Information Center
Gilpin, John B.
The PLATO and NovaNET computer-based instructional systems use a fixed system character set ("normal font") and an author-definable character set ("alternate font"). The alternate font lets the author construct his own symbols and bitmapped pictures. This expository catalog allows users to determine quickly (1) whether there is…
The use, misuse and abuse of dabigatran.
Attia, John R; Pearce, Robert
2013-04-15
The tale of dabigatran sounds some cautionary notes about proper critical appraisal of new randomised controlled trials, care in deciding on the generalisability of results, judicious screening of patients and lessons about the politics around increasingly lucrative drugs. The old lesson of caveat utilitor still holds: let the user beware!
Do "Digital Certificates" Hold the Key to Colleges' On-Line Activities?
ERIC Educational Resources Information Center
Olsen, Florence
1999-01-01
Examines the increasing use of "digital certificates" to validate computer user identity in various applications on college and university campuses, including letting students register for courses, monitoring access to Internet2, and monitoring access to databases and electronic journals. The methodology has been developed by the…
What Searches Do Users Run on PEDro? An Analysis of 893,971 Search Commands Over a 6-Month Period.
Stevens, Matthew L; Moseley, Anne; Elkins, Mark R; Lin, Christine C-W; Maher, Chris G
2016-08-05
Clinicians must be able to search effectively for relevant research if they are to provide evidence-based healthcare. It is therefore relevant to consider how users search databases of evidence in healthcare, including what information users look for and what search strategies they employ. To date such analyses have been restricted to the PubMed database. Although the Physiotherapy Evidence Database (PEDro) is searched millions of times each year, no studies have investigated how users search PEDro. To assess the content and quality of searches conducted on PEDro. Searches conducted on the PEDro website over 6 months were downloaded and the 'get' commands and page-views extracted. The following data were tabulated: the 25 most common searches; the number of search terms used; the frequency of use of simple and advanced searches, including the use of each advanced search field; and the frequency of use of various search strategies. Between August 2014 and January 2015, 893,971 search commands were entered on PEDro. Fewer than 18% of these searches used the advanced search features of PEDro. 'Musculoskeletal' was the most common subdiscipline searched, while 'low back pain' was the most common individual search. Around 20% of all searches contained errors. PEDro is a commonly used evidence resource, but searching appears to be sub-optimal in many cases. The effectiveness of searches conducted by users needs to improve, which could be facilitated by methods such as targeted training and amending the search interface.
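The tabulation the authors describe amounts to parsing 'get' commands and counting. A minimal sketch (URL paths and parameter names below are assumptions, not PEDro's actual log format):

```python
# Tabulate a toy search log: count the most common queries and the share
# of commands that used an advanced-search page.
from collections import Counter
from urllib.parse import urlparse, parse_qs

log = [
    "/search?q=low+back+pain",
    "/search?q=low+back+pain",
    "/search?q=stroke",
    "/advanced-search?subdiscipline=musculoskeletal&therapy=stretching",
]

queries, advanced = Counter(), 0
for line in log:
    url = urlparse(line)
    if url.path.startswith("/advanced"):
        advanced += 1
    q = parse_qs(url.query).get("q")
    if q:
        queries[q[0]] += 1           # parse_qs decodes '+' to a space

top = queries.most_common(1)
share_advanced = advanced / len(log)
```

Scaled up to 893,971 commands, the same counting yields the top-25 searches and the under-18% advanced-search share reported above.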
Introducing Online Bibliographic Service to its Users: The Online Presentation
ERIC Educational Resources Information Center
Crane, Nancy B.; Pilachowski, David M.
1978-01-01
A description of techniques for introducing online services to new user groups includes discussion of terms and their definitions, evolution of online searching, advantages and disadvantages of online searching, production of the data bases, search strategies, Boolean logic, costs and charges, "do's and don'ts," and a user search questionnaire. (J…
Earthdata Search Summer ESIP Usability Workshop
NASA Technical Reports Server (NTRS)
Reese, Mark; Sirato, Jeff
2017-01-01
The Earthdata Search Client has undergone multiple rounds of usability testing during 2017 and the user feedback received has resulted in an enhanced user interface. This session will showcase the new Earthdata Search Client user interface and provide hands-on experience for participants to learn how to search, visualize and download data in the desired format.
Wrappers for Performance Enhancement and Oblivious Decision Graphs
1995-09-01
always select all relevant features. We test different search engines to search the space of feature subsets and introduce compound operators to speed...distinct instances from the original dataset appearing in the test set is thus 0.632m. The ε0i accuracy estimate is derived by using bootstrap sample...i for training and the rest of the instances for testing. Given a number b, the number of bootstrap samples, let ε0i be the accuracy estimate for
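The 0.632 bootstrap estimator referenced in these fragments can be sketched as follows (the classifier and data are toy stand-ins; the real procedure applies any learner): train on a bootstrap sample, test on the out-of-bag instances, and blend the two accuracy estimates.

```python
# 0.632 bootstrap: since only ~63.2% of distinct instances appear in each
# bootstrap sample on average, combine the out-of-bag estimate eps0 with the
# resubstitution accuracy: acc_632 = 0.632 * eps0 + 0.368 * acc_resub.
import random

def bootstrap_632(data, labels, fit, accuracy, b=20, seed=0):
    rng = random.Random(seed)
    m = len(data)
    oob_accs = []
    for _ in range(b):
        idx = [rng.randrange(m) for _ in range(m)]     # sample with replacement
        in_bag = set(idx)
        oob = [i for i in range(m) if i not in in_bag]  # out-of-bag test set
        if not oob:
            continue
        model = fit([data[i] for i in idx], [labels[i] for i in idx])
        oob_accs.append(accuracy(model, [data[i] for i in oob],
                                 [labels[i] for i in oob]))
    model = fit(data, labels)
    resub = accuracy(model, data, labels)
    eps0 = sum(oob_accs) / len(oob_accs)
    return 0.632 * eps0 + 0.368 * resub

# Toy majority-class "classifier" to exercise the estimator.
fit = lambda X, y: max(set(y), key=y.count)
accuracy = lambda model, X, y: sum(model == t for t in y) / len(y)
data = list(range(10)); labels = [0] * 7 + [1] * 3
est = bootstrap_632(data, labels, fit, accuracy, b=50)
```

The 0.368 resubstitution term corrects the pessimistic bias of the pure out-of-bag estimate.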
The Landsat Image Mosaic of Antarctica
NASA Astrophysics Data System (ADS)
Bindschadler, R.; Vornberger, P.; Fleming, A.; Fox, A.; Morin, P.
2008-12-01
The first-ever true-color, high-resolution digital mosaic of Antarctica has been produced from nearly 1100 Landsat-7 ETM+ images collected between 1999 and 2003. The Landsat Image Mosaic of Antarctica (LIMA) project was an early benchmark data set of the International Polar Year and represents a close and successful collaboration between NASA, USGS, the British Antarctic Survey and the National Science Foundation. The mosaic was successfully merged with lower resolution MODIS data south of Landsat coverage to produce a complete true-color data set of the entire continent. LIMA is being used as a platform for a variety of education and outreach activities. Central to this effort is the NASA website 'Faces of Antarctica' that offers the web visitor the opportunity to explore the data set and to learn how these data are used to support scientific research. Content is delivered through a set of mysteries designed to pique the user's interest and to motivate them to delve deeper into the website, where there are various videos and scientific articles for downloading. Detailed lesson plans written by teachers are provided for classroom use, and Java applets let the user track the motion of ice in sequential Landsat images. Web links take the user to other sites where they can roam over the imagery using standard pan and zoom functions, or search for any named feature in the Antarctic Geographic Names data base and receive a centered true-color view of that feature. LIMA has also appeared in a host of external presentations, from museum exhibits to postcards and large posters. It has attracted various value-added providers that increase LIMA's accessibility by allowing users to specify subsets of the very large data set for individual downloads. The ultimate goal of LIMA in the public and educational sector is to enable everyone to become more familiar with Antarctica.
NASA Astrophysics Data System (ADS)
Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.
2004-07-01
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS {http://dustem.astro.umd.edu}) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF. The models are based on the dust radiation transfer code of Wolfire & Cassinelli (1986) which accounts for multiple grain sizes and compositions. The model outputs are averaged over the instrument bands using the same weighting (νFν = constant) as the SIRTF data pipeline which allows the SIRTF data products to be compared directly with the model database. This work was supported in part by a NASA AISRP grant NAG 5-10751 and the SIRTF Legacy Science Program provided by NASA through an award issued by JPL under NASA contract 1407.
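The grid-search fitting that DIRT performs can be sketched in miniature (the model grid, band definitions, and chi-square criterion below are toy stand-ins for illustration; DIRT's database holds over 500,000 pre-calculated radiative-transfer models): band-average each model as the instrument module would, then pick the model that best matches the observed fluxes.

```python
# Grid-based SED fitting sketch: average each pre-computed model over the
# instrument bands, then select the model minimizing chi-square against
# the observed band fluxes.
def band_average(model_flux, bands):
    """Average model flux over each instrument band (a list of channel indices)."""
    return [sum(model_flux[i] for i in b) / len(b) for b in bands]

def best_fit(grid, observed, sigma, bands):
    def chi2(model_flux):
        avg = band_average(model_flux, bands)
        return sum(((o - m) / s) ** 2 for o, m, s in zip(observed, avg, sigma))
    return min(grid, key=lambda name: chi2(grid[name]))

# Two hypothetical dust-shell models sampled on four spectral channels.
grid = {
    "cool shell": [1.0, 2.0, 4.0, 8.0],
    "warm shell": [8.0, 4.0, 2.0, 1.0],
}
bands = [[0, 1], [2, 3]]                  # two broad bands over four channels
observed, sigma = [1.4, 5.8], [0.5, 0.5]
best = best_fit(grid, observed, sigma, bands)
```

Applying the same band weighting to models and data, as the abstract notes for the νFν = constant convention, is what makes the comparison fair.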
Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun
2007-01-01
Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.
ERIC Educational Resources Information Center
Auster, Ethel; Lawton, Stephen B.
This research study involved a systematic investigation into the relationships among: (1) the techniques used by search analysts during preliminary interviews with users before engaging in online retrieval of bibliographic citations; (2) the amount of new information gained by the user as a result of the search; and (3) the user's ultimate…
ERIC Educational Resources Information Center
National Institute of General Medical Sciences (NIGMS), 2009
2009-01-01
Computer advances now let researchers quickly search through DNA sequences to find gene variations that could lead to disease, simulate how flu might spread through one's school, and design three-dimensional animations of molecules that rival any video game. By teaming computers and biology, scientists can answer new and old questions that could…
The MAR databases: development and implementation of databases specific for marine metagenomics
Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen
2018-01-01
Abstract We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represents a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search in the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641
Library Searching: An Industrial User's Viewpoint.
ERIC Educational Resources Information Center
Hendrickson, W. A.
1982-01-01
Discusses library searching of chemical literature from an industrial user's viewpoint, focusing on differences between academic and industrial researcher's searching techniques of the same problem area. Indicates that industry users need more exposure to patents, work with abstracting services and continued improvement in computer searching…
Computer Architecture's Changing Role in Rebooting Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeBenedictis, Erik P.
Windows 95 started the Wintel era, in which Microsoft Windows running on Intel x86 microprocessors dominated the computer industry and changed the world. Retaining the x86 instruction set across many generations let users buy new and more capable microprocessors without having to buy new software to work with a new architecture.
So You Need Information About Mexican Americans? Let ERIC Help!
ERIC Educational Resources Information Center
Quezada, Manuela L., Comp.; Chabran, Richard, Comp.
The guide is intended to explain and demonstrate by example how to use the Educational Resources Information Center (ERIC) system, especially to find information pertaining to Mexican Americans. An overview of ERIC and ERIC/CRESS (ERIC Clearinghouse on Rural Education and Small Schools) is given, noting definitions, potential users, types of…
NERISK: AN EXPERT SYSTEM TO ENHANCE THE INTEGRATION OF PESTICIDES WITH ARTHROPOD BIOLOGICAL CONTROL
An expert system termed NERISK was developed to evaluate the effects of pesticides on arthropod predators and parasitoids in a variety of agroecosystems. Based on a shell system (RECOG) with minor coding modifications, the system was designed to let even a novice user access the v...
Future Shop: A Model Career Placement & Transition Laboratory.
ERIC Educational Resources Information Center
Floyd, Deborah L.; And Others
During 1988-89, the Collin County Community College District (CCCCD) conducted a project to develop, implement, and evaluate a model career laboratory called a "Future Shop." The laboratory was designed to let users explore diverse career options, job placement opportunities, and transfer resources. The Future Shop lab had three major components:…
ERIC Educational Resources Information Center
Milner, Jacob
2005-01-01
Voice over Internet Protocol (VoIP) is everywhere. The technology lets users make and receive phone calls over the Internet, transporting voice traffic alongside data traffic such as instant messages (IMs) and e-mail. While the number of consumer customers using VoIP increases every week, the technology is finding its way into K-12 education as…
Let ABE Do It. Basic Education in the Workplace.
ERIC Educational Resources Information Center
Mark, Jorie Lester, Ed.
This publication highlights business, industry, union, and Job Training Partnership Act (JTPA)-supported efforts to provide public and private employees, as well as some prospective employees, with the basic literacy skills they need to perform in the workplace. Basic or remedial education users listed in this directory include 198 companies or…
Tang, Muh-Chyun; Liu, Ying-Hsang; Wu, Wan-Ching
2013-09-01
Previous research has shown that information seekers in the biomedical domain need more support in formulating their queries. A user study was conducted to evaluate the effectiveness of a metadata-based query suggestion interface for PubMed bibliographic search. The study also investigated the impact of search task familiarity on search behaviors and the effectiveness of the interface. A real user, real search request and real system approach was used for the study. Unlike traditional IR evaluation, where assigned tasks are used, the participants were asked to search on requests of their own. Forty-four researchers in Health Sciences participated in the evaluation: each conducted two research requests of their own, alternately with the proposed interface and the PubMed baseline. Several performance criteria were measured to assess the potential benefits of the experimental interface, including users' assessment of their original and eventual queries, the perceived usefulness of the interfaces, satisfaction with the search results, and the average relevance score of the saved records. The results show that, when searching for an unfamiliar topic, users were more likely to change their queries, indicating the effect of familiarity on search behaviors. The results also show that the interface scored higher on several of the performance criteria, such as the "goodness" of the queries, perceived usefulness, and user satisfaction. Furthermore, in line with our hypothesis, the proposed interface was relatively more effective when less familiar search requests were attempted. Results indicate that there is a selective compatibility between search familiarity and search interface. One implication of the research for system evaluation is the importance of taking into consideration task familiarity when assessing the effectiveness of interactive IR systems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Kennedy, Mike
2011-01-01
One doesn't have to search very far to find incidents of life-threatening violence at schools and universities throughout the nation--let alone tragedies away from campuses such as a gunman's January attack outside a Tucson, Arizona, grocery store that left six dead and Congresswoman Gabrielle Giffords seriously wounded. These shootings are the…
ERIC Educational Resources Information Center
Bednarski, Marsha
2006-01-01
This article describes an alternative way of teaching about biomes by having students become expert biogeographers. In order to become experts students need to first find out what a biogeographer does. Doing an online search lets students find out for themselves what the responsibilities are of people who work in this field. A good place to visit…
Vacher, Michel; Chahuara, Pedro; Lecouteux, Benjamin; Istrate, Dan; Portet, Francois; Joubert, Thierry; Sehili, Mohamed; Meillon, Brigitte; Bonnefond, Nicolas; Fabre, Sébastien; Roux, Camille; Caffiau, Sybille
2013-01-01
The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations, and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project, focusing on the implemented techniques for speech and sound recognition as well as context-aware decision making under uncertainty. A user experiment in a smart home demonstrates the interest of this audio-based technology.
SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.
Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart
2011-03-01
The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com), for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissible Apache2 license, SearchGUI is supported on Windows, Linux and OSX. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experiences in Training End-User Searchers.
ERIC Educational Resources Information Center
Haines, Judith S.
1982-01-01
Describes study of chemists in the Chemistry Division, Organic Research Laboratory, Eastman Kodak Company, as end-user searchers on the DIALOG system searching primarily the "Chemical Abstracts" database. Training, level of use, online browsing, types of searches, satisfaction, costs, and value of end-user searching are highlighted.…
Using the TSAR Electromagnetic modeling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennock, S.T.; Laguna, G.W.
1993-09-01
A new user, upon receipt of the TSAR EM modeling system, may be overwhelmed by the number of software packages to learn and the number of manuals associated with those packages. This document describes the creation of a simple TSAR model, beginning with an MGED solid and continuing the process through final results from TSAR. It is not intended to be a complete description of all the parts of the TSAR package. Rather, it is intended simply to touch on all the steps in the modeling process and to take a new user through the system from start to finish. There are six basic parts to the TSAR package. The first, MGED, is part of the BRL-CAD package and is used to create a solid model. The second part, ANASTASIA, is the program used to sample the solid model and create a finite-difference mesh. The third program, IMAGE, lets the user view the mesh itself and verify its accuracy. If everything about the mesh is correct, the process continues to the fourth step, SETUP-TSAR, which creates the parameter files for compiling TSAR and the input file for running a particular simulation. The fifth step is actually running TSAR, the field modeling program. Finally, the output from TSAR is placed into SIG, B2RAS, or another program for post-processing and plotting. Each of these steps is described below. The best way to learn the TSAR software is to actually create and run a simple test problem. As an example of how to use the TSAR package, let's create a sphere with a rectangular internal cavity, with conical and cylindrical penetrations connecting the outside to the inside, and find the electric field inside the cavity when the object is exposed to a Gaussian plane wave. We will begin with the solid modeling software, MGED, a part of the BRL-CAD modeling release.
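The six-stage toolchain described in this abstract (MGED → ANASTASIA → IMAGE → SETUP-TSAR → TSAR → SIG/B2RAS) can be summarized as a simple pipeline. The sketch below is purely illustrative: the tool names come from the abstract, but the output file names and the dry-run driver are invented for this example and do not reflect the real TSAR interfaces.

```python
# Hypothetical dry-run of the six-step TSAR workflow; artifact names
# (e.g. "sphere_cavity.g") are invented for illustration only.
PIPELINE = [
    ("mged",       "solid model (BRL-CAD)",       "sphere_cavity.g"),
    ("anastasia",  "finite-difference mesh",      "sphere_cavity.mesh"),
    ("image",      "mesh inspection",             "sphere_cavity.view"),
    ("setup-tsar", "parameter and input files",   "sphere_cavity.in"),
    ("tsar",       "field simulation",            "sphere_cavity.out"),
    ("sig",        "post-processing and plots",   "sphere_cavity.plots"),
]

def dry_run(pipeline):
    """Print each stage and return the ordered artifacts it would produce."""
    artifacts = []
    for tool, purpose, output in pipeline:
        artifacts.append(output)
        print(f"{tool:>10}: {purpose} -> {output}")
    return artifacts
```

Each stage consumes the artifact of the previous one, which is why the abstract stresses verifying the mesh (step three) before committing to a full TSAR run.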
Querying databases of trajectories of differential equations: Data structures for trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1989-01-01
One approach to qualitative reasoning about dynamical systems is to extract qualitative information by searching or making queries on databases containing very large numbers of trajectories. The efficiency of such queries depends crucially upon finding an appropriate data structure for trajectories of dynamical systems. Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. A data structure is defined to represent trajectories of dynamical systems, and an algorithm is sketched which answers queries.
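The abstract does not spell out the data structure or algorithm, but the query it poses — given a path η and a norm on paths, find the closest stored trajectory — can be sketched minimally. The representation below (trajectories as point sequences sampled at common times, a discrete sup-norm, and a linear scan) is an assumption for illustration, not the paper's actual data structure.

```python
import math

# Minimal sketch, assuming trajectories are stored as sequences of sampled
# points in R^N at common sample times. This is NOT the paper's structure.

def sup_norm_distance(gamma, eta):
    """Discrete sup-norm: max over sample times of the Euclidean distance
    between corresponding points of the two paths."""
    return max(math.dist(p, q) for p, q in zip(gamma, eta))

def nearest_trajectory(database, eta):
    """Return the stored trajectory gamma minimizing ||gamma - eta||."""
    return min(database, key=lambda gamma: sup_norm_distance(gamma, eta))
```

A real trajectory database would replace the linear scan with an index exploiting the structure of the paths; the point here is only how a norm on the space of paths turns "query the database" into a nearest-neighbor problem.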
Smart internet search engine through 6W
NASA Astrophysics Data System (ADS)
Goehler, Stephen; Cader, Masud; Szu, Harold
2006-04-01
Current Internet search engine technology is limited in its ability to display the relevant information a user needs. Yahoo, Google, and Microsoft use lookup tables or indexes, which limits users' ability to find their desired information. While these companies have improved their results over the years by enhancing their existing technology and algorithms with specialized heuristics such as PageRank, there is a need for a next-generation smart search engine that can effectively interpret the relevance of user searches and provide the actual information requested. This paper explores whether a smarter Internet search engine can effectively fulfill a user's needs through the use of 6W representations.
Digital Archive Issues from the Perspective of an Earth Science Data Producer
NASA Technical Reports Server (NTRS)
Barkstrom, Bruce R.
2004-01-01
Contents include the following: Introduction. A Producer Perspective on Earth Science Data. Data Producers as Members of a Scientific Community. Some Unique Characteristics of Scientific Data. Spatial and Temporal Sampling for Earth (or Space) Science Data. The Influence of the Data Production System Architecture. The Spatial and Temporal Structures Underlying Earth Science Data. Earth Science Data File (or Relation) Schemas. Data Producer Configuration Management Complexities. The Topology of Earth Science Data Inventories. Some Thoughts on the User Perspective. Science Data User Communities. Spatial and Temporal Structure Needs of Different Users. User Spatial Objects. Data Search Services. Inventory Search. Parameter (Keyword) Search. Metadata Searches. Documentation Search. Secondary Index Search. Print Technology and Hypertext. Inter-Data Collection Configuration Management Issues. An Archive View. Producer Data Ingest and Production. User Data Searching and Distribution. Subsetting and Supersetting. Semantic Requirements for Data Interchange. Tentative Conclusions. An Object Oriented View of Archive Information Evolution. Scientific Data Archival Issues. A Perspective on the Future of Digital Archives for Scientific Data. References Index for this paper.
Multimedia Web Searching Trends.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Spink, Amanda; Ozmutlu, H. Cenk
2002-01-01
Examines and compares multimedia Web searching by Excite and FAST search engine users in 2001. Highlights include audio and video queries; time spent on searches; terms per query; ranking of the most frequently used terms; and differences in Web search behaviors of U.S. and European Web users. (Author/LRW)
Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search
ERIC Educational Resources Information Center
McDonnell, Michael; Shiri, Ali
2011-01-01
Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…
NASA Astrophysics Data System (ADS)
Sorokina, Svetlana; Zaichkina, Svetlana; Dyukina, Alsu; Rozanova, Olga; Balakin, Vladimir; Peleshko, Vladimir; Romanchenko, Sergey; Smirnova, Helena; Aptikaeva, Gella; Shemyakov, Alexander
Over the past ten years, one of the major problems of modern radiobiology has been the study of radiation-protective mechanisms, both with the help of different substances and through activation of the internal resources of the organism. Internal resources mean such phenomena as hormesis and the adaptive response, which represent a cell or body reaction to low doses of inducing factors and predetermine resistance to a subsequent high-dose exposure. At present, special interest is attracted by studies of the biological effects of low-dose-rate high-LET radiation, driven by the search for new types of radiation for more effective cancer therapy and for new methods of radiation protection. Since natural biologically active substances have low toxicity and are capable of affecting physiological processes in the human organism and enhancing the organism's natural defense system, interest in protective means of vegetal origin and the search for special food supplements intensifies every year. The purpose of this study is to investigate the combined influence of a food supplement, low-dose-rate high-LET radiation simulating high-altitude flight conditions, and X-ray radiation on radiosensitivity, induction of the radiation adaptive response (RAR), and growth of Ehrlich ascites carcinoma. Experiments were performed with male SHK mice at the age of two months. The animals were irradiated with low-dose-rate high-LET radiation at a dose of 11.6 cGy (0.5 cGy/day) behind the concrete shield of the 70 GeV proton accelerator (Protvino). The X-ray irradiation was carried out on the RTH device at a voltage of 200 kV (1 Gy/min; Pushchino). The diet included products containing large amounts of biologically active substances: soybean meat, buckwheat, lettuce leaves, and a cod-liver oil preparation. Four groups of mice were fed the selected products mentioned above during the whole irradiation period of 22 days. The control groups received the same food without irradiation.
The ratio of the food supplement to the quantity of standard food was selected experimentally. To determine the level of radiosensitivity, all groups of mice were subjected to X-radiation at a dose of 1.5 Gy, and for induction of RAR the animals were irradiated according to the standard scheme (10 cGy + 1.5 Gy). The influence of the food supplement on the growth of the solid tumor was estimated by measuring the size of the tumor at different times after inoculation of ascitic cells s.c. into the femur. The percentage of polychromatic erythrocytes (PCE) with micronuclei (MN) in bone marrow served as the criterion of cytogenetic damage. The results of the study indicate that: 1) under the influence of high-LET radiation at a dose of 11.6 cGy, mice that received the dietary supplement showed a reduction of PCE with MN to the level of natural background radiation compared with mice that received only standard food; 2) a diet containing soybean, buckwheat, or greens, unlike cod-liver oil, reduces the sensitivity of mice to X-radiation at a dose of 1.5 Gy and causes a significant slowdown in the growth of Ehrlich carcinoma; 3) the combined effect of high-LET radiation and the food supplements (except for cod-liver oil) reduces the sensitivity of mice to irradiation at a dose of 1.5 Gy, demonstrating the ability to induce RAR, unlike mice irradiated with high-LET radiation alone, and causes a slowdown in the growth rate of Ehrlich carcinoma in contrast to mice irradiated only with high-LET radiation at a dose of 11.6 cGy; 4) the combined effect of high-LET radiation and the food supplements (except for cod-liver oil) does not influence the magnitude of RAR induced according to the standard scheme (10 cGy + 1.5 Gy).
Exploration of Web Users' Search Interests through Automatic Subject Categorization of Query Terms.
ERIC Educational Resources Information Center
Pu, Hsiao-tieh; Yang, Chyan; Chuang, Shui-Lung
2001-01-01
Proposes a mechanism that carefully integrates human and machine efforts to explore Web users' search interests. The approach consists of a four-step process: extraction of core terms; construction of subject taxonomy; automatic subject categorization of query terms; and observation of users' search interests. Research findings are proved valuable…
ERIC Educational Resources Information Center
Academy for Educational Development, Washington, DC.
This CD-ROM is part of an interactive and dynamic multimedia package of information and games for learning K'iche' and Ixil. The CD-ROMs help bilingual pre-service teachers improve their reading, writing, and listening comprehension skills in their own Mayan language. After a musical and colorful introduction, users may choose introductions to…
Let's Talk about Digital Learners in the Digital Era
ERIC Educational Resources Information Center
Gallardo-Echenique, Eliana Esther; Marqués-Molías, Luis; Bullen, Mark; Strijbos, Jan-Willem
2015-01-01
This paper reports on a literature review of the concept of "Digital Natives" and related terms. More specifically, it reports on the idea of a homogeneous generation of prolific and skilled users of digital technology born between 1980 and 1994. In all, 127 articles published between 1991 and 2014 were reviewed. On the basis of the…
Faculty/Student Surveys Using Open Source Software
ERIC Educational Resources Information Center
Kaceli, Sali
2004-01-01
This session will highlight an easy survey package that lets non-technical users create surveys, administer them, gather results, and view statistics. This is an open source application managed entirely online via a web browser. By using phpESP, faculty are given the freedom of creating various surveys at their convenience and linking them to their…
Books, Books, Books--Let Us Read: A Library Serving Sheltered and Incarcerated Youth.
ERIC Educational Resources Information Center
Carlson, Pam
1994-01-01
Describes the growth and development of a library program serving a shelter for abused and neglected children and youth and a juvenile detention center in Orange County (California). Program funding, materials preferred by teen users, library management, special events, and problems are discussed. Teen patrons and their use of the services are…
Brain Gym: Let the User Beware
ERIC Educational Resources Information Center
Kroeze, Kevin; Hyatt, Keith; Lambert, Chuck
2015-01-01
As part of the No Child Left Behind Act of 2001 and the Individuals with Disabilities Education Improvement Act of 2004, schools are called upon to provide students with academic instruction using scientific, research-based methods whenever possible. One of these supposed research-based methods is a program by the name of Brain Gym®. Brain Gym® is…
The IRI/LDEO Climate Data Library: Helping People use Climate Data
NASA Astrophysics Data System (ADS)
Blumenthal, M. B.; Grover-Kopec, E.; Bell, M.; del Corral, J.
2005-12-01
The IRI Climate Data Library (http://iridl.ldeo.columbia.edu/) is a library of datasets. By library we mean a collection of things, collected from both near and far, designed to make them more accessible to the library's users. Our datasets come from many different sources, many different "data cultures", and many different formats. By dataset we mean a collection of data organized as multidimensional dependent variables, independent variables, and sub-datasets, along with the metadata (particularly use-metadata) that makes it possible to interpret the data in a meaningful manner. Ingrid, which provides the infrastructure for the Data Library, is an environment that lets one work with datasets: read, write, request, serve, view, select, calculate, and transform them. It hides an extraordinary amount of technical detail from the user, letting the user think in terms of manipulations of datasets rather than manipulations of files of numbers. Among other things, this hidden technical detail could be accessing data on servers in other places, doing only the small needed portion of an enormous calculation, or translating to and from a variety of formats and between "data cultures". These operations are presented as a collection of virtual directories and documents on a web server, so that an ordinary web client can instantiate a calculation simply by requesting the resulting document or image. Building on this infrastructure, we (and others) have created collections of dynamically updated images to facilitate monitoring aspects of the climate system, as well as linking these images to the underlying data. We have also created specialized interfaces to address the particular needs of user groups that IRI needs to support.
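The key idea above — that a calculation is instantiated simply by requesting a URL whose path segments name the dataset selections and transforms — can be sketched as URL composition. The dataset name and operator segments below are invented placeholders, not a real IRI/LDEO endpoint path.

```python
# Hypothetical sketch of Ingrid-style "calculation as URL": each selection
# or transform is one more virtual-directory segment appended to the path.
# The dataset and operator names here are illustrative assumptions.
BASE = "http://iridl.ldeo.columbia.edu/SOURCES"

def virtual_url(dataset, operations):
    """Compose dataset selections/transforms into one requestable URL."""
    segments = [BASE, dataset] + list(operations)
    return "/".join(segments)

url = virtual_url(".EXAMPLE/.sst",
                  ["T/(Jan 1990)/(Dec 1999)/RANGE", "yearly-average"])
```

An ordinary HTTP GET of such a URL would then be enough to trigger the server-side computation and return the resulting document or image, which is what makes the approach usable from any web client.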
Improving visual search in instruction manuals using pictograms.
Kovačević, Dorotea; Brozović, Maja; Možina, Klementina
2016-11-01
Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched-for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.
Modeling Guru: Knowledge Base for NASA Modelers
NASA Astrophysics Data System (ADS)
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, users can add "tags" to their threads to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others.
The default "wiki" document lets users edit within the browser so others can easily collaborate on the same document, even allowing the author to select those who may edit and approve the document. To maintain knowledge integrity, all documents are moderated before they are visible to the public. Modeling Guru, running on Clearspace by Jive Software, has been an active resource to the NASA modeling and HEC communities for more than a year and currently has more than 100 active users. SIVO will soon install live instant messaging support, as well as a user-customizable homepage with social-networking features. In addition, SIVO plans to implement a large dataset/file storage capability so that users can quickly and easily exchange datasets and files with one another. Continued active community participation combined with periodic software updates and improved features will ensure that Modeling Guru remains a vibrant, effective, easy-to-use tool for the NASA scientific community.
Using Internet Search Engines to Obtain Medical Information: A Comparative Study
Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun
2012-01-01
Background The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. Objective To compare major Internet search engines in their usability of obtaining medical and health information. Methods We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Results Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. 
The search results highly overlapped between the search engines, and the overlap between any two search engines was about half or more. On the other hand, each search engine emphasized various types of content differently. In terms of user satisfaction analysis, volunteer users scored Bing the highest for its usefulness, followed by Yahoo!, Google, and Ask.com. Conclusions Google, Yahoo!, Bing, and Ask.com are by and large effective search engines for helping lay users get health and medical information. Nevertheless, the current ranking methods have some pitfalls and there is room for improvement to help users get more accurate and useful information. We suggest that search engine users explore multiple search engines to search different types of health information and medical knowledge for their own needs and get a professional consultation if necessary. PMID:22672889
Using Internet search engines to obtain medical information: a comparative study.
Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun; Xu, Dong
2012-05-16
The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. To compare major Internet search engines in their usability of obtaining medical and health information. We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. 
The search results highly overlapped between the search engines, and the overlap between any two search engines was about half or more. On the other hand, each search engine emphasized various types of content differently. In terms of user satisfaction analysis, volunteer users scored Bing the highest for its usefulness, followed by Yahoo!, Google, and Ask.com. Google, Yahoo!, Bing, and Ask.com are by and large effective search engines for helping lay users get health and medical information. Nevertheless, the current ranking methods have some pitfalls and there is room for improvement to help users get more accurate and useful information. We suggest that search engine users explore multiple search engines to search different types of health information and medical knowledge for their own needs and get a professional consultation if necessary.
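The overlap analysis reported above ("the overlap between any two search engines was about half or more") amounts to comparing the saved top-link sets pairwise. The sketch below shows one plausible way to compute such a statistic; the engine names and link lists are toy data, and defining overlap relative to the smaller set is an assumption of this example, not necessarily the measure used in the study.

```python
from itertools import combinations

# Sketch of a pairwise overlap analysis on saved top-result link sets.
# Toy data; real inputs would be the top-200 links saved from each engine.

def overlap(a, b):
    """Fraction of the smaller result set that also appears in the other."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

def pairwise_overlaps(results_by_engine):
    """Overlap fraction for every unordered pair of engines."""
    return {
        (x, y): overlap(results_by_engine[x], results_by_engine[y])
        for x, y in combinations(sorted(results_by_engine), 2)
    }
```

With four engines this yields six pairwise fractions, which can be inspected directly against the "about half or more" observation.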
Dynamic User Interfaces for Service Oriented Architectures in Healthcare.
Schweitzer, Marco; Hoerbst, Alexander
2016-01-01
Electronic Health Records (EHRs) play a crucial role in healthcare today. Considering a data-centric view, EHRs are very advanced, as they provide and share healthcare data in a cross-institutional and patient-centered way while adhering to high syntactic and semantic interoperability. However, the EHR functionalities available to end users are sparse and hence often limited to basic document query functions. Future EHR use necessitates letting users define the data they need in a given situation and how these data should be processed. Workflow and semantic modelling approaches, as well as Web services, provide means to fulfil such a goal. This thesis develops concepts for dynamic interfaces between EHR end users and a service-oriented eHealth infrastructure, which allow users to express their flexible EHR needs, modeled in a dynamic and formal way. These models are used to discover, compose, and execute the right Semantic Web services.
Search Pathways: Modeling GeoData Search Behavior to Support Usable Application Development
NASA Astrophysics Data System (ADS)
Yarmey, L.; Rosati, A.; Tressel, S.
2014-12-01
Recent technical advances have enabled development of new scientific data discovery systems. Metadata brokering, linked data, and other mechanisms allow users to discover scientific data of interest across growing volumes of heterogeneous content. As this complex content is matched with existing discovery technologies, people looking for scientific data are presented with an ever-growing array of features to sort, filter, subset, and scan through search returns to help them find what they are looking for. This paper examines the applicability of available technologies in connecting searchers with the data of interest. What metrics can be used to track success given shifting baselines of content and technology? How well do existing technologies map to steps in user search patterns? Taking a user-driven development approach, the team behind the Arctic Data Explorer interdisciplinary data discovery application invested heavily in usability testing and user search behavior analysis. Building on earlier library community search behavior work, models were developed to better define the diverse set of thought processes and steps users took to find data of interest, here called "search pathways." This research builds a deeper understanding of the user community that seeks to reuse scientific data. This approach ensures that development decisions are driven by clearly articulated user needs instead of ad hoc technology trends. Initial results from this research will be presented along with lessons learned for other discovery platform development and future directions for informatics research into search pathways.
The Custom Search allows users to search for and generate customized data downloads of pollutant loadings information. Users can select varying levels of detail for outputs: annual, monitoring period, and facility level.
Should I Let My Child Watch Television?
ERIC Educational Resources Information Center
Bharadwaj, Balaji
2013-01-01
While the prevalence of autism has been increasing globally, there is a search for the causative factors behind the rise. The point of view presented here examines the possibility of children brought up in social deprivation and watching television being at higher risk for developing autistic symptoms. The association is evident in the clinical…
Correlation Revelation: The Search for Meaning in Pearson's Coefficient
ERIC Educational Resources Information Center
Huhn, Craig
2016-01-01
When the author was first charged with getting a group of students to understand the correlation coefficient, he did not anticipate the topic would challenge his own understanding, let alone cause him to eventually question the very nature of mathematics itself. On the surface, the idea seemed straightforward, one that millions of students across…
Developing a distributed HTML5-based search engine for geospatial resource discovery
NASA Astrophysics Data System (ADS)
ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.
2013-12-01
With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery is still challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users who use different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g., Silverlight, Flash, or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; and (3) the HTML5-based search engine is able to provide unified access capabilities for users with different devices (e.g., tablets and smartphones).
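The brokering and asynchronous retrieval ideas above can be sketched in Python; the catalog endpoints here are stub functions standing in for real distributed GCI services, and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-ins for distributed GCI catalog endpoints; a real
# broker would issue CSW/OpenSearch requests over HTTP instead.
def query_catalog_a(keyword):
    return [{"title": "Global land cover", "source": "catalog-a"}]

def query_catalog_b(keyword):
    return [{"title": "Sea surface temperature", "source": "catalog-b"}]

def brokered_search(keyword, catalogs):
    """Fan a query out to every catalog and collect records as each
    catalog responds, so a UI could render results incrementally."""
    results = []
    with ThreadPoolExecutor(max_workers=len(catalogs)) as pool:
        futures = [pool.submit(q, keyword) for q in catalogs]
        for future in as_completed(futures):
            results.extend(future.result())  # append as responses arrive
    return results

records = brokered_search("temperature", [query_catalog_a, query_catalog_b])
```

Because records are consumed with `as_completed`, a slow catalog does not block results from the faster ones.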
A Detailed Analysis of End-User Search Behaviors.
ERIC Educational Resources Information Center
Wildemuth, Barbara M.; And Others
1991-01-01
Discussion of search strategy formulation focuses on a study at the University of North Carolina at Chapel Hill that analyzed how medical students developed and revised search strategies for microbiology database searches. Implications for future research on search behavior, for system interface design, and for end user training are suggested. (16…
Analysis of Users' Searches of CD-ROM Databases in the National and University Library in Zagreb.
ERIC Educational Resources Information Center
Jokic, Maja
1997-01-01
Investigates the search behavior of CD-ROM database users in Zagreb (Croatia) libraries: one group needed a minimum of technical assistance, and the other was completely independent. Highlights include the use of questionnaires and transaction log analysis and the need for end-user education. The questionnaire and definitions of search process…
The Searching Behavior of Remote Users: A Study of One Online Public Access Catalog (OPAC).
ERIC Educational Resources Information Center
Kalin, Sally W.
1991-01-01
Describes a study that was conducted to determine whether the searching behavior of remote users of LIAS (Library Information Access System), Pennsylvania State University's online public access catalog (OPAC), differed from those using the OPAC within the library. Differences in search strategies and in user satisfaction are discussed. (eight…
What Friends Are For: Collaborative Intelligence Analysis and Search
2014-06-01
Subject terms: Intelligence Community, information retrieval, recommender systems, search engines, social networks, user profiling, Lucene. The approach shows improvements over existing search systems, and the improvements are robust to high levels of human error and low similarity between users. Abbreviations: NOLH, nearly orthogonal Latin hypercubes; P@, precision at documents; RS, recommender systems; TREC, Text REtrieval Conference; USM, user
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
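A minimal sketch of the relevance-network idea, assuming a simple term-overlap generalization (the paper's actual model is richer; all names and data here are hypothetical):

```python
from collections import defaultdict

class RelevanceNetwork:
    """Record per-(profile, query) relevance feedback on references,
    then rank references for later, similar queries."""
    def __init__(self):
        # (profile, query term) -> {reference: accumulated relevance}
        self.scores = defaultdict(lambda: defaultdict(float))

    def feedback(self, profile, query, reference, relevant):
        # Credit (or debit) every query term, so the knowledge
        # generalizes to later queries sharing terms with this one.
        for term in query.split():
            self.scores[(profile, term)][reference] += 1.0 if relevant else -1.0

    def rank(self, profile, query):
        totals = defaultdict(float)
        for term in query.split():
            for ref, s in self.scores[(profile, term)].items():
                totals[ref] += s
        return sorted(totals, key=totals.get, reverse=True)

net = RelevanceNetwork()
net.feedback("pilot", "engine maintenance", "doc-7", relevant=True)
net.feedback("pilot", "engine overhaul", "doc-2", relevant=True)
ranking = net.rank("pilot", "engine repair")  # shares the term "engine"
```

No prior training is needed: the network starts empty and only ever learns from explicit user feedback, which matches the user-centered control the abstract emphasizes.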
Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.
Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin
2014-10-01
With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
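The normalized term frequency and fuzzy query satisfaction described above can be illustrated with a small Python sketch; the exact formulas in the article may differ, and the tags and data here are invented:

```python
def tag_profile(annotations):
    """Normalized term frequency: a user's preference for each tag,
    scaled by that user's most-used tag (one common normalization;
    the article's exact formula may differ)."""
    counts = {}
    for tag in annotations:
        counts[tag] = counts.get(tag, 0) + 1
    top = max(counts.values())
    return {tag: c / top for tag, c in counts.items()}

def query_satisfaction(profile, resource_tags, query_tags):
    """Treat query relevance as fuzzy satisfaction: the resource must
    satisfy every query tag, so take the minimum degree over them."""
    return min(
        profile.get(t, 0.0) if t in resource_tags else 0.0
        for t in query_tags
    )

user = tag_profile(["jazz", "jazz", "jazz", "live", "piano", "jazz"])
score = query_satisfaction(user, {"jazz", "live"}, ["jazz", "live"])
```

Using `min` (a fuzzy AND) rather than keyword matching means one completely unsatisfied query tag drives the whole score to zero, which is the "satisfaction problem" framing rather than a similarity measurement.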
Search Engines: Gateway to a New ``Panopticon''?
NASA Astrophysics Data System (ADS)
Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia
Nowadays, Internet users depend on various search engines in order to find requested information on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, will be presented as an alternative to the privacy-intrusive existing practices of search engines.
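Two anonymisation measures commonly discussed in this context, salted pseudonymization of user identifiers and IP truncation, might look like the following Python sketch (field names and salt handling are purely illustrative, not any engine's actual practice):

```python
import hashlib

def anonymize_log_entry(entry, salt):
    """Pseudonymize the user id with a salted hash and truncate the
    last octet of the IPv4 address (generalize to a /24 block)."""
    out = dict(entry)  # leave the original record untouched
    digest = hashlib.sha256((salt + entry["user_id"]).encode()).hexdigest()
    out["user_id"] = digest[:16]
    octets = entry["ip"].split(".")
    out["ip"] = ".".join(octets[:3] + ["0"])
    return out

entry = {"user_id": "alice", "ip": "192.168.10.77", "query": "rare disease"}
safe = anonymize_log_entry(entry, salt="rotate-me-periodically")
```

Note that hashing alone is reversible by brute force over a small identifier space, which is why retention periods and salt rotation, not just pseudonymization, are central to the debate the paper surveys.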
The Collaborative Search by Tag-Based User Profile in Social Media
Li, Xiaodong; Li, Qing
2014-01-01
Recently, we have witnessed the popularity and proliferation of social media applications (e.g., Delicious, Flickr, and YouTube) in the web 2.0 era. The rapid growth of user-generated data results in the problem of information overload to users. Facing such a tremendous volume of data, it is a big challenge to assist the users to find their desired data. To attack this critical problem, we propose the collaborative search approach in this paper. The core idea is that similar users may have common interests so as to help users to find their demanded data. Similar research has been conducted on the user log analysis in web search. However, the rapid growth and change of user-generated data in social media require us to discover a brand-new approach to address the unsolved issues (e.g., how to profile users, how to measure the similar users, and how to depict user-generated resources) rather than adopting existing method from web search. Therefore, we investigate various metrics to identify the similar users (user community). Moreover, we conduct the experiment on two real-life data sets by comparing the Collaborative method with the latest baselines. The empirical results show the effectiveness of the proposed approach and validate our observations. PMID:25692176
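One of the similarity metrics such a study might investigate is cosine similarity between tag-count profiles; a minimal Python sketch with invented users:

```python
import math

def cosine(u, v):
    """Cosine similarity between two tag-count profiles, one candidate
    metric for identifying similar users (data here is hypothetical)."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

alice = {"travel": 3, "photo": 1}
bob = {"travel": 2, "photo": 2}
carol = {"cooking": 5}

# Alice's user community: other users ranked by profile similarity
others = {"bob": bob, "carol": carol}
community = sorted(others, key=lambda n: cosine(alice, others[n]), reverse=True)
```

Cosine ignores profile size, so a prolific tagger and a casual one with the same interests still score as similar, which is one reason it is a common baseline among the metrics compared in such work.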
Remote evaluation of remote console information retrieval system (NASA/RECON)
NASA Technical Reports Server (NTRS)
Coles, V. L.
1971-01-01
The technique used for NASA user evaluation is described. It consists of sending out an evaluation form with each literature search. Results derived from a compilation of user responses are presented. In an eleven-month period in which evaluation forms went out with 3,001 searches, 33.6% of the forms were completed and returned. The returns showed that 88.5% of the respondents found the searches suitable to their needs, 81% learned of valuable new references from the searches, and 93.5% received the searches in time to meet their needs. The significance of the relevance or precision ratio in relation to user satisfaction is discussed; an extrapolation from user responses yielded a relevance ratio of 49.3%. Some of the general comments found in the responses are analyzed as indicators of what the users expected from the information retrieval service.
Kessler, Sabrina Heike; Zillich, Arne Freya
2018-04-20
In Germany, the Internet is gaining increasing importance for laypeople as a source of health information, including information about vaccination. While previous research has focused on the characteristics of online information about vaccination, this study investigated the influence of relevant user-specific cognitive factors on users' search behavior for online information about vaccination. Additionally, it examined how searching online for information about vaccination influences users' attitudes toward vaccination. We conducted an experimental study with 56 undergraduate students from a German university that consisted of a survey and eye-tracking while browsing the Internet, followed by a content analysis of the eye-tracking data. The results show that the users exposed themselves to balanced and diverse online information about vaccination. However, none of the examined cognitive factors (attitude toward vaccination, attitude salience, prior knowledge about vaccination, need for cognition, and cognitive involvement) influenced the amount of time users spent searching the Internet for information about vaccination. Our study was not able to document any effects of attitude-consistent selective exposure to online information about vaccination. In addition, we found no effect on attitude change after having searched the Internet for vaccine-related information. Thus, users' search behavior regarding vaccination seems to be relatively stable.
Dexter: Data Extractor for scanned graphs
NASA Astrophysics Data System (ADS)
Demleitner, Markus
2011-12-01
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template.
ERIC Educational Resources Information Center
Saracevic, Tefko
2000-01-01
Summarizes a presentation that discussed findings and implications of research projects using an Internet search service and Internet-accessible vendor databases, representing the two sides of public database searching: query formulation and resource utilization. Presenters included: Tefko Saracevic, Amanda Spink, Dietmar Wolfram and Hong Xie.…
Enhancements to the SHARP Build System and NEK5000 Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Bennett, Andrew R.; Billings, Jay Jay
The SHARP project for the Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program provides a multiphysics framework for coupled simulations of advanced nuclear reactor designs. It provides an overall coupling environment that utilizes custom interfaces to couple existing physics codes through a common spatial decomposition and a unique solution transfer component. As of this writing, SHARP couples neutronics, thermal hydraulics, and structural mechanics using PROTEUS, Nek5000, and Diablo, respectively. This report details two primary SHARP improvements regarding the Nek5000 and Diablo individual physics codes: (1) an improved Nek5000 coupling interface that lets SHARP achieve a vast increase in overall solution accuracy by manipulating the structure of the internal Nek5000 spatial mesh, and (2) the capability to seamlessly couple structural mechanics calculations into the framework through improvements to the SHARP build system. The Nek5000 coupling interface now uses a barycentric Lagrange interpolation method that takes the vertex-based power and density computed from the PROTEUS neutronics solver and maps it to the user-specified, general-order Nek5000 spectral element mesh. Before this work, SHARP handled this vertex-based solution transfer in an averaging-based manner. SHARP users can now achieve higher levels of accuracy by specifying any arbitrary Nek5000 spectral mesh order. This improvement takes the average percentage error between the PROTEUS power solution and the Nek5000 interpolated result down drastically, from over 23% to just above 2%, and maintains the correct power profile. We have integrated Diablo into the SHARP build system to facilitate the future coupling of structural mechanics calculations into SHARP. Previously, simulations involving Diablo were done in an iterative manner, required a large amount of manual work, and were left as a task for advanced users.
This report will detail a new Diablo build system that was implemented using GNU Autotools, mirroring much of the current SHARP build system and easing the use of structural mechanics calculations for end-users of the SHARP multiphysics framework. It lets users easily build and use Diablo as a stand-alone simulation, as well as fully couple it with the other SHARP physics modules. The top-level SHARP build system was modified to allow Diablo to hook in directly. New dependency handlers were implemented to let SHARP users easily build the framework with these new simulation capabilities. The remainder of this report will describe this work in full, with a detailed discussion of the overall design philosophy of SHARP, the new solution interpolation method introduced, and the Diablo integration work. We will conclude with a discussion of possible future SHARP improvements that will serve to increase solution accuracy and framework capability.
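The barycentric Lagrange interpolation behind the new Nek5000 coupling interface can be illustrated in one dimension; SHARP itself maps 3-D spectral-element fields, and this Python sketch only shows the formula:

```python
def barycentric_weights(xs):
    """Weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    w = []
    for j, xj in enumerate(xs):
        p = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                p *= xj - xk
        w.append(1.0 / p)
    return w

def interpolate(x, xs, ys, w):
    """Second (true) form of the barycentric Lagrange formula."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, w):
        if x == xj:  # query point sits exactly on a node
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den

# Map a vertex-based profile (values of (2x+1)^2 at 3 nodes) to a new point.
nodes = [0.0, 0.5, 1.0]
power = [1.0, 4.0, 9.0]
w = barycentric_weights(nodes)
value = interpolate(0.25, nodes, power, w)  # exact for a quadratic: 2.25
```

Unlike simple averaging, polynomial interpolation of this kind reproduces smooth fields exactly up to the mesh order, which is consistent with the error reduction from over 23% to about 2% reported above.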
A study on PubMed search tag usage pattern: association rule mining of a full-day PubMed query log.
Mosa, Abu Saleh Mohammad; Yoo, Illhoi
2013-01-09
The practice of evidence-based medicine requires efficient biomedical literature search such as PubMed/MEDLINE. Retrieval performance relies highly on the efficient use of search field tags. The purpose of this study was to analyze PubMed log data in order to understand the usage pattern of search tags by the end user in PubMed/MEDLINE search. A PubMed query log file was obtained from the National Library of Medicine containing anonymous user identification, timestamp, and query text. Inconsistent records were removed from the dataset and the search tags were extracted from the query texts. A total of 2,917,159 queries were selected for this study issued by a total of 613,061 users. The analysis of frequent co-occurrences and usage patterns of the search tags was conducted using an association mining algorithm. The percentage of search tag usage was low (11.38% of the total queries) and only 2.95% of queries contained two or more tags. Three out of four users used no search tag and about two-thirds of them issued less than four queries. Among the queries containing at least one tagged search term, the average number of search tags was almost half of the number of total search terms. Navigational search tags are more frequently used than informational search tags. While no strong association was observed between informational and navigational tags, six (out of 19) informational tags and six (out of 29) navigational tags showed strong associations in PubMed searches. The low percentage of search tag usage implies that PubMed/MEDLINE users do not utilize the features of PubMed/MEDLINE widely or they are not aware of such features or solely depend on the high recall focused query translation by the PubMed's Automatic Term Mapping. The users need further education and interactive search application for effective use of the search tags in order to fulfill their biomedical information needs from PubMed/MEDLINE.
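The co-occurrence analysis can be illustrated with a toy support/confidence miner in Python; the study used a full association mining algorithm, and the query log and thresholds below are invented:

```python
from itertools import combinations
from collections import Counter

def tag_rules(queries, min_support=0.2, min_conf=0.6):
    """Toy association rule miner over per-query tag sets: report rules
    x -> y whose support and confidence exceed the thresholds."""
    n = len(queries)
    single = Counter(t for q in queries for t in set(q))
    pair = Counter(p for q in queries for p in combinations(sorted(set(q)), 2))
    rules = []
    for (a, b), c in pair.items():
        support = c / n  # fraction of queries containing both tags
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):
            conf = c / single[x]  # P(y in query | x in query)
            if conf >= min_conf:
                rules.append((x, y, support, conf))
    return rules

# Tiny invented log: [au] = author tag, [ti] = title tag, [dp] = date tag
logs = [["au", "ti"], ["au", "ti"], ["au"], ["dp"], ["au", "ti"]]
rules = tag_rules(logs)
```

With this data the miner reports a strong association in both directions between the author and title tags, the kind of tag pairing the study's Apriori-style analysis surfaced at scale.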
A Study on Pubmed Search Tag Usage Pattern: Association Rule Mining of a Full-day Pubmed Query Log
2013-01-01
Background The practice of evidence-based medicine requires efficient biomedical literature search such as PubMed/MEDLINE. Retrieval performance relies highly on the efficient use of search field tags. The purpose of this study was to analyze PubMed log data in order to understand the usage pattern of search tags by the end user in PubMed/MEDLINE search. Methods A PubMed query log file was obtained from the National Library of Medicine containing anonymous user identification, timestamp, and query text. Inconsistent records were removed from the dataset and the search tags were extracted from the query texts. A total of 2,917,159 queries were selected for this study issued by a total of 613,061 users. The analysis of frequent co-occurrences and usage patterns of the search tags was conducted using an association mining algorithm. Results The percentage of search tag usage was low (11.38% of the total queries) and only 2.95% of queries contained two or more tags. Three out of four users used no search tag and about two-thirds of them issued less than four queries. Among the queries containing at least one tagged search term, the average number of search tags was almost half of the number of total search terms. Navigational search tags are more frequently used than informational search tags. While no strong association was observed between informational and navigational tags, six (out of 19) informational tags and six (out of 29) navigational tags showed strong associations in PubMed searches. Conclusions The low percentage of search tag usage implies that PubMed/MEDLINE users do not utilize the features of PubMed/MEDLINE widely or they are not aware of such features or solely depend on the high recall focused query translation by the PubMed’s Automatic Term Mapping. The users need further education and interactive search application for effective use of the search tags in order to fulfill their biomedical information needs from PubMed/MEDLINE. PMID:23302604
Patterns of Information-Seeking for Cancer on the Internet: An Analysis of Real World Data
Ofran, Yishai; Paltiel, Ora; Pelleg, Dan; Rowe, Jacob M.; Yom-Tov, Elad
2012-01-01
Although traditionally the primary information sources for cancer patients have been the treating medical team, patients and their relatives increasingly turn to the Internet, though this source may be misleading and confusing. We assess Internet searching patterns to understand the information needs of cancer patients and their acquaintances, as well as to discern their underlying psychological states. We screened 232,681 anonymous users who initiated cancer-specific queries on the Yahoo Web search engine over three months, and selected for study users with high levels of interest in this topic. Searches were partitioned by expected survival for the disease being searched. We compared the search patterns of anonymous users and their contacts. Users seeking information on aggressive malignancies exhibited shorter search periods, focusing on disease- and treatment-related information. Users seeking knowledge regarding more indolent tumors searched for longer periods, alternated between different subjects, and demonstrated a high interest in topics such as support groups. Acquaintances searched for longer periods than the proband user when seeking information on aggressive (compared to indolent) cancers. Information needs can be modeled as transitioning between five discrete states, each with a unique signature representing the type of information of interest to the user. Thus, early phases of information-seeking for cancer follow a specific dynamic pattern. Areas of interest are disease dependent and vary between probands and their contacts. These patterns can be used by physicians and medical Web site authors to tailor information to the needs of patients and family members. PMID:23029317
ADS's Dexter Data Extraction Applet
NASA Astrophysics Data System (ADS)
Demleitner, M.; Accomazzi, A.; Eichhorn, G.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template. This contribution both describes the operation of Dexter from a user's point of view and discusses some of the architectural issues we faced during implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quam, W.; Del Duca, T.; Plake, W.
This paper describes a pocket-calculator-sized, neutron-sensitive, REM-responding personnel dosimeter that uses three tissue-equivalent cylindrical proportional counters as neutron-sensitive detectors. These are conventionally called Linear Energy Transfer (LET) counters. Miniaturized hybrid circuits are used for the linear pulse-handling electronics, followed by a 256-channel ADC. A CMOS microprocessor is used to calculate REM exposure from the basic rads-tissue data supplied by the LET counters and also to provide timing and display functions. The instrument continuously accumulates time in hours since reset, total counts accumulated, rads-tissue, and REM. The user can display any one of these items or a channel number (an aid in calibration) at any time. Such data are provided with a precision of ±3% for a total exposure of 1 mREM over eight hours.
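The microprocessor's REM calculation amounts to weighting absorbed dose by a quality factor that depends on LET; a simplified Python sketch follows, where the bin boundaries and Q values are rough illustrations, not the instrument's calibration:

```python
def quality_factor(let_kev_um):
    """Step approximation of a quality factor Q(LET); a real dosimeter
    applies a regulatory Q(L) relationship, not these illustrative bins."""
    if let_kev_um < 3.5:
        return 1.0
    if let_kev_um < 7.0:
        return 2.0
    if let_kev_um < 23.0:
        return 5.0
    if let_kev_um < 53.0:
        return 10.0
    return 20.0

def rem_from_spectrum(bins):
    """Dose equivalent (rem) from (LET, rads-tissue) pairs, mirroring
    the job of weighting absorbed dose by Q(LET) channel by channel."""
    return sum(quality_factor(let) * rad for let, rad in bins)

# Hypothetical 3-bin spectrum accumulated from the LET counters
dose = rem_from_spectrum([(1.0, 0.002), (10.0, 0.001), (60.0, 0.0005)])
```

This is why the instrument needs the 256-channel ADC: the pulse-height spectrum approximates the LET distribution, and each channel's absorbed dose is weighted before being summed into REM.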
2007-05-01
Actinide product radionuclides, actinides, and fission products in fallout are considered. Doses from low-linear energy transfer (LET) radiation (beta particles and gamma rays) are reported separately, with assumptions about the critical parameters used in calculating internal doses: resuspension factor, breathing rate, fractionation, and scenario elements.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-08
... let the user know to use pledge code 01 instead. Effective December 2, 2010, DTC will extend the end... one code. The extended period for pledge affords greater flexibility in determining and securing... the respective rights of DTC or persons using the service. At any time within 60 days of the filing of...
Logging on and Letting out: Using Online Social Networks to Grieve and to Mourn
ERIC Educational Resources Information Center
Carroll, Brian; Landry, Katie
2010-01-01
The purpose of this article is to explore how and why younger Internet users of social networking platforms such as MySpace and Facebook maintain connections with those who have died or been killed. This article, therefore, examines the blurring or blending of interpersonal communication and mass communication via the web as what once was very…
Do-It-Yourself Learning Games: Software That Lets You Pick the Questions--and Answers.
ERIC Educational Resources Information Center
Hively, Wells
1984-01-01
Reviews user-adaptable learning games that can be customized for any subject, including Tic Tac Show and the Game Show from Computer Advanced Ideas, which are question-answer learning programs based on game shows, and Master Match from Computer Advanced Ideas and Square Pairs from Scholastic Inc., which are based on the card game Concentration.…
ERIC Educational Resources Information Center
Ito, Kristin E.; Kalyanaraman, Sri; Ford, Carol A.; Brown, Jane D.; Miller, William C.
2008-01-01
The purpose of this study was to develop and pilot-test an interactive CD-ROM aimed at the prevention of sexually transmitted infections (STIs) in female adolescents. The CD-ROM includes prevention information, models skills for negotiating abstinence and consistent condom use, teaches media literacy, and allows the user to choose a culturally…
Letting Students Use Web 2.0 Tools to Hook One Another on Reading
ERIC Educational Resources Information Center
Ercegovac, Zorana
2012-01-01
In the rapidly changing globalized world, school librarians cannot prepare today's students for every possible outcome, but they can give them the skills that will make them adaptable in 21st-century learning and work settings. The mission for school library programs is to ensure that students are effective users of ideas and information by being…
1985-11-01
Searching Databases without Query-Building Aids: Implications for Dyslexic Users
ERIC Educational Resources Information Center
Berget, Gerd; Sandnes, Frode Eika
2015-01-01
Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…
Hybrid Filtering in Semantic Query Processing
ERIC Educational Resources Information Center
Jeong, Hanjo
2011-01-01
This dissertation presents a hybrid filtering method and a case-based reasoning framework for enhancing the effectiveness of Web search. Web search may not reflect user needs, intent, context, and preferences, because today's keyword-based search is lacking semantic information to capture the user's context and intent in posing the search query.…
MYPLAN - A Mobile Phone Application for Supporting People at Risk of Suicide.
Skovgaard Larsen, Jette L; Frandsen, Hanne; Erlangsen, Annette
2016-05-01
Safety plans have been suggested as an intervention for people at risk of suicide. Given the impulsive character of suicidal ideation, a safety plan in the format of a mobile phone application is likely to be more available and useful than traditional paper versions. The study describes MYPLAN, a mobile phone application designed to support people at risk of suicide by letting them create a safety plan. MYPLAN was developed in collaboration with clinical psychiatric staff at Danish suicide preventive clinics. The mobile application lets the user create an individualized safety plan by filling in templates with strategies, actions, and direct links to contact persons. MYPLAN was developed in 2013 and is freely available in Denmark and Norway. It is designed for iPhone and android platforms. As of December 2015, the application has been downloaded almost 8,000 times. Users at risk of suicide as well as clinical staff have provided positive feedback on the mobile application. Support via mobile phone applications might be particularly useful for younger age groups at risk of suicide as well as in areas or countries where support options are lacking. Yet, it is important to examine the effectiveness of this type of intervention.
The MAR databases: development and implementation of databases specific for marine metagenomics.
Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P
2018-01-04
We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, serving as a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.
Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin
2014-12-01
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such complex, large-scale data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the loyalty dynamics of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
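The flow view above summarizes per-user engine preference over time. A minimal sketch of the underlying aggregation, assuming a hypothetical log format (user id and engine per query event) rather than the authors' actual data model:

```python
from collections import Counter

def dominant_engine(events):
    """Return the engine a user queried most within one time window."""
    return Counter(e["engine"] for e in events).most_common(1)[0][0]

def switching_flows(log_by_window):
    """Count user transitions between dominant engines in consecutive windows.

    log_by_window: list of dicts mapping user_id -> list of query events.
    Returns a Counter of (engine_from, engine_to) transition counts.
    """
    flows = Counter()
    for prev, curr in zip(log_by_window, log_by_window[1:]):
        for user, events in curr.items():
            if user in prev:  # only users active in both windows
                flows[(dominant_engine(prev[user]), dominant_engine(events))] += 1
    return flows

# Two time windows, two users: u2 stays loyal to A, u1 switches to B.
w1 = {"u1": [{"engine": "A"}],
      "u2": [{"engine": "A"}, {"engine": "B"}, {"engine": "A"}]}
w2 = {"u1": [{"engine": "B"}],
      "u2": [{"engine": "A"}]}
flows = switching_flows([w1, w2])
```

Aggregated this way, the (from, to) counts per window pair are exactly the band widths a flow-metaphor view would render.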
EasyKSORD: A Platform of Keyword Search Over Relational Databases
NASA Astrophysics Data System (ADS)
Peng, Zhaohui; Li, Jing; Wang, Shan
Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need to write SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD through which users and system administrators can use and manage different KSORD systems in a simple, unified manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can readily monitor and analyze the operations of KSORD systems and manage them more effectively.
A Mathematical and Sociological Analysis of Google Search Algorithm
2013-01-16
through the collective intelligence of the web to determine a page's importance. Let v be a vector of R^N with N ≥ 8 billion. Any unit vector in R^N is...scrolled up by some artificial hits. Acknowledgment: The authors would like to thank Dr. John Lavery for his encouragement and support, which enabled them to
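The vector v in the fragment above is the PageRank vector, the principal eigenvector of the web's link matrix. A standard power-iteration sketch on a toy graph (the textbook algorithm, not the paper's code):

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration for PageRank on an adjacency dict {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from the uniform vector
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}  # teleportation term
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling node: spread its rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# A 3-cycle: by symmetry every page ends up with rank 1/3.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```

The iteration converges because the damped link matrix is stochastic and irreducible, so the rank vector approaches its unique stationary distribution.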
20 CFR 402.180 - Procedure on assessing and collecting fees for providing records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... want us to continue to process your request. Also, before we start work on your request under § 402.140... start searching for the records you want. If so, we will let you know promptly upon receiving your... numerous small bills to frequent requesters, or to businesses or agents representing requesters. For...
Let's Bring Back the Magic of Song for Teaching Reading
ERIC Educational Resources Information Center
Iwasaki, Becky; Rasinski, Timothy; Yildirim, Kasim; Zimmerman, Belinda S.
2013-01-01
Based on a first grade teacher's search for approaches to promote successful reading acquisition in her first grade classroom, the authors present a curricular engagement in which the teacher explored using music, specifically singing songs, as a fun and motivating way to accelerate reading progress. The premise is that singing (while at the…
Research on Information Sharing Method for Future C2 in Network Centric Environment
2011-06-01
subscription (or search) request. Then, some of the information service nodes for future C2 deal with these users' requests, locate, federated search the... federated search server is responsible for resolving the search requests sent out by the users, and executing the federated search. The information... federated search server, information filtering model, or information subscription matching algorithm (such as users subscribing to the target information at two
ERIC Educational Resources Information Center
Vine, Rita
2001-01-01
Explains how to train users in effective Web searching. Discusses challenges of teaching Web information retrieval; a framework for information searching; choosing the right search tools for users; the seven-step lesson planning process; tips for delivering group Internet training; and things that help people work faster and smarter on the Web.…
Data Recommender: An Alternative Way to Discover Open Scientific Datasets
NASA Astrophysics Data System (ADS)
Klump, J. F.; Devaraju, A.; Williams, G.; Hogan, D.; Davy, R.; Page, J.; Singh, D.; Peterson, N.
2017-12-01
Over the past few years, institutions and government agencies have adopted policies to openly release their data, which has resulted in huge amounts of open data becoming available on the web. When trying to discover these data, users face two challenges: an overload of choice and the limitations of the existing data search tools. On the one hand, there are too many datasets to choose from, and therefore, users need to spend considerable effort to find the datasets most relevant to their research. On the other hand, data portals commonly offer keyword and faceted search, which depend fully on the user queries to search and rank relevant datasets. Consequently, keyword and faceted search may return loosely related or irrelevant results, even though those results contain the query terms. They may also return highly specific results whose ranking depends more on how well the metadata was authored, and they do not account well for variance in metadata due to differences in author styles and preferences. The top-ranked results may also come from the same data collection, so users are unlikely to discover new and interesting datasets. These search modes mainly suit users who can express their information needs in terms of the structure and terminology of the data portals, but may pose a challenge otherwise. The above challenges reflect the need for a solution that delivers the most relevant (i.e., similar and serendipitous) datasets to users, beyond the existing search functionalities of the portals. A recommender system is an information filtering system that presents users with relevant and interesting content based on users' context and preferences. Delivering data recommendations to users can make data discovery easier and, as a result, may enhance user engagement with the portal. We developed a hybrid data recommendation approach for the CSIRO Data Access Portal.
The approach leverages existing recommendation techniques (e.g., content-based filtering and item co-occurrence) to produce similar and serendipitous data recommendations. It measures the relevance between datasets based on their properties, and search and download patterns. We evaluated the recommendation approach in a user study, and the obtained user judgments revealed the ability of the approach to accurately quantify the relevance of the datasets.
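The item co-occurrence technique mentioned above can be sketched as follows, using hypothetical session data rather than the CSIRO portal's actual logs: datasets that are frequently accessed in the same session are recommended for one another.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_scores(sessions):
    """Count how often each pair of datasets appears in the same session."""
    pair_counts = Counter()
    for items in sessions:
        for a, b in combinations(sorted(set(items)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def recommend(dataset, sessions, k=3):
    """Rank other datasets by co-occurrence with the given one."""
    scores = Counter()
    for (a, b), c in cooccurrence_scores(sessions).items():
        if a == dataset:
            scores[b] += c
        elif b == dataset:
            scores[a] += c
    return [d for d, _ in scores.most_common(k)]

# Toy access sessions over three hypothetical dataset ids.
sessions = [["soil", "rain"], ["soil", "rain", "temp"], ["soil", "temp"]]
recs = recommend("soil", sessions)
```

A hybrid system would blend these co-occurrence scores with content-based similarity over dataset metadata, which is how serendipitous recommendations beyond exact keyword matches arise.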
Quantifying the Search Behaviour of Different Demographics Using Google Correlate
Letchford, Adrian; Preis, Tobias; Moat, Helen Susannah
2016-01-01
Vast records of our everyday interests and concerns are being generated by our frequent interactions with the Internet. Here, we investigate how the searches of Google users vary across U.S. states with different birth rates and infant mortality rates. We find that users in states with higher birth rates search for more information about pregnancy, while those in states with lower birth rates search for more information about cats. Similarly, we find that users in states with higher infant mortality rates search for more information about credit, loans and diseases. Our results provide evidence that Internet search data could offer new insight into the concerns of different demographics. PMID:26910464
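The state-level relationships reported above rest on simple correlation between search volumes and demographic indicators. A sketch with Pearson's r on toy per-state numbers (illustrative values, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-state data: higher birth rate, more "pregnancy" search volume.
birth_rate = [10.1, 12.3, 14.0, 11.2, 13.5]
pregnancy_searches = [40, 55, 70, 48, 66]
r = pearson_r(birth_rate, pregnancy_searches)
```

A positive r of this kind is the evidence behind statements such as "states with higher birth rates search for more information about pregnancy"; the study's contribution is computing it at scale across many terms via Google Correlate.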
Supervised learning of tools for content-based search of image databases
NASA Astrophysics Data System (ADS)
Delanoy, Richard L.
1996-03-01
A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael
2016-07-19
Finding highly relevant articles from biomedical databases is challenging not only because it is often difficult to accurately express a user's underlying intention through keywords, but also because a keyword-based query normally returns a long list of hits with many citations being unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employs association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts are utilized to form a k-profile used for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts, and it was designed to support queries of any level of complexity. A prototype of the BiomedSearch software was built and preliminarily evaluated using the Genomics data from TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experiment results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
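The k-profile construction can be sketched as below. The interest measure here (feedback frequency relative to corpus frequency) is a simplified stand-in for the paper's weighted interest measure, and the concept lists are hypothetical:

```python
from collections import Counter

def k_profile(feedback_docs, corpus_docs, k=2):
    """Rank concepts by how over-represented they are in feedback vs. the corpus.

    Each document is a list of biomedical concept strings (post UMLS mapping).
    Returns the top-k concepts to expand the next-round query with.
    """
    fb = Counter(c for doc in feedback_docs for c in set(doc))
    bg = Counter(c for doc in corpus_docs for c in set(doc))
    n_fb, n_bg = len(feedback_docs), len(corpus_docs)
    # Interest = feedback document frequency / smoothed background frequency.
    scores = {c: (fb[c] / n_fb) / ((bg[c] + 1) / (n_bg + 1)) for c in fb}
    return [c for c, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

corpus = [["gene", "cancer"], ["gene", "protein"],
          ["cell", "protein"], ["gene", "cell"]]
feedback = [["cancer", "mutation"], ["cancer", "gene"]]
profile = k_profile(feedback, corpus)
```

Concepts common across the whole corpus (like "gene") score low, while concepts concentrated in the user's feedback rise to the top of the profile.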
Manually Classifying User Search Queries on an Academic Library Web Site
ERIC Educational Resources Information Center
Chapman, Suzanne; Desai, Shevon; Hagedorn, Kat; Varnum, Ken; Mishra, Sonali; Piacentine, Julie
2013-01-01
The University of Michigan Library wanted to learn more about the kinds of searches its users were conducting through the "one search" search box on the Library Web site. Library staff conducted two investigations. A preliminary investigation in 2011 involved the manual review of the 100 most frequently occurring queries conducted…
Ethnography of Novices' First Use of Web Search Engines: Affective Control in Cognitive Processing.
ERIC Educational Resources Information Center
Nahl, Diane
1998-01-01
This study of 18 novice Internet users employed a structured self-report method to investigate affective and cognitive operations in the following phases of World Wide Web searching: presearch formulation, search statement formulation, search strategy, and evaluation of results. Users also rated their self-confidence as searchers and satisfaction…
Monitoring User Search Success through Transaction Log Analysis: The WolfPAC Example.
ERIC Educational Resources Information Center
Zink, Steven D.
1991-01-01
Describes the use of transaction log analysis of the online catalog at the University of Nevada, Reno, libraries to help evaluate reasons for unsuccessful user searches. Author, title, and subject searches are examined; problems with Library of Congress subject headings are discussed; and title keyword searching is suggested. (11 references) (LRW)
BioSearch: a semantic search engine for Bio2RDF
Qiu, Honglei; Huang, Jiacheng
2017-01-01
Abstract Biomedical data are growing at an incredible pace and require substantial expertise to organize in a manner that makes them easily findable, accessible, interoperable and reusable. Massive effort has been devoted to using Semantic Web standards and technologies to create a network of Linked Data for the life sciences, among others. However, while these data are accessible through programmatic means, effective user interfaces to SPARQL endpoints for non-experts are few and far between. Contributing to user frustration is that data are not necessarily described using common vocabularies, thereby making it difficult to aggregate results, especially when they are distributed across multiple SPARQL endpoints. We propose BioSearch, a semantic search engine that uses ontologies to enhance federated query construction and organize search results. BioSearch also features a simplified query interface that allows users to optionally filter their keywords according to classes, properties and datasets. User evaluation demonstrated that BioSearch is more effective and usable than two state-of-the-art search and browsing solutions. Database URL: http://ws.nju.edu.cn/biosearch/ PMID:29220451
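The vocabulary-aggregation problem BioSearch tackles can be sketched as a post-query merge step: endpoint-specific predicates are mapped to shared ontology terms before results are combined. The URIs and mapping below are hypothetical, not Bio2RDF's actual vocabulary:

```python
def normalize_results(results_by_endpoint, ontology_map):
    """Map endpoint-specific predicates to shared ontology terms, then merge.

    results_by_endpoint: {endpoint_name: [(subject, predicate, object), ...]}
    ontology_map: {endpoint predicate URI: shared ontology term}
    Returns {(subject, shared_predicate): set of objects}.
    """
    merged = {}
    for endpoint, triples in results_by_endpoint.items():
        for subj, pred, obj in triples:
            shared = ontology_map.get(pred, pred)  # fall back to original
            merged.setdefault((subj, shared), set()).add(obj)
    return merged

# Two endpoints describe the same entity with different label predicates.
ontology_map = {
    "http://a.example/label": "rdfs:label",
    "http://b.example/name": "rdfs:label",
}
results = {
    "endpointA": [("drug:42", "http://a.example/label", "aspirin")],
    "endpointB": [("drug:42", "http://b.example/name", "aspirin"),
                  ("drug:42", "http://b.example/name", "acetylsalicylic acid")],
}
merged = normalize_results(results, ontology_map)
```

Without the mapping, the two endpoints' answers would stay in separate, seemingly unrelated result groups; with it, both labels aggregate under one shared property of one subject.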
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters' terminal for communicating with the ACTS in various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0, outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0, is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included are an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and the computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
End-Users, Front Ends and Librarians.
ERIC Educational Resources Information Center
Bourne, Donna E.
1989-01-01
The increase in end-user searching, the advantages and limitations of front ends, and the role of the librarian in end-user searching are discussed. It is argued that librarians need to recognize that front ends can be of benefit to themselves and patrons, and to assume the role of advisors and educators for end-users. (37 references) (CLB)
ISE: An Integrated Search Environment. The manual
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan
1992-01-01
Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines to support solving searches. Mainly, these routines are core routines for solving a search problem and they handle the control of searches and maintain the statistics related to searches. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. The ISE is designed to be user-friendly and information rich. In this manual, the organization of ISE is described and several experiments carried out on ISE are also described.
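The problem-independent/problem-dependent split that ISE describes can be sketched as a core best-first search routine that receives the problem-specific pieces as callbacks. This is a generic illustration, not ISE's actual API:

```python
import heapq

def best_first_search(start, expand, is_goal, cost):
    """Problem-independent core: callers supply expand/is_goal/cost callbacks."""
    frontier = [(cost(start), 0, start)]  # (priority, tiebreak, state)
    seen = {start}
    tie = 1
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost(nxt), tie, nxt))
                tie += 1
    return None  # search space exhausted

# Problem-dependent part: reach a number >= 10 from 1 using +3 / *2 moves,
# guided by distance to 10.
goal = best_first_search(
    1,
    expand=lambda n: [n + 3, n * 2],
    is_goal=lambda n: n >= 10,
    cost=lambda n: abs(10 - n),
)
```

Because the core never inspects state internals, a new application only supplies the three callbacks, and a new search strategy only replaces the control loop, which is the reuse argument the manual makes.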
Searching for cancer information on the internet: analyzing natural language search queries.
Bader, Judith L; Theofanos, Mary Frances
2003-12-11
Searching for health information is one of the most common tasks performed by Internet users. Many users begin searching on popular search engines rather than on prominent health information sites. We know that many visitors to our (National Cancer Institute) Web site, cancer.gov, arrive via links in search engine results. To learn more about the specific needs of our general-public users, we wanted to understand what lay users really wanted to know about cancer, how they phrased their questions, and how much detail they used. The National Cancer Institute partnered with AskJeeves, Inc to develop a methodology to capture, sample, and analyze 3 months of cancer-related queries on the Ask.com Web site, a prominent United States consumer search engine, which receives over 35 million queries per week. Using a benchmark set of 500 terms and word roots supplied by the National Cancer Institute, AskJeeves identified a test sample of cancer queries for 1 week in August 2001. Of these 500 terms, only 37 appeared ≥ 5 times/day over the trial test week, in 17208 queries. Using these 37 terms, 204165 instances of cancer queries were found in the Ask.com query logs for the actual test period of June-August 2001. Of these, 7500 individual user questions were randomly selected for detailed analysis and assigned to appropriate categories. The exact language of sample queries is presented. Considering multiples of the same questions, the sample of 7500 individual user queries represented 76077 queries (37% of the total 3-month pool). Overall, 78.37% of sampled cancer queries asked about 14 specific cancer types. Within each cancer type, queries were sorted into appropriate subcategories including at least the following: General Information, Symptoms, Diagnosis and Testing, Treatment, Statistics, Definition, and Cause/Risk/Link.
The most-common specific cancer types mentioned in queries were Digestive/Gastrointestinal/Bowel (15.0%), Breast (11.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience.
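The benchmark-term screening step described above can be sketched as follows, with a hypothetical term list standing in for NCI's 500-term benchmark:

```python
import re

def screen_queries(queries, terms):
    """Keep queries containing any benchmark term or word root; tally term hits."""
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    matched = []
    hits = {}
    for q in queries:
        found = pattern.findall(q)
        if found:
            matched.append(q)
            for t in found:
                hits[t.lower()] = hits.get(t.lower(), 0) + 1
    return matched, hits

queries = [
    "what are symptoms of breast cancer",
    "melanoma treatment options",
    "cheap flights to boston",
]
matched, hits = screen_queries(queries, ["cancer", "melanoma", "leukemia"])
```

Per-day hit counts from such a screen are what let the study shrink 500 candidate terms to the 37 that actually occurred at least five times daily in the logs.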
Searching for Cancer Information on the Internet: Analyzing Natural Language Search Queries
Theofanos, Mary Frances
2003-01-01
Background Searching for health information is one of the most common tasks performed by Internet users. Many users begin searching on popular search engines rather than on prominent health information sites. We know that many visitors to our (National Cancer Institute) Web site, cancer.gov, arrive via links in search engine results. Objective To learn more about the specific needs of our general-public users, we wanted to understand what lay users really wanted to know about cancer, how they phrased their questions, and how much detail they used. Methods The National Cancer Institute partnered with AskJeeves, Inc to develop a methodology to capture, sample, and analyze 3 months of cancer-related queries on the Ask.com Web site, a prominent United States consumer search engine, which receives over 35 million queries per week. Using a benchmark set of 500 terms and word roots supplied by the National Cancer Institute, AskJeeves identified a test sample of cancer queries for 1 week in August 2001. Of these 500 terms, only 37 appeared ≥ 5 times/day over the trial test week, in 17208 queries. Using these 37 terms, 204165 instances of cancer queries were found in the Ask.com query logs for the actual test period of June-August 2001. Of these, 7500 individual user questions were randomly selected for detailed analysis and assigned to appropriate categories. The exact language of sample queries is presented. Results Considering multiples of the same questions, the sample of 7500 individual user queries represented 76077 queries (37% of the total 3-month pool). Overall, 78.37% of sampled cancer queries asked about 14 specific cancer types. Within each cancer type, queries were sorted into appropriate subcategories including at least the following: General Information, Symptoms, Diagnosis and Testing, Treatment, Statistics, Definition, and Cause/Risk/Link.
The most-common specific cancer types mentioned in queries were Digestive/Gastrointestinal/Bowel (15.0%), Breast (11.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Conclusions Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience. PMID:14713659
ERIC Educational Resources Information Center
Du, Jia Tina; Evans, Nina
2011-01-01
This project investigated how academic users search for information on their real-life research tasks. This article presents the findings of the first of two studies. The study data were collected in the Queensland University of Technology (QUT) in Brisbane, Australia. Eleven PhD students' searching behaviors on personal research topics were…
Single event upset susceptibility testing of the Xilinx Virtex II FPGA
NASA Technical Reports Server (NTRS)
Yui, C.; Swift, G.; Carmichael, C.
2002-01-01
Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM and user flip-flop cells to determine their single event upset susceptibility using LETs of 1.2 to 60 MeV-cm^2/mg. A software program specifically designed to count errors in the FPGA was used to reveal L1/e values and single-event functional interrupt failures.
Modeling web-based information seeking by users who are blind.
Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti
2011-01-01
This article describes website information seeking strategies used by users who are blind and compares those with sighted users. It outlines how assistive technologies and website design can aid users who are blind while information seeking. People who are blind and sighted are tested using an assessment tool and performing several tasks on websites. The times and keystrokes are recorded for all tasks as well as commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A proposed model based on the present study for information seeking is described. Keywords are important in the strategies used by both groups of participants and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.
JSC Search System Usability Case Study
NASA Technical Reports Server (NTRS)
Meza, David; Berndt, Sarah
2014-01-01
The advanced nature of "search" has facilitated the movement from keyword match to the delivery of every conceivable information topic from career, commerce, entertainment, learning... the list is infinite. At NASA Johnson Space Center (JSC) the search interface is an important means of knowledge transfer. By indexing multiple sources between directorates and organizations, the system's potential is culture changing in that, through search, knowledge of the unique accomplishments in engineering and science can be seamlessly passed between generations. This paper reports the findings of an initial survey, the first of a four-part study to help determine user sentiment on the intranet, or local (JSC) enterprise search environment, as well as the larger NASA enterprise. The survey is a means through which end users provide direction on the development and transfer of knowledge by way of the search experience. The ideal is to identify what is working and what needs to be improved from the users' vantage point by documenting: (1) where users are satisfied/dissatisfied, (2) the perceived value of interface components, and (3) gaps which cause any disappointment in the search experience. The near-term goal is to inform JSC search in order to improve users' ability to utilize existing services and infrastructure to perform tasks with a shortened life cycle. Continuing steps include an agency-based focus with modified questions to accomplish a similar purpose.
Best kept secrets ... Source Data Systems, Inc. (SDS).
Andrew, W F
1991-03-01
The SDS/MEDNET system is a cost-effective option for small- to medium-size hospitals (up to 400 beds). The parameter-driven system lets users control operations with only occasional SDS assistance. A full application set, available for modular selection to reduce upfront costs while facilitating steady growth and protecting client investment, is adaptable to multi-facility environments. The industry-standard, Intel-based multi-user processors, network communications and protocols assure high-efficiency, low-cost solutions independent of any one hardware vendor. Sustained growth in both client base and product offerings points to a high level of responsiveness and healthcare industry commitment. Corporate emphasis on user involvement and open systems integration assures clients of leading-edge capabilities. SDS/MEDNET will be a strong contender in selected marketing environments.
Let's Get Real: Deeper Learning and the Power of the Workplace. Deeper Learning Research Series
ERIC Educational Resources Information Center
Hoffman, Nancy
2015-01-01
For young people in the United States, whatever their backgrounds, one of the essential purposes of schooling should be to help them develop the knowledge, skills, and competence needed to search for and obtain work that they find at least reasonably satisfying. Our present educational system does precious little to introduce young people to the…
ERIC Educational Resources Information Center
Scafe, Suzanne
2010-01-01
This paper focuses on three autobiographical narratives: Jacqueline Walker's "Pilgrim State", "Sugar and Slate" by Charlotte Williams and "In Search of Mr. McKenzie" by Isha McKenzie-Mavinga and Thelma Perkins. It situates these texts by contemporary black British women in relation to a tradition of black…
ERIC Educational Resources Information Center
Gough, Annette
2017-01-01
This article traces the shifts in environmental education discourses from the 1972 UN Conference on the Human Environment, to the 2012 UN Rio+20 Conference on Sustainable Development, and beyond through a biopolitical lens. Each of the earlier shifts is reflected in environmental, sustainability and science education policies and curricula--but…
Understanding User Preferences and Awareness: Privacy Mechanisms in Location-Based Services
NASA Astrophysics Data System (ADS)
Burghardt, Thorben; Buchmann, Erik; Müller, Jens; Böhm, Klemens
Location-based services (LBS) let people retrieve and share information related to their current position. Examples are Google Latitude or Panoramio. Since LBS share user-related content, location information, etc., they put user privacy at risk. The literature has proposed various privacy mechanisms for LBS. However, it is unclear which mechanisms humans really find useful, and how they make use of them. We present a user study that addresses these issues. To obtain realistic results, we have implemented a geotagging application on the web and on GPS cellphones, and our study participants used this application in their daily lives. We test five privacy mechanisms that differ in the awareness, mental effort and degree of informedness required from the users. Among other findings, we have observed that in situations where a single simple mechanism does not meet all privacy needs, people want to use simple and sophisticated mechanisms in combination. Further, individuals are concerned about the privacy of others, even when they do not value privacy for themselves.
Controlled Vocabularies Boost International Participation and Normalization of Searches
NASA Technical Reports Server (NTRS)
Olsen, Lola M.
2006-01-01
The Global Change Master Directory's (GCMD) science staff set out to document Earth science data and provide a mechanism for its discovery in fulfillment of a commitment to NASA's Earth Science program and to the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). At the time, whether to offer a controlled vocabulary search or a free-text search was resolved with a decision to support both. The feedback from the user community indicated that being asked to independently determine the appropriate 'English' words through a free-text search would be very difficult. The preference was to be 'prompted' for relevant keywords through the use of a hierarchy of well-designed science keywords. The controlled keywords serve to 'normalize' the search through knowledgeable input by metadata providers. Earth science keyword taxonomies were developed, and rules for additions, deletions, and modifications were created. Secondary sets of controlled vocabularies for related descriptors such as projects, data centers, instruments, platforms, related data set link types, and locations, along with free-text searches, assist users in further refining their search results. Through this robust 'search and refine' capability in the GCMD, users are directed to the data and services they seek. The next step in guiding users more directly to the resources they desire is to build a 'reasoning' capability for search through the use of ontologies. Incorporating twelve sets of Earth science keyword taxonomies has boosted the GCMD's ability to help users define and more directly retrieve data of choice.
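The hierarchy-driven 'prompting' described above can be sketched in a few lines; the taxonomy, dataset tags, and function names below are invented for illustration, not drawn from the GCMD itself.

```python
# Hypothetical sketch of a controlled-vocabulary search: datasets are tagged
# with keywords from a fixed hierarchy, so a query on a broad term matches
# every record tagged at or below that node.

TAXONOMY = {
    "EARTH SCIENCE": {
        "ATMOSPHERE": {"PRECIPITATION": {}, "AEROSOLS": {}},
        "OCEANS": {"SEA SURFACE TEMPERATURE": {}},
    }
}

def descendants(tree, term):
    """Return the term and all terms below it in the hierarchy."""
    for node, children in tree.items():
        if node == term:
            found = {node}
            stack = [children]
            while stack:
                sub = stack.pop()
                for k, v in sub.items():
                    found.add(k)
                    stack.append(v)
            return found
        hit = descendants(children, term)
        if hit:
            return hit
    return set()

def search(datasets, term):
    """Match datasets tagged anywhere under the controlled term."""
    terms = descendants(TAXONOMY, term)
    return [d["id"] for d in datasets if d["keyword"] in terms]

datasets = [
    {"id": "TRMM", "keyword": "PRECIPITATION"},
    {"id": "MODIS-SST", "keyword": "SEA SURFACE TEMPERATURE"},
]
print(search(datasets, "ATMOSPHERE"))  # -> ['TRMM']
```

A free-text search for "rain" would miss the PRECIPITATION-tagged record entirely; the controlled hierarchy is what lets a broad prompt normalize to it.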
Predicting user click behaviour in search engine advertisements
NASA Astrophysics Data System (ADS)
Daryaie Zanjani, Mohammad; Khadivi, Shahram
2015-10-01
According to the specific requirements and interests of users, search engines select and display advertisements that match user needs and have higher probability of attracting users' attention based on their previous search history. New objects such as a user, advertisement or query cause a deterioration of precision in targeted advertising due to their lack of history. This article addresses this challenge. In the case of new objects, we first extract observed objects similar to the new object and then use their history as the history of the new object. Similarity between objects is measured based on correlation, which is a relation between user and advertisement when the advertisement is displayed to the user. This method is used for all objects, so it has helped us to accurately select relevant advertisements for users' queries. In our proposed model, we assume that similar users behave in a similar manner. We find that users with few queries are similar to new users. We will show that correlation between users and advertisements' keywords is high. Thus, users who pay attention to advertisements' keywords click similar advertisements. In addition, users who pay attention to specific brand names might have similar behaviours too.
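The cold-start idea above, borrowing the history of the most similar observed object for a new object, might look roughly like this. The click counts and names are invented, and cosine similarity is used as a simple stand-in for the paper's correlation-based measure.

```python
# Illustrative sketch: a new user has no click history, so we borrow the
# history of the most similar observed user (similarity computed over
# keyword-click profiles).
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# clicks[user] = {ad_keyword: click_count}
clicks = {
    "u1": {"shoes": 3, "running": 2},
    "u2": {"shoes": 1, "laptops": 4},
}

def borrowed_history(new_user_profile, clicks):
    """Return the history of the most similar observed user."""
    best = max(clicks, key=lambda u: cosine(new_user_profile, clicks[u]))
    return clicks[best]

print(borrowed_history({"shoes": 1, "running": 1}, clicks))
# -> {'shoes': 3, 'running': 2}
```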
Directing the public to evidence-based online content
Cooper, Crystale Purvis; Gelb, Cynthia A; Vaughn, Alexandra N; Smuland, Jenny; Hughes, Alexandra G; Hawkins, Nikki A
2015-01-01
To direct online users searching for gynecologic cancer information to accurate content, the Centers for Disease Control and Prevention’s (CDC) ‘Inside Knowledge: Get the Facts About Gynecologic Cancer’ campaign sponsored search engine advertisements in English and Spanish. From June 2012 to August 2013, advertisements appeared when US Google users entered search terms related to gynecologic cancer. Users who clicked on the advertisements were directed to relevant content on the CDC website. Compared with the 3 months before the initiative (March–May 2012), visits to the CDC web pages linked to the advertisements were 26 times higher after the initiative began (June–August 2012) (p<0.01), and 65 times higher when the search engine advertisements were supplemented with promotion on television and additional websites (September 2012–August 2013) (p<0.01). Search engine advertisements can direct users to evidence-based content at a highly teachable moment—when they are seeking relevant information. PMID:25053580
Sexual information seeking on web search engines.
Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles
2004-02-01
Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat rooms discussions, accessing Websites or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually-related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually-related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.
PubMed Interact: an Interactive Search Application for MEDLINE/PubMed
Muin, Michael; Fontelo, Paul; Ackerman, Michael
2006-01-01
Online search and retrieval systems are important resources for medical literature research. Progressive Web 2.0 technologies provide opportunities to improve search strategies and user experience. Using PHP, Document Object Model (DOM) manipulation and Asynchronous JavaScript and XML (Ajax), PubMed Interact allows greater functionality so users can refine search parameters with ease and interact with the search results to retrieve and display relevant information and related articles. PMID:17238658
eTACTS: a method for dynamically filtering clinical trial search results.
Miotto, Riccardo; Jiang, Silis; Weng, Chunhua
2013-12-01
Information overload is a significant problem facing online clinical trial searchers. We present eTACTS, a novel interactive retrieval framework using common eligibility tags to dynamically filter clinical trial search results. eTACTS mines frequent eligibility tags from free-text clinical trial eligibility criteria and uses these tags for trial indexing. After an initial search, eTACTS presents to the user a tag cloud representing the current results. When the user selects a tag, eTACTS retains only those trials containing that tag in their eligibility criteria and generates a new cloud based on tag frequency and co-occurrences in the remaining trials. The user can then select a new tag or unselect a previous tag. The process iterates until a manageable number of trials is returned. We evaluated eTACTS in terms of filtering efficiency, diversity of the search results, and user eligibility to the filtered trials using both qualitative and quantitative methods. eTACTS (1) rapidly reduced search results from over a thousand trials to ten; (2) highlighted trials that are generally not top-ranked by conventional search engines; and (3) retrieved a greater number of suitable trials than existing search engines. eTACTS enables intuitive clinical trial searches by indexing eligibility criteria with effective tags. User evaluation was limited to one case study and a small group of evaluators due to the long duration of the experiment. Although a larger-scale evaluation could be conducted, this feasibility study demonstrated significant advantages of eTACTS over existing clinical trial search engines. A dynamic eligibility tag cloud can potentially enhance state-of-the-art clinical trial search engines by allowing intuitive and efficient filtering of the search result space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
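A minimal sketch of the tag-cloud filtering loop described above; the trial IDs and tags are invented, and the real system mines its tags from free-text eligibility criteria rather than taking them as given.

```python
from collections import Counter

# Each trial carries a set of eligibility tags; selecting a tag keeps only
# trials containing it and rebuilds the tag cloud from what remains.
trials = {
    "NCT1": {"diabetes", "adult", "insulin"},
    "NCT2": {"diabetes", "adult"},
    "NCT3": {"hypertension", "adult"},
}

def tag_cloud(active_trials):
    """Tag frequencies over the current result set."""
    return Counter(t for tags in active_trials.values() for t in tags)

def select_tag(active_trials, tag):
    """Retain only trials whose eligibility tags include the selection."""
    return {tid: tags for tid, tags in active_trials.items() if tag in tags}

results = select_tag(trials, "diabetes")   # user picks 'diabetes' from cloud
cloud = tag_cloud(results)                 # cloud for the next iteration
print(sorted(results), cloud["adult"])     # ['NCT1', 'NCT2'] 2
```

The user iterates, selecting or unselecting tags, until the result set is small enough to review by hand.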
Pross, Christoph; Averdunk, Lars-Henrik; Stjepanovic, Josip; Busse, Reinhard; Geissler, Alexander
2017-04-21
Quality of care public reporting provides structural, process and outcome information to facilitate hospital choice and strengthen quality competition. Yet, evidence indicates that patients rarely use this information in their decision-making, due to limited awareness of the data and complex and conflicting information. While there is enthusiasm among policy makers for public reporting, clinicians and researchers doubt its overall impact. Almost no study has analyzed how users behave on public reporting portals, which information they seek out and when they abort their search. This study employs web-usage mining techniques on server log data of 17 million user actions from Germany's premier provider transparency portal Weisse-Liste.de (WL.de) between 2012 and 2015. Postal code and ICD search requests facilitate identification of geographical and treatment area usage patterns. User clustering helps to identify user types based on parameters like session length, referrer and page topic visited. First-order Markov chains illustrate common click paths and premature exits. In 2015, the WL.de Hospital Search portal had 2,750 daily users, with 25% mobile traffic, a bounce rate of 38% and 48% of users examining hospital quality information. From 2013 to 2015, user traffic grew at 38% annually. On average users spent 7 min on the portal, with 7.4 clicks and 54 s between clicks. Users request information for many oncologic and orthopedic conditions, for which no process or outcome quality indicators are available. Ten distinct user types, with particular usage patterns and interests, are identified. In particular, the different types of professional and non-professional users need to be addressed differently to avoid high premature exit rates at several key steps in the information search and view process. Of all users, 37% enter hospital information correctly upon entry, while 47% require support in their hospital search.
Several onsite and offsite improvement options are identified. Public reporting needs to be directed at the interests of its users, with more outcome quality information for oncology and orthopedics. Customized reporting can cater to the different needs and skill levels of professional and non-professional users. Search engine optimization and hospital quality advocacy can increase website traffic.
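The first-order Markov-chain analysis of click paths mentioned above can be sketched as follows; the page names and sessions are invented stand-ins for the WL.de log data.

```python
from collections import Counter, defaultdict

# Estimate page-to-page transition probabilities from session click paths.
# High-probability transitions reveal common paths; transitions into 'exit'
# locate where users abandon their search.
sessions = [
    ["home", "search", "hospital", "quality"],
    ["home", "search", "exit"],
    ["home", "hospital", "exit"],
]

def transition_probs(sessions):
    """First-order Markov chain: P(next page | current page)."""
    counts = defaultdict(Counter)
    for path in sessions:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = transition_probs(sessions)
print(probs["home"])   # transition probabilities out of the 'home' page
```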
CINT - Center for Integrated Nanotechnologies
A Query Analysis of Consumer Health Information Retrieval
Hong, Yi; de la Cruz, Norberto; Barnas, Gary; Early, Eileen; Gillis, Rick
2002-01-01
The log files of MCW HealthLink web site were analyzed to study users' needs for consumer health information and get a better understanding of the health topics users are searching for, the paths users usually take to find consumer health information and the way to improve search effectiveness.
Linking the EarthScope Data Virtual Catalog to the GEON Portal
NASA Astrophysics Data System (ADS)
Lin, K.; Memon, A.; Baru, C.
2008-12-01
The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.
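The virtual-catalog pattern above, one search service fanning a request out to several archive services and merging the hits, might be sketched like this. The stub functions stand in for the real IRIS, UNAVCO and GFZ web services, whose actual interfaces are not described in the abstract.

```python
from concurrent.futures import ThreadPoolExecutor

# Stubbed archive services; a real implementation would issue HTTP requests
# with the spatial/temporal search conditions.
def iris(params):
    return [{"archive": "IRIS", "station": "ANMO"}]

def unavco(params):
    return [{"archive": "UNAVCO", "station": "P123"}]

def gfz(params):
    return []  # no matches at this archive

def virtual_catalog(params):
    """Query every archive concurrently and merge the hits in order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(svc, params) for svc in (iris, unavco, gfz)]
        return [hit for f in futures for hit in f.result()]

hits = virtual_catalog({"bbox": (-120, 30, -110, 40)})
print([h["archive"] for h in hits])  # ['IRIS', 'UNAVCO']
```

Because the catalog holds no data itself, adding a new archive is just adding another service function to the fan-out list.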
Evidence-based Medicine Search: a customizable federated search engine.
Bracke, Paul J; Howse, David K; Keim, Samuel M
2008-04-01
This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.
Online Patent Searching: Guided by an Expert System.
ERIC Educational Resources Information Center
Ardis, Susan B.
1990-01-01
Describes the development of an expert system for online patent searching that uses menu driven software to interpret the user's knowledge level and the general nature of the search problem. The discussion covers the rationale for developing such a system, current system functions, cost effectiveness, user reactions, and plans for future…
Encounters with the OPAC: On-Line Searching in Public Libraries.
ERIC Educational Resources Information Center
Slone, Deborah J.
2000-01-01
Reports on a qualitative study that explored strategies and behaviors of public library users during interaction with an online public access catalog, and users' confidence in finding needed information online. Discusses results of questionnaires, interviews, and observations that examined unknown-item searches, area searches, and known-item…
Measuring Online Search Expertise
ERIC Educational Resources Information Center
Bailey, Earl
2017-01-01
Search expertise has long been studied and used extensively in information seeking behavior research, both as a fundamental concept and as a method of comparing groups of users. Unfortunately, while search expertise has been studied for some time, the conceptualization of it has lagged behind its use in categorizing users. This has led to users…
Using Digital Libraries Non-Visually: Understanding the Help-Seeking Situations of Blind Users
ERIC Educational Resources Information Center
Xie, Iris; Babu, Rakesh; Joo, Soohyung; Fuller, Paige
2015-01-01
Introduction: This study explores blind users' unique help-seeking situations in interacting with digital libraries. In particular, help-seeking situations were investigated at both the physical and cognitive levels. Method: Fifteen blind participants performed three search tasks, including known- item search, specific information search, and…
A new programming metaphor for image processing procedures
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and the user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It also shows that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
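The factory metaphor can be approximated in miniature with generator stages connected like pipes. This single-process Python sketch only illustrates the dataflow idea, not the concurrent multitasking implementation the paper describes.

```python
# Each stage consumes images as they arrive from upstream and passes
# results downstream, like programs connected by pipes in a factory.
def source(images):
    for img in images:
        yield img

def threshold(stream, cutoff):
    """Binarize each image: 1 where the pixel exceeds the cutoff."""
    for img in stream:
        yield [[1 if px > cutoff else 0 for px in row] for row in img]

def count_pixels(stream):
    """Count the set pixels in each binarized image."""
    for img in stream:
        yield sum(sum(row) for row in img)

images = [[[10, 200], [30, 150]], [[5, 5], [250, 5]]]
pipeline = count_pixels(threshold(source(images), 100))
print(list(pipeline))  # [2, 1]
```

As with the LEGO-block intuition in the abstract, stages snap together in any order that type-checks, and each image flows through as soon as it is produced rather than after the whole batch completes.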
Optimizing Earth Data Search Ranking using Deep Learning and Real-time User Behaviour
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.; Greguska, F. R., III
2017-12-01
Finding Earth science data has been a challenging problem given both the quantity of data available and the heterogeneity of the data across a wide variety of domains. Current search engines in most geospatial data portals tend to induce end users to focus on one single data characteristic dimension (e.g., term frequency-inverse document frequency (TF-IDF) score, popularity, release date, etc.). This approach largely fails to take account of users' multidimensional preferences for geospatial data, and hence may likely result in a less than optimal user experience in discovering the most applicable dataset out of a vast range of available datasets. With users interacting with search engines, sufficient information is already hidden in the log files. Compared with explicit feedback data, information that can be derived/extracted from log files is virtually free and substantially more timely. In this dissertation, I propose an online deep learning framework that can quickly update the learning function based on real-time user clickstream data. The contributions of this framework include 1) a log processor that can ingest, process and create training data from web logs in a real-time manner; 2) a query understanding module to better interpret users' search intent using web log processing results and metadata; 3) a feature extractor that identifies ranking features representing users' multidimensional interests of geospatial data; and 4) a deep learning based ranking algorithm that can be trained incrementally using user behavior data. The search ranking results will be evaluated using precision at K and normalized discounted cumulative gain (NDCG).
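A heavily simplified sketch of the incremental-learning idea above: a ranking function whose weights are nudged online by clickstream feedback. A single linear layer stands in for the proposed deep model, and the features (term match, popularity, recency) are invented examples.

```python
# Online update of a linear ranking function from click feedback:
# clicked results pull the weights up, skipped results push them down.
def score(w, x):
    """Relevance score: dot product of weights and feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sgd_update(w, x, clicked, lr=0.1):
    """One stochastic-gradient step toward the observed click signal."""
    target = 1.0 if clicked else 0.0
    err = target - score(w, x)
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]  # weights for (term match, popularity, recency)
clickstream = [([1.0, 0.2, 0.5], True), ([0.1, 0.9, 0.2], False)]
for features, clicked in clickstream:
    w = sgd_update(w, features, clicked)
print([round(wi, 3) for wi in w])
```

The real framework would batch such updates from the log processor and evaluate the resulting ranking with precision at K and NDCG, as the abstract notes.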
Search without Boundaries Using Simple APIs
Tong, Qi
2009-01-01
The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link resolver, and vendor websites), and each specializes in one thing. How could the library integrate the functionalities of one application with another and provide a single point of entry for users to search across? To improve the user experience, the library launched an effort to integrate the federated search engine into the library's intranet website. The result is a simple search box that leverages the federated search engine's built-in application programming interfaces (APIs). In this article, the author describes how this project demonstrated the power of APIs and their potential to be used by other enterprise search portals inside or outside of the library.
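The single-search-box idea, one entry point that fans a query out to heterogeneous sources and merges the results, can be sketched as below. The source functions are in-memory stand-ins, since the abstract does not document the federated engine's actual API.

```python
# Stand-ins for the heterogeneous back ends (catalog, federated engine,
# vendor sites); each would really be an API call.
def search_catalog(q):
    return [r for r in ["USGS water atlas", "mineral survey"] if q in r]

def search_vendor(q):
    return [r for r in ["water quality journal"] if q in r]

SOURCES = [search_catalog, search_vendor]

def unified_search(q):
    """The one search box: query every source, merge, dedupe, keep order."""
    seen, merged = set(), []
    for source in SOURCES:
        for hit in source(q):
            if hit not in seen:
                seen.add(hit)
                merged.append(hit)
    return merged

print(unified_search("water"))  # ['USGS water atlas', 'water quality journal']
```

The point of the API-based approach is that the intranet page only needs this thin wrapper; each specialized application keeps doing the one thing it is good at behind it.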
Heuristics for Relevancy Ranking of Earth Dataset Search Results
NASA Astrophysics Data System (ADS)
Lynnes, C.; Quinn, P.; Norton, J.
2016-12-01
As the variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
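A hedged sketch of how such heuristics might combine: a term-match score plus temporal-overlap and version-novelty boosts. The weights and field names are invented; the actual Common Metadata Repository algorithm is not specified in the abstract.

```python
# Combine three of the heuristics named above into one relevance score:
# measurement match, time-range overlap, and preference for later versions.
def overlap_fraction(query_range, data_range):
    """Fraction of the query time range the dataset covers."""
    start = max(query_range[0], data_range[0])
    end = min(query_range[1], data_range[1])
    span = query_range[1] - query_range[0]
    return max(0, end - start) / span if span else 0.0

def relevance(dataset, query):
    term = 1.0 if query["term"] in dataset["measurements"] else 0.0
    temporal = overlap_fraction(query["years"], dataset["years"])
    novelty = 0.1 * dataset["version"]  # prioritize later versions
    return term + temporal + novelty

query = {"term": "precipitation", "years": (2000, 2010)}
d1 = {"measurements": {"precipitation"}, "years": (1998, 2015), "version": 7}
d2 = {"measurements": {"precipitation"}, "years": (2005, 2008), "version": 6}
ranked = sorted([d1, d2], key=lambda d: relevance(d, query), reverse=True)
print(ranked[0]["version"])  # 7
```

Spatial-area overlap and per-community weighting would slot in as additional additive terms in the same way.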
1983-09-30
Ideally, all APSE tools … has finished all the modifications/entries that are desired, the user presses a special key (function key or enter key) which causes the modified
Extending Cross-Generational Knowledge Flow Research in Edge Organizations
2008-06-01
letting Protégé generate the basic user interface, and then gradually write widgets and plug-ins to customize its look-and-feel and behavior. … (2007a) focused on cross-generational knowledge flows in edge organizations. We found that cross-generational biases affect tacit knowledge transfer … the software engineering field, many matured methodologies already exist, such as Rational Unified Process (Hunt, 2003) or Extreme Programming (Beck
Don't Let Micropayments Penalize You--Experience from the City University of Hong Kong
ERIC Educational Resources Information Center
Ching, Steve H.; Tai, Alice; Pong, Joanna; Cheng, Michael
2009-01-01
Self-service is the trend of today's libraries. The Run Run Shaw Library at the City University of Hong Kong therefore is transforming itself in the same direction and offers to its users what it calls the Easy-service. However, the need to handle cash-based micropayments at the service counter is a stumbling block. There is an urgent need for the…
Activity Monitors Help Users Get Optimum Sun Exposure
NASA Technical Reports Server (NTRS)
2015-01-01
Goddard scientist Shahid Aslam was investigating alternative methods for measuring extreme ultraviolet radiation on the Solar Dynamics Observatory when he hit upon semiconductors that measured wavelengths pertinent to human health. As a result, he and a partner established College Park, Maryland-based Sensor Sensor LLC and developed UVA+B SunFriend, a wrist monitor that lets people know when they've received their optimal amounts of sunlight for the day.
AIDE - Advanced Intrusion Detection Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Cathy L.
2013-04-28
Would you like to know when someone has dropped an undesirable executable binary on your system? What about something less malicious, such as a software installation by a user? What about the user who decides to install a newer version of mod_perl or PHP on your web server without letting you know beforehand? Or even something as simple as when an undocumented config file change is made by another member of the admin group? Do you even want to know about all the changes that happen on a daily basis on your server? The purpose of an intrusion detection system (IDS) is to detect unauthorized, possibly malicious activity. The purpose of a host-based IDS, or file integrity checker, is to check for unauthorized changes to key system files, binaries, libraries, and directories on the system. AIDE is an Open Source file and directory integrity checker. AIDE will let you know when a file or directory has been added, deleted, or modified. It is included with Red Hat Enterprise Linux 6 and is available for other Linux distros. This is a case study describing the process of configuring AIDE on an out-of-the-box RHEL6 installation. Its goal is to illustrate the thinking and the process by which a useful AIDE configuration is built.
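A minimal aide.conf sketch of the kind of configuration such a case study builds; attribute groups and defaults vary by AIDE version, so treat the rule below as illustrative rather than a recommended baseline.

```
# Minimal /etc/aide.conf sketch (illustrative; attribute groups vary by
# AIDE version): p=permissions, i=inode, n=link count, u/g=owner/group,
# sha256=content hash.
Rule = p+i+n+u+g+sha256

/bin    Rule      # monitor binaries
/etc    Rule      # and configuration files
!/var/log          # logs change constantly; exclude them

# Typical workflow: run `aide --init` once to build the baseline database,
# move it into place, then run `aide --check` (e.g. from cron) to report
# any added, deleted, or modified files.
```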
Guiding users to quality information about osteoarthritis on the Internet: a pilot study.
Ilic, Dragan; Maloney, Stephen; Green, Sally
2005-12-01
This pilot study explored the feasibility of and user satisfaction with an Internet User's Guide (IUG) to assist patients in sourcing relevant, valid information about osteoarthritis on the Internet. Twelve people with osteoarthritis participated in focus groups that involved searching the Internet for information relating to their condition with the aid of the IUG. Participants were asked to perform an initial search of the Internet for information on osteoarthritis, followed by a second search with the aid of the IUG. User satisfaction with the IUG and the subsequent online searches was obtained during and following the Internet simulations. A total of 92% of all participants had used the Internet to search for health information in the past. However, only a third used the Internet to further source information on their condition. Prior to using the IUG, participants cited the difficulty of efficiently searching the Internet for relevant and credible information as the primary obstacle to their continued use of the Internet. All participants reported that the use of the IUG increased their ability to source quality online medical information. The provision of an IUG may support and increase user awareness about searching for relevant, quality medical information on the Internet. Further quantitative and qualitative research is required to identify how best to empower consumers who wish to use the Internet as a medical resource.
Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai
2017-03-01
The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge. Published by Elsevier Inc.
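The core idea of the recommendation step, mapping a user's query onto known concepts and suggesting semantically interchangeable variants, might be sketched as follows. The synonym table here is a toy stand-in for the UMLS variants and log-curated synonyms the authors describe, not their actual MetaMap pipeline:

```python
# Toy synonym table standing in for UMLS synonym variants plus
# synonyms curated from historical EHR search logs (illustrative only).
SYNONYMS = {
    "mi": {"myocardial infarction", "heart attack"},
    "heart attack": {"myocardial infarction", "mi"},
    "htn": {"hypertension", "high blood pressure"},
}

def recommend_terms(query):
    """Suggest semantically interchangeable terms for a user query.
    Naive substring matching is used here; a real system would run
    proper concept recognition (e.g., MetaMap) instead."""
    suggestions = set()
    for concept, variants in SYNONYMS.items():
        if concept in query.lower():
            suggestions |= variants
    return sorted(suggestions)
```

For example, a query containing "MI" would surface "myocardial infarction" and "heart attack" as candidate expansion terms, which is the kind of suggestion the study's participants rated as relevant but had often not thought of themselves.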
ERIC Educational Resources Information Center
Bell, Mary Ann
2007-01-01
In this article, the author expresses her disappointment over the self-censorship being practiced by some schools. Some schools are only letting students search sites on district-approved lists, while others are imposing stringent time limits on certain sites. In a few extreme cases, schools banned Internet entirely. These practices are blocking…
NASA Astrophysics Data System (ADS)
Chandrashekar, Varsha; B, Prabadevi
2017-11-01
Providing services to users is the main functionality of every search engine. Recently, services based on a user's current location have also been enabled with the help of the GPS in every smartphone. But how safe are their searches, and how trustworthy is the search engine? Why are users tracked even when they turn off tracking, and where does the solution lie? Unless there is a security system to prevent ad trackers from misusing a user's location, any application that relies on the user's location will be of no use. We know that location information is highly sensitive personal data. Knowing where a person was at a particular time, one can infer his or her personal activities, political views, and health status, and launch unsolicited advertising, physical attacks, or harassment. Therefore, mechanisms to preserve users' privacy and anonymity are mandatory in any application that involves users' location, and hence the need to hide the location of the users. The proposed application aims to implement some of the features required for preserving users' privacy, along with a secure user login, so that the services provided to users can be used without danger of their searches being misused.
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
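MetaSEEk's strategy of ranking target engines by past performance for each class of query, refined by user feedback, could be sketched roughly as below. The query-class names and the simple additive scoring are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import defaultdict

class MetaSearchRouter:
    """Rank target search engines by their historical performance
    for each class of user query (a sketch of MetaSEEk's idea)."""

    def __init__(self, engines):
        # scores[query_class][engine] -> running performance score
        self.scores = defaultdict(lambda: {e: 0.0 for e in engines})

    def rank(self, query_class):
        """Recommend engines for this query class, best first."""
        table = self.scores[query_class]
        return sorted(table, key=table.get, reverse=True)

    def feedback(self, query_class, engine, reward):
        """Refine the ranking from user relevance feedback."""
        self.scores[query_class][engine] += reward
```

The baseline version the paper compares against corresponds to skipping `feedback` entirely, so every engine is queried with equal priority regardless of how well it served similar queries before.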
Usability/Sentiment for the Enterprise and ENTERPRISE
NASA Technical Reports Server (NTRS)
Meza, David; Berndt, Sarah
2014-01-01
The purpose of the Sentiment of Search Study for NASA Johnson Space Center (JSC) is to gain insight into the intranet search environment. With an initial usability survey, the authors were able to determine a usability score based on the System Usability Scale (SUS). Created in 1986, the freely available and well-cited SUS is commonly used to determine user perceptions of a system (in this case, the intranet search environment). As with any improvement initiative, one must first examine and document the current reality of the situation. In this scenario, a method was needed to determine the usability of a search interface as well as users' perception of how well the search system was providing results. The SUS provided a mechanism to quickly ascertain information in both areas by adding one open-ended question at the end. The first ten questions allowed us to examine the usability of the system, while the last question informed us of how the users rated the performance of the search results. The final analysis provides us with a better understanding of the current situation and areas to focus on for improvement. The power of search applications to enhance knowledge transfer is indisputable. The performance impact for any user unable to find needed information undermines project lifecycle, resource, and scheduling requirements. The ever-increasing complexity of content and of the user interface makes usability considerations for the intranet, especially for search, a necessity instead of a 'nice-to-have'. Despite these arguments, intranet usability is largely disregarded due to lack of attention beyond the functionality of the infrastructure (White, 2013). The data collected from users of the JSC search system revealed their overall sentiment by means of the widely known System Usability Scale. The scores suggest that 75% (+/-4%) of the population rank the search system below average.
In terms of a grading scale, this equates to a D or lower. JSC users are clearly not satisfied with the current situation; however, they are eager to provide information and assistance in improving the search system. A majority of the respondents provided feedback on the issues most troubling them. This information will be used to enrich the next phase: root cause analysis and solution creation.
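For reference, the standard SUS scoring rule the study relies on can be written out directly: each of the ten items is answered on a 1-5 Likert scale, odd (positively worded) items contribute their score minus one, even (negatively worded) items contribute five minus their score, and the sum is scaled to 0-100:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the ten
    1-5 Likert responses, in questionnaire order."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A raw SUS score of roughly 68 is commonly cited as the empirical average across systems, which is why results clustering well below that mark translate into the D-or-lower letter grades reported here.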
End-User Searching in a Large Library Network: A Case Study of Patent Attorneys.
ERIC Educational Resources Information Center
Vollaro, Alice J.; Hawkins, Donald T.
1986-01-01
Reports results of study of a group of end users (patent attorneys) doing their own online searching at AT&T Bell Laboratories. Highlights include DIALOG databases used by the attorneys, locations and searching modes, characteristics of patent attorney searchers, and problem areas. Questionnaire is appended. (5 references) (EJS)
Improving Web Search for Difficult Queries
ERIC Educational Resources Information Center
Wang, Xuanhui
2009-01-01
Search engines have now become essential tools in all aspects of our life. Although a variety of information needs can be served very successfully, there are still a lot of queries that search engines cannot answer very effectively, and these queries always make users feel frustrated. Since it is quite often that users encounter such "difficult…
ERIC Educational Resources Information Center
Tenopir, Carol
2004-01-01
Only the most dedicated super-searchers are motivated to learn and control command systems, like DialogClassic, that rely on the user to input complex search strategies. Infrequent searchers and most end users choose interfaces that do some of the work for them and make the search process appear easy. However, the easier a good interface seems to…
Features: Real-Time Adaptive Feature and Document Learning for Web Search.
ERIC Educational Resources Information Center
Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai
2001-01-01
Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…
User Practices in Keyword and Boolean Searching on an Online Public Access Catalog.
ERIC Educational Resources Information Center
Ensor, Pat
1992-01-01
Discussion of keyword and Boolean searching techniques in online public access catalogs (OPACs) focuses on a study conducted at Indiana State University that examined users' attitudes toward searching on NOTIS (Northwestern Online Total Integrated System). Relevant literature is reviewed, and implications for library instruction are suggested. (17…
Ephemeral Relevance and User Activities in a Search Session
ERIC Educational Resources Information Center
Jiang, Jiepu
2016-01-01
We study relevance judgment and user activities in a search session. We focus on ephemeral relevance--a contextual measurement regarding the amount of useful information a searcher acquired from a clicked result at a particular time--and two primary types of search activities--query reformulation and click. The purpose of the study is both…
Automatic Recognition of Object Names in Literature
NASA Astrophysics Data System (ADS)
Bonnin, C.; Lesteven, S.; Derriere, S.; Oberto, A.
2008-08-01
SIMBAD is a database of astronomical objects that provides (among other things) their bibliographic references in a large number of journals. Currently, these references have to be entered manually by librarians who read each paper. To cope with the increasing number of papers, CDS is developing a tool to assist the librarians in their work, taking advantage of the Dictionary of Nomenclature of Celestial Objects, which keeps track of object acronyms and of their origin. The program searches for object names directly in PDF documents by comparing the words with all the formats stored in the Dictionary of Nomenclature. It also searches for variable star names based on constellation names, and for a large list of usual names such as Aldebaran or the Crab. Object names found in the documents often correspond to several astronomical objects. The system retrieves all possible matches, displays them with their object type given by SIMBAD, and lets the librarian make the final choice. The bibliographic reference can then be automatically added to the object identifiers in the database. Moreover, the systematic use of the Dictionary of Nomenclature, which is updated manually, made it possible to check the Dictionary automatically and to detect errors and inconsistencies. Last but not least, the program collects additional information such as the position of the object names in the document (in the title, subtitle, abstract, table, figure caption...) and their number of occurrences. In the future, this will make it possible to calculate the 'weight' of an object in a reference and to provide SIMBAD users with important new information, which will help them find the most relevant papers in an object's reference list.
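The dictionary-driven matching the tool performs might look, in miniature, like the following sketch. The acronym patterns and the usual-name list are hypothetical examples, not the actual formats stored in the Dictionary of Nomenclature:

```python
import re

# Illustrative acronym formats standing in for the Dictionary of
# Nomenclature of Celestial Objects (the real dictionary holds many
# thousands of formats; these patterns are simplified assumptions).
FORMATS = {
    "NGC": re.compile(r"\bNGC\s?\d{1,4}\b"),
    "HD":  re.compile(r"\bHD\s?\d{1,6}\b"),
    "PSR": re.compile(r"\bPSR\s?[BJ]\d{4}[+-]\d{2,4}\b"),
}
# A tiny stand-in for the list of usual names (Aldebaran, the Crab, ...).
USUAL_NAMES = {"Aldebaran", "Crab Nebula"}

def find_object_names(text):
    """Return candidate object identifiers found in a document's text,
    for a librarian to confirm against SIMBAD."""
    hits = []
    for pattern in FORMATS.values():
        hits += pattern.findall(text)
    hits += [name for name in USUAL_NAMES if name in text]
    return hits
```

As in the real system, a hit is only a candidate: "NGC 1952" and "Crab Nebula" both resolve to the same object, so the matches are shown to the librarian, who makes the final identification.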
ERIC Educational Resources Information Center
Chung, EunKyung; Yoon, JungWon
2009-01-01
Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…
Let's get real about virtual: online health is here to stay.
Prainsack, Barbara
2013-08-01
A lot has been written about the opportunities of the Internet for medicine and, lately, for disease research specifically. Although it remains to be seen how significant and sustainable the resulting change will be, some recent developments are highly relevant for the area of genetic research. User-friendly, low-threshold web-based tools not only provide information to patients and other users, but also supply user-generated data that can be utilized by both medical practice and medical research. Many of these developments have so far been below the radar of mainstream academic research. Issues related to data quality and standardization, as well as data protection and privacy, still need to be addressed. Dismissing these platforms as fads of a tiny privileged minority risks missing the opportunity to have our say in these debates.
Cross-System Evaluation of Clinical Trial Search Engines
Jiang, Silis Y.; Weng, Chunhua
2014-01-01
Clinical trials are fundamental to the advancement of medicine but constantly face recruitment difficulties. Various clinical trial search engines have been designed to help health consumers identify trials for which they may be eligible. Unfortunately, knowledge of the usefulness and usability of their designs remains scarce. In this study, we used mixed methods, including time-motion analysis, think-aloud protocol, and survey, to evaluate five popular clinical trial search engines with 11 users. Differences in user preferences and time spent on each system were observed and correlated with user characteristics. In general, searching for applicable trials using these systems is a cognitively demanding task. Our results show that user perceptions of these systems are multifactorial. The survey indicated eTACTS being the generally preferred system, but this finding did not persist among all mixed methods. This study confirms the value of mixed-methods for a comprehensive system evaluation. Future system designers must be aware that different user groups expect different functionalities. PMID:25954590
G-Bean: an ontology-graph based web tool for biomedical literature retrieval
2014-01-01
Background Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph-based biomedical search engine, to search biomedical articles in the MEDLINE database more efficiently. Methods G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies (MeSH, SNOMEDCT, CSP, and AOD) to cover all concepts in the National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph, and the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on the user's search intention: after the user selects any article from the existing search results, G-Bean analyzes the user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Results Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database.
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. Conclusions G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user. PMID:25474588
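The TF-IDF re-ranking of expansion concepts in innovation (2) can be illustrated with a toy sketch. The corpus and the simple tf*idf weighting below are illustrative assumptions; G-Bean additionally combines these weights with Personalized PageRank scores over the ontology graph:

```python
import math

def tfidf_rank(candidate_concepts, documents):
    """Re-rank candidate expansion concepts by TF-IDF over an indexed
    corpus (a toy stand-in for G-Bean's concept re-ranking step).
    Each document is a list of concept tokens."""
    n_docs = len(documents)
    scores = {}
    for concept in candidate_concepts:
        df = sum(1 for doc in documents if concept in doc)  # document frequency
        if df == 0:
            continue  # concept never occurs in the corpus; drop it
        tf = sum(doc.count(concept) for doc in documents)   # total term frequency
        scores[concept] = tf * math.log(n_docs / df)
    return sorted(scores, key=scores.get, reverse=True)
```

In G-Bean the analogous ranking is computed over the full MEDLINE index and only the top 500 surviving concepts are appended to the user's initial query.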
G-Bean: an ontology-graph based web tool for biomedical literature retrieval.
Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S
2014-01-01
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest most users do not employ advanced search features, and the linguistic structure often resembles a human-human communication model that is not always successful in human-computer communication.…
Searching for Images: The Analysis of Users' Queries for Image Retrieval in American History.
ERIC Educational Resources Information Center
Choi, Youngok; Rasmussen, Edie M.
2003-01-01
Studied users' queries for visual information in American history to identify the image attributes important for retrieval and the characteristics of users' queries for digital images, based on queries from 38 faculty and graduate students. Results of pre- and post-test questionnaires and interviews suggest principal categories of search terms.…
Subject Access Problems of Different Types of OPAC Users Or, the Double Challenge.
ERIC Educational Resources Information Center
Cochrane, Pauline A.
1989-01-01
Reviews the problems of users of online public access catalogs and argues that it is necessary to think of all searching problems as systems problems rather than user failures, and to concentrate research in the area of systems enhancements. A list of improved tools needed for subject searching in online catalogs is identified. (CLB)
Building a COTS archive for satellite data
NASA Technical Reports Server (NTRS)
Singer, Ken; Terril, Dave; Kelly, Jack; Nichols, Cathy
1994-01-01
The goal of the NOAA/NESDIS Active Archive was to provide a method of access to an online archive of satellite data. The archive had to manage and store the data, let users interrogate the archive, and allow users to retrieve data from the archive. Practical issues of the system design, such as implementation time, cost, and operational support, were examined in addition to the technical issues. There was a fixed window of opportunity to create an operational system, along with budget and staffing constraints. Therefore, the technical solution had to be designed and implemented subject to the constraints imposed by the practical issues. The NOAA/NESDIS Active Archive came online in July of 1994, meeting all of its original objectives.
Health literacy and usability of clinical trial search engines.
Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K
2014-01-01
Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.
2015-01-01
Background In recent years, with advances in techniques for protein structure analysis, the knowledge about protein structure and function has been published in a vast number of articles. A method to search for specific publications from such a large pool of articles is needed. In this paper, we propose a method to search for related articles on protein structure analysis by using an article itself as a query. Results Each article is represented as a set of concepts in the proposed method. Then, by using similarities among concepts formulated from databases such as Gene Ontology, similarities between articles are evaluated. In this framework, the desired search results vary depending on the user's search intention, because a variety of information is included in a single article. Therefore, the proposed method accepts not only one input article (the primary article) but also additional articles related to it as the input query, so as to determine the search intention of the user based on the relationship between the query articles. In other words, based on the concepts contained in the input article and the additional articles, we realize a relevant literature search that considers user intention by varying the degree of attention given to each concept and modifying the concept hierarchy graph. Conclusions We performed an experiment to retrieve relevant papers from articles on protein structure analysis registered in the Protein Data Bank by using three query datasets. The experimental results yielded search results with better accuracy than when user intention was not considered, confirming the effectiveness of the proposed method. PMID:25952498
Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.
Muin, Michael; Fontelo, Paul
2006-11-03
The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
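The backend's connection to E-Utilities boils down to issuing HTTP requests against NCBI's ESearch endpoint. The sketch below merely builds such a request URL in Python rather than the PHP the authors used, and omits optional parameters (no network call is made here):

```python
from urllib.parse import urlencode

# NCBI E-Utilities ESearch endpoint (the service PubMed Interact's
# PHP backend talks to before parsing the returned XML).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=20):
    """Build an ESearch request URL for a MEDLINE/PubMed query.

    retmax caps the number of returned citation IDs, which is the kind
    of parameter the interface exposes through its slider bars.
    """
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return EUTILS_ESEARCH + "?" + urlencode(params)
```

In the application itself, Ajax calls from the browser trigger the backend to fetch and parse the resulting XML, so adjusting a slider can refresh result counts without reloading the page.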
Modeling User Behavior and Attention in Search
ERIC Educational Resources Information Center
Huang, Jeff
2013-01-01
In Web search, query and click log data are easy to collect but they fail to capture user behaviors that do not lead to clicks. As search engines reach the limits inherent in click data and are hungry for more data in a competitive environment, mining cursor movements, hovering, and scrolling becomes important. This dissertation investigates how…
Impact of Internet Search Engines on OPAC Users: A Study of Punjabi University, Patiala (India)
ERIC Educational Resources Information Center
Kumar, Shiv
2012-01-01
Purpose: The aim of this paper is to study the impact of internet search engine usage with special reference to OPAC searches in the Punjabi University Library, Patiala, Punjab (India). Design/methodology/approach: The primary data were collected from 352 users comprising faculty, research scholars and postgraduate students of the university. A…
What do web-use skill differences imply for online health information searches?
Feufel, Markus A; Stahl, S Frederica
2012-06-13
Online health information is of variable and often low scientific quality. In particular, elderly less-educated populations are said to struggle in accessing quality online information (digital divide). Little is known about (1) how their online behavior differs from that of younger, more-educated, and more-frequent Web users, and (2) how the older population may be supported in accessing good-quality online health information. To specify the digital divide between skilled and less-skilled Web users, we assessed qualitative differences in technical skills, cognitive strategies, and attitudes toward online health information. Based on these findings, we identified educational and technological interventions to help Web users find and access good-quality online health information. We asked 22 native German-speaking adults to search for health information online. The skilled cohort consisted of 10 participants who were younger than 30 years of age, had a higher level of education, and were more experienced using the Web than 12 participants in the less-skilled cohort, who were at least 50 years of age. We observed online health information searches to specify differences in technical skills and analyzed concurrent verbal protocols to identify health information seekers' cognitive strategies and attitudes. 
Our main findings relate to (1) attitudes: health information seekers in both cohorts doubted the quality of information retrieved online; among poorly skilled seekers, this was mainly because they doubted their skills to navigate vast amounts of information; once a website was accessed, quality concerns disappeared in both cohorts, (2) technical skills: skilled Web users effectively filtered information according to search intentions and data sources; less-skilled users were easily distracted by unrelated information, and (3) cognitive strategies: skilled Web users searched to inform themselves; less-skilled users searched to confirm their health-related opinions such as "vaccinations are harmful." Independent of Web-use skills, most participants stopped a search once they had found the first piece of evidence satisfying search intentions, rather than according to quality criteria. Findings related to Web-use skills differences suggest two classes of interventions to facilitate access to good-quality online health information. Challenges related to findings (1) and (2) should be remedied by improving people's basic Web-use skills. In particular, Web users should be taught how to avoid information overload by generating specific search terms and to avoid low-quality information by requesting results from trusted websites only. Problems related to finding (3) may be remedied by visually labeling search engine results according to quality criteria.
Sentiment of Search: KM and IT for User Expectations
NASA Technical Reports Server (NTRS)
Berndt, Sarah Ann; Meza, David
2014-01-01
User-perceived value is the number one indicator of a successful implementation of KM and IT collaborations. The system known as "Search" requires more strategy and workflow than a mere data dump or ungoverned infrastructure can provide. Monitoring of user sentiment can provide objective measures of success and justify changes to the user interface. The dynamic nature of information technology makes traditional usability metrics difficult to identify, yet easy to argue against. There is little disagreement, however, on the criticality of adapting to user needs and expectations. The System Usability Scale (SUS), developed by John Brooke in 1986, has become an industry standard for usability engineering. In the first phase, a modified SUS polls the sentiment of representative users of the JSC Search system. This information can be used to correlate user-determined value with the types of information sought and how the system is (or is not) meeting expectations. Sentiment analysis by way of the SUS assists an organization in identifying and prioritizing the KM and IT variables impacting user-perceived value. A secondary, user-group-focused analysis is the topic of additional work that demonstrates the impact of specific changes dictated by user sentiment.
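As a concrete illustration of the instrument behind this abstract: the standard SUS formula maps ten 1-5 Likert responses to a 0-100 score (odd-numbered items are positively worded, even-numbered items negatively worded). A minimal sketch of the published scoring rule, not the modified JSC variant; the responses are invented.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses, each an integer 1-5.  Odd-numbered items
    contribute (response - 1); even-numbered items contribute
    (5 - response).  The summed contributions (0-40) scale by 2.5."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A neutral respondent (all 3s) lands exactly at the midpoint.
print(sus_score([3] * 10))  # -> 50.0
```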
Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)
NASA Astrophysics Data System (ADS)
Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.
2011-12-01
The HEK is an integrated system that helps direct scientists to solar events and data from a variety of providers. The system is fully operational, and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search coverage event metadata for a variety of instruments; the HER allows the user to search annotated event metadata. Both the HCR and HER are accessible via a web API that can return search results in machine-readable formats (e.g., XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include: the Event Detection System (EDS), which continually runs feature-finding algorithms on SDO data to populate the HER with relevant events; a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms; iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page; Panorama, a software tool for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, from which the user can easily create WYSIWYG movies and launch the Annotator tool to describe events and features; and EVACS, which provides a JOGL-powered client for the HER and HCR, displaying the searched-for events on a full-disk magnetogram of the Sun along with more detailed information for each event.
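A hedged sketch of what a client call against an HER/HCR-style web API could look like. The endpoint URL, parameter names, and the event-type code below are all assumptions invented for illustration; only the idea of a time-interval event search returning XML or JSON comes from the abstract.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- not the documented HEK service address.
ENDPOINT = "https://example.org/hek/search"

def build_event_query(event_type, start, end, fmt="json"):
    """Assemble a URL asking an HER/HCR-style service for events of
    one type inside a time interval, in a machine-readable format."""
    params = {
        "event_type": event_type,     # e.g. "FL" -- an assumed code
        "event_starttime": start,
        "event_endtime": end,
        "return": fmt,                # XML or JSON, per the abstract
    }
    return ENDPOINT + "?" + urlencode(params)

url = build_event_query("FL", "2011-02-15T00:00:00", "2011-02-16T00:00:00")
print(url)
```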
A Semantic Approach for Knowledge Discovery to Help Mitigate Habitat Loss in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Graves, S.; Hardin, D.
2008-12-01
Noesis is a meta-search engine and a resource aggregator that uses domain ontologies to provide scoped search capabilities. Ontologies enable Noesis to help users refine their searches for information on the open web and in hidden web locations such as data catalogues with standardized but discipline-specific vocabularies. Through its ontologies, Noesis provides a guided refinement of search queries that produces complete and accurate searches while reducing the user's burden to experiment with different search strings. All search results are organized by categories (e.g., all results from Google are grouped together), which may be selected or omitted according to the desire of the user. During the past two years, ontologies were developed for sea grasses in the Gulf of Mexico and were used to support a habitat restoration demonstration project. Currently these ontologies are being augmented to address the special characteristics of mangroves. These new ontologies will extend the demonstration project to broader regions of the Gulf, including protected mangrove locations in coastal Mexico. Noesis contributes to the decision-making process by producing a comprehensive list of relevant resources based on the semantic information contained in the ontologies. Ontologies are organized as tree-like taxonomies, where the child nodes represent the Specializations and the parent nodes represent the Generalizations of a node or concept. Specializations can be used to provide a more detailed search, while generalizations make the search broader. Ontologies are also used to link two syntactically different terms to one semantic concept (synonyms). Appending a synonym to the query expands the search, thus providing better search coverage. Every concept has a set of properties that are neither in the same inheritance hierarchy (Specializations/Generalizations) nor equivalent (synonyms).
These are called Related Concepts and they are captured in the ontology through property relationships. By using Related Concepts users can search for resources with respect to a particular property. Noesis automatically generates searches that include all of these capabilities, removing the burden from the user and producing broader and more accurate search results. This presentation will demonstrate the features of Noesis and describe its application to habitat studies in the Gulf of Mexico.
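A minimal sketch of the ontology-driven expansion described above, assuming a toy ontology (the terms and relationships are invented, not Noesis's actual sea-grass ontology): synonyms always widen coverage, Specializations narrow the search, and Generalizations broaden it.

```python
# Toy ontology: one concept with the three relationship types the
# abstract describes.  All terms are invented for illustration.
ONTOLOGY = {
    "seagrass": {
        "synonyms": ["sea grass"],
        "specializations": ["turtle grass", "shoal grass"],
        "generalizations": ["marine vegetation"],
    },
}

def expand_query(term, narrow=False, broaden=False):
    """Always append synonyms; add specializations to narrow the
    search, generalizations to broaden it."""
    entry = ONTOLOGY.get(term, {})
    terms = [term] + entry.get("synonyms", [])
    if narrow:
        terms += entry.get("specializations", [])
    if broaden:
        terms += entry.get("generalizations", [])
    return " OR ".join(terms)

print(expand_query("seagrass", narrow=True))
# -> seagrass OR sea grass OR turtle grass OR shoal grass
```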
Military Application of Networking by Touch in Collaborative Planning and Tactical Environments
2007-09-01
the network. For example, processing, or understanding, four plus pages per second, let alone 1500 pages, far surpasses a normal user’s ability to...discovering rapid, evolutionary approaches for filtering four to 1500 pages per second into knowledgeable forms relevant to the user. Unless we...weapons at the ready when the TL receives a slight vibration in the upper right quadrant of a vest he is wearing. The familiar tactile sensation
2009-06-01
to floating point, to multi-level logic. ... Self-aware computation can be distinguished from existing computational models which are...systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self...servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores
2005-06-01
company has developed an exciting prototype technology: … that lets users of PDAs and similar mobile devices put data into their handheld systems...for a class of small, easily carried electronic devices used to store and retrieve information" [2], were at one time viewed as little more than...some of the many ways that PDA technology is currently being used within the DoD: • The Pocket-Sized Forward Entry Device (PFED) is a ruggedized PDA
2004-11-01
institutionalized approaches to solving problems, company/client specific mission priorities (for example, State Department vs. Army Reserve and... independent variables that let the user leave a particular step before finishing all the items, and to return at a later time without any data loss. One...Sales, Main Exchange, Miscellaneous Shops, Post Office, Restaurant, and Theater.) Authorized customers served 04 Other criteria provided by the
Large-scale feature searches of collections of medical imagery
NASA Astrophysics Data System (ADS)
Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.
1993-09-01
Large-scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation) or, if the imagery is in digital format, the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user-friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer-assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
Earthdata Search Usability Study Process
NASA Technical Reports Server (NTRS)
Reese, Mark
2016-01-01
User experience (UX) design is the process of enhancing user satisfaction by improving various aspects of the user's interaction with an application or website. One aspect of UX design is usability, or the extent to which an application can be used to accomplish tasks efficiently, effectively, and with satisfaction. NASA's Earthdata Search Client recently underwent a focused usability testing project to measure usability and gain valuable user feedback and insights to increase usability for its end users. This presentation focuses on the process by which the usability tests were administered and the lessons learned throughout the process.
Wong, Paul Wai-Ching; Fu, King-Wa; Yau, Rickey Sai-Pong; Ma, Helen Hei-Man; Law, Yik-Wa; Chang, Shu-Sen; Yip, Paul Siu-Fai
2013-01-11
The Internet's potential impact on suicide is of major public health interest as easy online access to pro-suicide information or specific suicide methods may increase suicide risk among vulnerable Internet users. Little is known, however, about users' actual searching and browsing behaviors of online suicide-related information. To investigate what webpages people actually clicked on after searching with suicide-related queries on a search engine and to examine what queries people used to get access to pro-suicide websites. A retrospective observational study was done. We used a web search dataset released by America Online (AOL). The dataset was randomly sampled from all AOL subscribers' web queries between March and May 2006 and generated by 657,000 service subscribers. We found 5526 search queries (0.026%, 5526/21,000,000) that included the keyword "suicide". The 5526 search queries included 1586 different search terms and were generated by 1625 unique subscribers (0.25%, 1625/657,000). Of these queries, 61.38% (3392/5526) were followed by users clicking on a search result. Of these 3392 queries, 1344 (39.62%) webpages were clicked on by 930 unique users but only 1314 of those webpages were accessible during the study period. Each clicked-through webpage was classified into 11 categories. The categories of the most visited webpages were: entertainment (30.13%; 396/1314), scientific information (18.31%; 240/1314), and community resources (14.53%; 191/1314). Among the 1314 accessed webpages, we could identify only two pro-suicide websites. We found that the search terms used to access these sites included "commiting suicide with a gas oven", "hairless goat", "pictures of murder by strangulation", and "photo of a severe burn". A limitation of our study is that the database may be dated and confined to mainly English webpages. 
Searching or browsing suicide-related or pro-suicide webpages was uncommon, although a small group of users did access websites that contain detailed suicide method information.
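The study's bookkeeping (filtering a query log for a keyword and computing the fraction of matching queries followed by a click) can be sketched on invented rows; the data below are toy examples, not AOL's.

```python
# Each row: one logged query and whether the user clicked a result.
# Invented examples for illustration only.
log = [
    {"query": "suicide prevention hotline", "clicked": True},
    {"query": "suicide statistics",         "clicked": False},
    {"query": "weather tomorrow",           "clicked": True},
    {"query": "famous suicide in history",  "clicked": True},
]

# Keep only keyword matches, then compute the click-through fraction.
matches = [row for row in log if "suicide" in row["query"]]
click_through = sum(row["clicked"] for row in matches) / len(matches)
print(len(matches), round(click_through, 2))  # -> 3 0.67
```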
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database, and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between system and user. We have implemented this system as a pilot on an application server simulating the Web. As a spatial search engine, our system provides search capability across GML documents, an important step toward improving the efficiency of search engines.
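The crawler's parsing step might look like the following sketch, assuming GML's XML encoding; the fragment is a made-up minimal document, not one of the study's test files.

```python
import xml.etree.ElementTree as ET

# Invented minimal GML fragment for illustration.
GML = """<gml:Point xmlns:gml="http://www.opengis.net/gml">
  <gml:coordinates>45.67,88.56</gml:coordinates>
</gml:Point>"""

# Parse the document and pull out the feature type and coordinates,
# the kind of information a spatial index would store.
root = ET.fromstring(GML)
ns = {"gml": "http://www.opengis.net/gml"}
coords = root.find("gml:coordinates", ns).text
feature = root.tag.split("}")[1]  # strip the namespace prefix
print(feature, coords)  # -> Point 45.67,88.56
```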
A novel visualization model for web search results.
Nguyen, Tien N; Zhang, Jin
2006-01-01
This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system, with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.
Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search
Veeraraghavan, Harini; Miller, James V.
2013-01-01
In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the opensource 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207
Building a Smart Portal for Astronomy
NASA Astrophysics Data System (ADS)
Derriere, S.; Boch, T.
2011-07-01
The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring a lot of effort and learning from the user in order to perform searches. This is often a design choice, where the user must explicitly set many constraints, while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally, a simple text field, like for Google search), and the search logic is made much more complex to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.
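A toy interpreter in the spirit of the single-text-field approach: guess which fragments of a free-text query are a search radius, a waveband, or a target name. The patterns, categories, and parameter names are invented for illustration, not the portal's actual search logic or vocabulary mappings.

```python
import re

def interpret(query):
    """Split one free-text query into assumed parameter slots."""
    params = {}
    # A number followed by "arcmin" is taken as a search radius.
    m = re.search(r"(\d+(?:\.\d+)?)\s*arcmin", query)
    if m:
        params["radius_arcmin"] = float(m.group(1))
        query = query.replace(m.group(0), "")
    # A known waveband word is mapped to a waveband constraint.
    for band in ("radio", "optical", "xray"):
        if band in query:
            params["waveband"] = band
            query = query.replace(band, "")
    # Whatever remains is treated as the target name.
    params["target"] = query.strip()
    return params

print(interpret("M31 optical 5 arcmin"))
# -> {'radius_arcmin': 5.0, 'waveband': 'optical', 'target': 'M31'}
```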
Intra-Operative Dosimetry in Prostate Brachytherapy
2007-11-01
of the focal spot. 2.1. Model for Reconstruction Space Transformation As illustrated in Figure 8, let A & B (with reference frames FA & FB) be the two...simplex optimization method in MATLAB 7.0 with the search space being defined by the distortion modes from PCA. A linear combination of the modes would...arm is tracked with an X-ray fiducial system called FTRAC that is composed of optimally selected polynomial
Pian, Wenjing; Khoo, Christopher Sg; Chi, Jianxing
2017-12-21
Users searching for health information on the Internet may be searching for their own health issue, searching for someone else's health issue, or browsing with no particular health issue in mind. Previous research has found that these three categories of users focus on different types of health information. However, most health information websites provide static content for all users. If the three types of user health information need contexts can be identified by the Web application, the search results or information offered to the user can be customized to increase its relevance or usefulness to the user. The aim of this study was to investigate the possibility of identifying the three user health information contexts (searching for self, searching for others, or browsing with no particular health issue in mind) using just hyperlink clicking behavior; using eye-tracking information; and using a combination of eye-tracking, demographic, and urgency information. Predictive models are developed using multinomial logistic regression. A total of 74 participants (39 females and 35 males) who were mainly staff and students of a university were asked to browse a health discussion forum, Healthboards.com. An eye tracker recorded their examining (eye fixation) and skimming (quick eye movement) behaviors on 2 types of screens: summary result screen displaying a list of post headers, and detailed post screen. The following three types of predictive models were developed using logistic regression analysis: model 1 used only the time spent in scanning the summary result screen and reading the detailed post screen, which can be determined from the user's mouse clicks; model 2 used the examining and skimming durations on each screen, recorded by an eye tracker; and model 3 added user demographic and urgency information to model 2. 
An analysis of variance (ANOVA) found that users' browsing durations were significantly different for the three health information contexts (P<.001). The logistic regression model 3 was able to predict the user's type of health information context with a 10-fold cross validation mean accuracy of 84% (62/74), followed by model 2 at 73% (54/74) and model 1 at 71% (52/74). In addition, correlation analysis found that particular browsing durations were highly correlated with users' age, education level, and the urgency of their information need. A user's type of health information need context (ie, searching for self, for others, or with no health issue in mind) can be identified with reasonable accuracy using just user mouse clicks that can easily be detected by Web applications. Higher accuracy can be obtained using Google Glass or future computing devices with eye tracking functions. ©Wenjing Pian, Christopher SG Khoo, Jianxing Chi. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 21.12.2017.
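The idea behind model 1 (multinomial logistic regression on two browsing durations) can be sketched in pure Python. Everything below is invented toy material: the training rows, the duration values, and the hyperparameters; only the two-feature setup and the three context labels follow the abstract.

```python
import math

LABELS = ["self", "others", "browsing"]
# (summary-screen seconds, detail-screen seconds) -> label index.
DATA = [
    ((30.0, 90.0), 0), ((28.0, 80.0), 0),
    ((60.0, 40.0), 1), ((55.0, 45.0), 1),
    ((10.0, 10.0), 2), ((12.0, 8.0), 2),
]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# weights[k] = (w_summary, w_detail, bias) for class k.
weights = [[0.0, 0.0, 0.0] for _ in LABELS]
for _ in range(1000):  # plain stochastic gradient descent
    for (x1, x2), y in DATA:
        p = softmax([w[0] * x1 + w[1] * x2 + w[2] for w in weights])
        for k, w in enumerate(weights):
            g = p[k] - (1.0 if k == y else 0.0)  # dL/dz_k
            w[0] -= 0.001 * g * x1
            w[1] -= 0.001 * g * x2
            w[2] -= 0.001 * g

def predict(x1, x2):
    p = softmax([w[0] * x1 + w[1] * x2 + w[2] for w in weights])
    return LABELS[p.index(max(p))]

print(predict(29.0, 85.0))  # a point near the "self" training examples
```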
NASA Technical Reports Server (NTRS)
McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)
2001-01-01
To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.
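QUORUM phrase generation, as described, produces phrases containing a user-specified word. A minimal n-gram sketch over invented stand-in narratives (not ASRS data, and without QUORUM's relevance ranking):

```python
# Invented stand-in narratives, not ASRS incident reports.
corpus = [
    "aircraft cleared for takeoff on runway two eight",
    "takeoff clearance was cancelled by the tower",
]

def generate_phrases(word, n=2):
    """List every n-gram in the corpus that contains the given word."""
    phrases = set()
    for doc in corpus:
        tokens = doc.split()
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if word in gram:
                phrases.add(" ".join(gram))
    return sorted(phrases)

print(generate_phrases("takeoff"))
# -> ['for takeoff', 'takeoff clearance', 'takeoff on']
```

Phrases produced this way could then feed a phrase search, mirroring how phrase generation supplies query phrases to QUORUM phrase search.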
In Pursuit of Image: How We Think about Photographs We Seek
ERIC Educational Resources Information Center
Oyarce, Sara
2012-01-01
The user perspective of image search remains poorly understood. The purpose of this study is to identify and investigate the key issues relevant to a user's interaction with images and the user's approach to image search. A deeper understanding of these issues will serve to inform the design of image retrieval systems and in turn better…
The User Interface of ERIC on the Macintosh: A Qualitative Study of Novice Users.
ERIC Educational Resources Information Center
Thomas, Patricia
The experience of novice users searching SilverPlatter's ERIC CD-ROM on the Macintosh was studied. Ten students from an introductory master's level course in library and information science were recruited as volunteer subjects. Subjects were asked to complete a search on the ERIC CD-ROM; and data were collected via observations, a think-aloud…
PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.
Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin
2015-07-02
Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.
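One classic inference idea in the family PIA implements variants of is greedy parsimony: choose the smallest protein set that explains all observed peptides. This sketch is a generic illustration with an invented peptide-to-protein map, not PIA's actual algorithm or data model.

```python
# Invented toy mapping: which peptides each candidate protein explains.
peptides_of = {
    "P1": {"pepA", "pepB"},
    "P2": {"pepB"},            # subsumed by P1
    "P3": {"pepC", "pepD"},
}
observed = {"pepA", "pepB", "pepC", "pepD"}

def greedy_parsimony(prot_map, peptides):
    """Greedily pick proteins covering the most unexplained peptides."""
    remaining, chosen = set(peptides), []
    while remaining:
        best = max(sorted(prot_map),
                   key=lambda p: len(prot_map[p] & remaining))
        chosen.append(best)
        remaining -= prot_map[best]
    return chosen

print(greedy_parsimony(peptides_of, observed))  # -> ['P1', 'P3']
```

Note that P2 is never reported: all of its peptides are already explained by P1, which is exactly the kind of redundancy protein inference exists to resolve.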
Personalized query suggestion based on user behavior
NASA Astrophysics Data System (ADS)
Chen, Wanyu; Hao, Zepeng; Shao, Taihua; Chen, Honghui
Query suggestions help users refine their queries after they input an initial query. Previous work mainly concentrated on similarity-based and context-based query suggestion approaches. However, models that focus on adapting to a specific user (personalization) can help to improve the probability of the user being satisfied. In this paper, we propose a personalized query suggestion model based on users’ search behavior (UB model), where we inject relevance between queries and users’ search behavior into a basic probabilistic model. For the relevance between queries, we consider their semantic similarity and co-occurrence, which reflects the behavior of other users in web search. Regarding the current user’s preference for a query, we combine the user’s short-term and long-term search behavior in a linear fashion and deal with the data sparsity problem using Bayesian probabilistic matrix factorization (BPMF). In particular, we also investigate the impact of different personalization strategies (the combination of the user’s short-term and long-term search behavior) on the performance of query suggestion reranking. We quantify the improvement of our proposed UB model against a state-of-the-art baseline using the public AOL query logs and show that it beats the baseline in terms of metrics used in query suggestion reranking. The experimental results show that: (i) for personalized ranking, users’ behavioral information helps to improve query suggestion effectiveness; and (ii) given a query, merging information inferred from the short-term and long-term search behavior of a particular user can result in a better performance than either plain approach.
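The linear short-/long-term combination described in the abstract reduces to a one-line blend. The candidate queries, scores, and the mixing weight alpha below are invented illustration values, and the sketch omits the BPMF smoothing the full model uses.

```python
def preference(short_score, long_score, alpha=0.6):
    """Blend a user's short-term (current-session) and long-term
    preference for a candidate suggestion; alpha weights the
    short-term side."""
    return alpha * short_score + (1 - alpha) * long_score

# Candidate suggestions with invented (short-term, long-term) scores.
candidates = {"java tutorial": (0.9, 0.2), "java island": (0.1, 0.8)}
ranked = sorted(candidates,
                key=lambda q: preference(*candidates[q]),
                reverse=True)
print(ranked)  # -> ['java tutorial', 'java island']
```

Sweeping alpha between 0 and 1 corresponds to the personalization strategies the paper compares: alpha near 1 trusts the current session, alpha near 0 trusts the user's history.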
Plug Your Users into Library Resources with OpenSearch Plug-Ins
ERIC Educational Resources Information Center
Baker, Nicholas C.
2007-01-01
To bring the library catalog and other online resources right into users' workspace quickly and easily without needing much more than a short XML file, the author, a reference and Web services librarian at Williams College, learned to build and use OpenSearch plug-ins. OpenSearch is a set of simple technologies and standards that allows the…
ERIC Educational Resources Information Center
Bao, Xue-Ming
1998-01-01
A survey of 786 students and faculty found 78.6% used the World Wide Web on a daily or weekly basis and provided information about user demographics, frequency of use and satisfaction, search results and problems, university library home-page use, search strategies, and search training. Discusses challenges and opportunities for librarians…
Earthdata 3.0: A Unified Experience and Platform for Earth Science Discovery
NASA Astrophysics Data System (ADS)
Plofchan, P.; McLaughlin, B. D.
2015-12-01
NASA's EOSDIS (Earth Observing System Data and Information System) has a multitude of websites and applications focused on serving the Earth Science community's extensive data needs. With no central user interface, theme, or mechanism for accessing that data, interrelated systems are confusing and potentially disruptive in users' searches for EOSDIS data holdings. In an effort to bring consistency across these systems, an effort was undertaken to develop Earthdata 3.0: a complete information architecture overhaul of the Earthdata website, a significant update to the Earthdata user experience and user interface, and an increased focus on searching across EOSDIS data holdings, including those housed and made available through DAAC websites. As part of this effort, and in a desire to unify the user experience across related websites, the Earthdata User Interface (EUI) was developed. The EUI is a collection of responsive design components and layouts geared toward creating websites and applications within the Earthdata ecosystem. Each component and layout has been designed specifically for Earth science-related projects, which eliminates some of the complexities of building a website or application from the ground up. Its adoption will ensure both consistent markup and a unified look and feel for end users, thereby increasing usability and accessibility. Additionally, through the use of a Google Search Appliance, custom Clojure code, and in cooperation with DAACs, Earthdata 3.0 presents a variety of search results upon a user's keyword(s) entry. These results are not just textual links, but also direct links to downloadable datasets, visualizations of datasets and collections of data, and related articles and videos for further research. The end result of the development of the EUI and the enhanced multi-response type search is a consistent and usable platform for Earth scientists and users to navigate and locate data to further their research.
Context-Aware Online Commercial Intention Detection
NASA Astrophysics Data System (ADS)
Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng
With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, queries with commercial intention are usually very short. Recognizing queries with commercial intention against common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire, and help advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random fields (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm's performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm improves F1 score by more than 10% over previous algorithms for commercial intention detection.
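A skip-chain CRF is beyond a short sketch, but the underlying intuition, that a short query's commercial intent is informed by other queries in the same session, can be illustrated with a toy scorer. All cue words, weights, and the scoring rule below are invented for the illustration and are not the paper's SCCRF model.

```python
# Naive context-aware commercial-intent scoring (illustration only; the
# paper uses a skip-chain CRF, which this does not implement).
COMMERCIAL_CUES = {"buy", "price", "cheap", "discount", "review", "deal"}

def intent_score(query, session_history, context_weight=0.5):
    """Score commercial intent from the query's own terms plus session context."""
    terms = set(query.lower().split())
    own = len(terms & COMMERCIAL_CUES) / max(len(terms), 1)
    if not session_history:
        return own
    # Context: average intent of earlier queries in the same session.
    context = sum(intent_score(q, []) for q in session_history) / len(session_history)
    return (1 - context_weight) * own + context_weight * context

# A short query like "canon 5d" gains commercial evidence from the context
# query "canon 5d price" issued earlier in the same session.
print(intent_score("canon 5d", ["canon 5d price", "canon 5d review"]))
```

The design point mirrors the abstract: the same query string receives different scores depending on the context in which it is issued.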
Huesch, Marco D; Galstyan, Aram; Ong, Michael K; Doctor, Jason N
2016-06-01
To pilot public health interventions at women potentially interested in maternity care via campaigns on social media (Twitter), social networks (Facebook), and online search engines (Google Search). Primary data from Twitter, Facebook, and Google Search on users of these platforms in Los Angeles between March and July 2014. Observational study measuring the responses of targeted users of Twitter, Facebook, and Google Search exposed to our sponsored messages soliciting them to start an engagement process by clicking through to a study website containing information on maternity care quality information for the Los Angeles market. Campaigns reached a little more than 140,000 consumers each day across the three platforms, with a little more than 400 engagements each day. Facebook and Google search had broader reach, better engagement rates, and lower costs than Twitter. Costs to reach 1,000 targeted users were approximately in the same range as less well-targeted radio and TV advertisements, while initial engagements-a user clicking through an advertisement-cost less than $1 each. Our results suggest that commercially available online advertising platforms in wide use by other industries could play a role in targeted public health interventions. © Health Research and Educational Trust.
GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts
Naito, Yuki; Bono, Hidemasa
2012-01-01
GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, gene annotations and even nucleotide/amino acid sequences, through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches with various types of keywords. The GGRNA web server provides a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users. PMID:22641850
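The mismatch-tolerant, degenerate-code matching the abstract describes can be sketched naively as a linear scan; GGRNA's actual engine is certainly more sophisticated (it would need an index to be "ultrafast"), so this is only an illustration of the matching semantics.

```python
# Sketch of short-sequence search with IUPAC degenerate codes and a
# mismatch budget, in the spirit of GGRNA's nucleotide search.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def find_matches(query, sequence, max_mismatches=3):
    """Return start positions where query matches sequence within the budget."""
    hits = []
    for i in range(len(sequence) - len(query) + 1):
        mismatches = sum(
            sequence[i + j] not in IUPAC[q]
            for j, q in enumerate(query)
        )
        if mismatches <= max_mismatches:
            hits.append(i)
    return hits

# R matches A or G; N matches anything.
print(find_matches("ARN", "AAGTTACG", max_mismatches=0))
```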
Using the Turning Research Into Practice (TRIP) database: how do clinicians really search?*
Meats, Emma; Brassey, Jon; Heneghan, Carl; Glasziou, Paul
2007-01-01
Objectives: Clinicians and patients are increasingly accessing information through Internet searches. This study aimed to examine clinicians' current search behavior when using the Turning Research Into Practice (TRIP) database to examine search engine use and the ways it might be improved. Methods: A Web log analysis was undertaken of the TRIP database—a meta-search engine covering 150 health resources including MEDLINE, The Cochrane Library, and a variety of guidelines. The connectors for terms used in searches were studied, and observations were made of 9 users' search behavior when working with the TRIP database. Results: Of 620,735 searches, most used a single term, and 12% (n = 75,947) used a Boolean operator: 11% (n = 69,006) used “AND” and 0.8% (n = 4,941) used “OR.” Of the elements of a well-structured clinical question (population, intervention, comparator, and outcome), the population was most commonly used, while fewer searches included the intervention. Comparator and outcome were rarely used. Participants in the observational study were interested in learning how to formulate better searches. Conclusions: Web log analysis showed most searches used a single term and no Boolean operators. Observational study revealed users were interested in conducting efficient searches but did not always know how. Therefore, either better training or better search interfaces are required to assist users and enable more effective searching. PMID:17443248
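The Web-log tally behind these percentages can be sketched in a few lines. The log format below is hypothetical (a plain list of query strings), not the TRIP database's actual log schema.

```python
# Minimal sketch of counting Boolean-operator and single-term usage in a
# search log, as in the TRIP Web log analysis.
from collections import Counter

def tally_operators(queries):
    counts = Counter()
    for q in queries:
        terms = q.upper().split()
        counts["total"] += 1
        if "AND" in terms:
            counts["AND"] += 1
        if "OR" in terms:
            counts["OR"] += 1
        if len(terms) == 1:
            counts["single_term"] += 1
    return counts

log = ["asthma", "asthma AND children", "aspirin OR ibuprofen", "diabetes"]
print(tally_operators(log))
```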
White, Ryen W; Horvitz, Eric
2014-01-01
Objective To better understand the relationship between online health-seeking behaviors and in-world healthcare utilization (HU) by studies of online search and access activities before and after queries that pursue medical professionals and facilities. Materials and methods We analyzed data collected from logs of online searches gathered from consenting users of a browser toolbar from Microsoft (N=9740). We employed a complementary survey (N=489) to seek a deeper understanding of information-gathering, reflection, and action on the pursuit of professional healthcare. Results We provide insights about HU through the survey, breaking out its findings by different respondent marginalizations as appropriate. Observations made from search logs may be explained by trends observed in our survey responses, even though the user populations differ. Discussion The results provide insights about how users decide if and when to utilize healthcare resources, and how online health information seeking transitions to in-world HU. The findings from both the survey and the logs reveal behavioral patterns and suggest a strong relationship between search behavior and HU. Although the diversity of our survey respondents is limited and we cannot be certain that users visited medical facilities, we demonstrate that it may be possible to infer HU from long-term search behavior by the apparent influence that health concerns and professional advice have on search activity. Conclusions Our findings highlight different phases of online activities around queries pursuing professional healthcare facilities and services. We also show that it may be possible to infer HU from logs without tracking people's physical location, based on the effect of HU on pre- and post-HU search behavior. This allows search providers and others to develop more robust models of interests and preferences by modeling utilization rather than simply the intention to utilize that is expressed in search queries. 
PMID:23666794
Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches
Muin, Michael; Fontelo, Paul
2006-01-01
Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
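The backend step described, querying NCBI E-Utilities, can be sketched as follows. PubMed Interact's implementation is PHP; this Python fragment only illustrates building an ESearch request URL against the public E-Utilities endpoint (no network call is made here).

```python
# Sketch of constructing an NCBI E-Utilities ESearch request, the kind of
# backend call PubMed Interact's PHP scripts make before parsing the XML.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, retmax=20):
    """Return the ESearch URL whose response lists matching PMIDs."""
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return EUTILS + "?" + urlencode(params)

print(esearch_url("asthma AND children"))
```

Fetching this URL (e.g. with `urllib.request.urlopen`) returns XML containing an `IdList` of PMIDs, which a backend would then parse, much as the paper describes.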
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
Directing the public to evidence-based online content.
Cooper, Crystale Purvis; Gelb, Cynthia A; Vaughn, Alexandra N; Smuland, Jenny; Hughes, Alexandra G; Hawkins, Nikki A
2015-04-01
To direct online users searching for gynecologic cancer information to accurate content, the Centers for Disease Control and Prevention's (CDC) 'Inside Knowledge: Get the Facts About Gynecologic Cancer' campaign sponsored search engine advertisements in English and Spanish. From June 2012 to August 2013, advertisements appeared when US Google users entered search terms related to gynecologic cancer. Users who clicked on the advertisements were directed to relevant content on the CDC website. Compared with the 3 months before the initiative (March-May 2012), visits to the CDC web pages linked to the advertisements were 26 times higher after the initiative began (June-August 2012) (p<0.01), and 65 times higher when the search engine advertisements were supplemented with promotion on television and additional websites (September 2012-August 2013) (p<0.01). Search engine advertisements can direct users to evidence-based content at a highly teachable moment: when they are seeking relevant information. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Gong, Yang; Zhang, Jiajie
2011-04-01
In a distributed information search task, data representation and cognitive distribution jointly affect user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered framework, we proposed a search model and task taxonomy. The model defines its application in the context of a healthcare setting. The taxonomy clarifies the legitimate operations for each type of search task over relational data. We then developed experimental prototypes of hyperlipidemia data displays. Based on these displays, we tested search task performance through two experiments. The experiments used a within-subject design with a random sample of 24 participants. The results support our hypotheses and validate the predictions of the model and task taxonomy. In this study, representation dimensions, data scales, and search task types are the main factors determining search efficiency and effectiveness. Specifically, the more external representations provided on the interface, the better users' search task performance. The results also suggest that ideal search performance occurs when the question type and its corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which could be more effectively designed in electronic medical records.
Making Temporal Search More Central in Spatial Data Infrastructures
NASA Astrophysics Data System (ADS)
Corti, P.; Lewis, B.
2017-10-01
A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we will focus on the temporal aspects of search which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text, the storage of time ranges in the search engine, handling historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
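A minimal stand-in for the "time miner" enrichment step might look like the following. The regex and bounds are drastic simplifications invented for the sketch; real historical-date mining must handle many more date forms than bare four-digit years.

```python
# Toy "time miner": pull four-digit years out of free text and derive a
# time range suitable for storing alongside a record in a search engine.
import re

def mine_time_range(text, earliest=1000, latest=2100):
    """Return (min_year, max_year) found in text, or None if no year appears."""
    years = [int(y) for y in re.findall(r"\b(1\d{3}|20\d{2})\b", text)
             if earliest <= int(y) <= latest]
    return (min(years), max(years)) if years else None

caption = "Boston street map, surveyed 1856, revised 1902."
print(mine_time_range(caption))
```

Indexing the resulting range lets the catalogue answer temporal queries and feed the temporal-histogram faceting the abstract mentions.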
World Wide Web Metaphors for Search Mission Data
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.;
2010-01-01
A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query, and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor. The query is constructed by clicking on hyperlinks that represent new components to the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
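The "search state in a URL" metaphor can be sketched as a round trip between a query-plus-filters state and a shareable URL. The base URL and parameter names below are invented for the illustration, not MSLICE's actual scheme (which is Java).

```python
# Sketch of serializing a search state into a URL and restoring it,
# the mechanism that makes search states saveable and shareable.
from urllib.parse import urlencode, urlparse, parse_qs

def save_state(base, query, filters):
    """Encode the current query and filters as a shareable URL."""
    params = {"q": query, **filters}
    return base + "?" + urlencode(params)

def load_state(url):
    """Recover the query and filters from a saved URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items()}

url = save_state("https://example.org/search", "thermal data", {"sol": "812"})
print(load_state(url))
```

Because each navigation updates the URL, the browser's own back button naturally restores the previous search state, exactly the behavior the abstract describes.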
Analyzing Medical Image Search Behavior: Semantics and Prediction of Query Results.
De-Arteaga, Maria; Eggel, Ivan; Kahn, Charles E; Müller, Henning
2015-10-01
Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which makes it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search and proposes potential improvements by suggesting query modifications. For example, many queries contain only a few terms and therefore are not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will have, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and thus can be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis and data on reformulations done by users in the past can aid the development of better search systems, particularly to improve results for novice users. This paper therefore offers important ideas toward understanding how people search and using this knowledge to improve the performance of specialized medical search engines.
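A trained model is out of scope here, but a heuristic version of the empty-result predictor, using only features of the query terms as the article proposes, might look like this. The features, vocabulary, and threshold are invented for the sketch and are not the article's model.

```python
# Illustrative predictor for "will this query return zero results?",
# built only from features of the query terms themselves.
def likely_empty(query, vocabulary):
    terms = query.lower().split()
    unknown = sum(t not in vocabulary for t in terms)
    # Queries dominated by out-of-vocabulary terms (misspellings,
    # non-medical words) or that are very long tend to come back empty.
    return unknown / max(len(terms), 1) > 0.5 or len(terms) > 6

vocab = {"fracture", "tibia", "mri", "contrast", "lesion"}
print(likely_empty("tibia fracture mri", vocab))
print(likely_empty("xqzt lesion blorf", vocab))
```

A search engine would act on a positive prediction by proposing a modification (dropping or correcting the offending terms) before returning an empty list.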
`Googling' Terrorists: Are Northern Irish Terrorists Visible on Internet Search Engines?
NASA Astrophysics Data System (ADS)
Reilly, P.
In this chapter, the analysis suggests that Northern Irish terrorists are not visible on Web search engines when net users employ conventional Internet search techniques. Editors of mass media organisations traditionally have had the ability to decide whether a terrorist atrocity is `newsworthy,' controlling the `oxygen' supply that sustains all forms of terrorism. This process, also known as `gatekeeping,' is often influenced by the norms of social responsibility, or alternatively, with regard to the interests of the advertisers and corporate sponsors that sustain mass media organisations. The analysis presented in this chapter suggests that Internet search engines can also be characterised as `gatekeepers,' albeit without the ability to shape the content of Websites before it reaches net users. Instead, Internet search engines give priority retrieval to certain Websites within their directory, pointing net users towards these Websites rather than others on the Internet. Net users are more likely to click on links to the more `visible' Websites on Internet search engine directories, these sites invariably being the highest `ranked' in response to a particular search query. A number of factors including the design of the Website and the number of links to external sites determine the `visibility' of a Website on Internet search engines. The study suggests that Northern Irish terrorists and their sympathisers are unlikely to achieve a greater degree of `visibility' online than they enjoy in the conventional mass media through the perpetration of atrocities. Although these groups may have a greater degree of freedom on the Internet to publicise their ideologies, they are still likely to be speaking to the converted or members of the press. 
Although it is easier to locate Northern Irish terrorist organisations on Internet search engines by linking in via ideology, ideological description searches, such as `Irish Republican' and `Ulster Loyalist,' are more likely to generate links pointing towards the sites of research institutes and independent media organisations than sites sympathetic to Northern Irish terrorist organisations. The chapter argues that Northern Irish terrorists are only visible on search engines if net users select the correct search terms.
Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration
NASA Technical Reports Server (NTRS)
Groce, Alex; Joshi, Rajeev
2008-01-01
Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
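The core idea, one nondeterministic harness driven either by random sampling or by exhaustive enumeration, can be sketched in plain Python. This is an illustration of the concept, not SPIN/PROMELA; the system under test and its single failing input are invented.

```python
# One harness expresses nondeterministic input choices; the driver either
# samples them (random testing) or enumerates them all (exhaustive,
# model-checking-style exploration).
import itertools
import random

DOMAINS = [(0, 1, 2, 3), ("x", "y")]   # the nondeterministic input space

def harness(choices):
    """Hypothetical system under test: passes unless given the one bad input."""
    a, b = choices
    return not (a == 2 and b == "y")

def exhaustive():
    """Explore every combination of choices, as a model checker would."""
    return [c for c in itertools.product(*DOMAINS) if not harness(c)]

def random_testing(trials=100, seed=0):
    """Sample the same choice points at random, as a random tester would."""
    rng = random.Random(seed)
    return {c for c in (tuple(rng.choice(d) for d in DOMAINS)
                        for _ in range(trials)) if not harness(c)}

print(exhaustive())
print(random_testing())
```

Because both drivers consume the identical harness, the comparison is over precisely the same input ranges, which is the property the paper's framework provides for PROMELA harnesses of C programs.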
Let's Be Blunt: Consumption Methods Matter Among Black Marijuana Smokers.
Montgomery, LaTrice; Bagot, Kara
2016-05-01
Despite the high prevalence of blunt (i.e., hollowed-out cigars that are filled with marijuana) use among Black marijuana smokers, few studies have examined if and how blunt users differ from traditional joint users. The current study compared the prevalence and patterns of use for those who smoked blunts in the past month (i.e., blunt users) with those who used marijuana through other methods (i.e., other marijuana users). The sample included 935 Black past-month marijuana smokers participating in the 2013 National Survey on Drug Use and Health. Among past-month marijuana smokers, 73.2% were blunt users and 26.8% were other marijuana users. Overall, blunt users initiated marijuana use at an earlier age (15.9 vs. 17.3 years, p < .01) and reported more days of marijuana use in the past month (16 vs. 8 days, p < .01) than did other marijuana users. There were also differences by gender. Among females, blunt users reported a higher odds of past-year marijuana abuse or dependence (23.8%) than other marijuana users (11.2%) (adjusted odds ratio = 1.23, 95% CI [1.12, 3.17], p < .01). However, blunt-using males reported similar odds of past-year marijuana abuse or dependence (approximately 25%) as other marijuana-using males. These findings highlight the need for targeted interventions for blunt users as a subgroup of marijuana users, especially among Black females, who may be at increased risk for developing a marijuana use disorder as a result of blunt smoking.
Large Scale Data Analytics of User Behavior for Improving Content Delivery
2014-12-01
video streaming, web browsing. The Internet is fast becoming the de facto content delivery network of the world... operators everywhere and they seek to design and manage their networks better to improve content delivery and provide better quality of experience...
Visual task performance using a monocular see-through head-mounted display (HMD) while walking.
Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka
2013-12-01
A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Y.
2015-12-01
Oceanographic resource discovery is a critical step in developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve discovery accuracy of oceanographic data and reduce the time scientists spend discovering, downloading and reformatting data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users find more accurate results, with enhanced performance and user experience through a user-friendly interface.
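A naive blend of the three signals the framework mines might look like the following. The weights, field names, and catalog records are invented for the sketch and are not taken from the paper.

```python
# Sketch of ranking datasets by combining metadata text match, usage
# metrics, and user feedback into one relevancy score.
def rank(datasets, query_terms, w_text=0.6, w_usage=0.3, w_feedback=0.1):
    def score(d):
        terms = set(d["metadata"].lower().split())
        text = len(terms & query_terms) / max(len(query_terms), 1)
        return (w_text * text
                + w_usage * d["downloads_norm"]      # normalized usage metric
                + w_feedback * d["rating_norm"])     # normalized user feedback
    return sorted(datasets, key=score, reverse=True)

catalog = [
    {"name": "sst_monthly", "metadata": "sea surface temperature monthly",
     "downloads_norm": 0.9, "rating_norm": 0.8},
    {"name": "wind_daily", "metadata": "ocean wind speed daily",
     "downloads_norm": 0.4, "rating_norm": 0.6},
]
best = rank(catalog, {"sea", "surface", "temperature"})[0]
print(best["name"])
```

The design choice mirrors the abstract's claim: text match alone is insufficient, so usage and feedback signals break ties and promote datasets the community has found useful.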
Usability Evaluation of Clinical Guidelines on the Web Using Eye-Tracker.
Khodambashi, Soudabeh; Gilstad, Heidi; Nytrø, Øystein
2016-01-01
Publishing clinical guidelines (GLs) on the web increases their accessibility. However, evaluating their usability and understanding how users interact with the websites has been neglected. In this study we used a Tobii eye-tracker to analyse users' interaction with five commercial and public GL sites popular in Norway (four in Norwegian and one in English, of US origin (UpToDate)). We measured the number of clicks and usage rate for search functions, task completion time, and users' objective and perceived task success rates. We also measured the learning effect for inexperienced users. We found a direct correlation between participants' satisfaction with website usability and the time spent, number of mouse clicks and use of the search function to obtain the desired results. Our study showed that users' perceived success rate was not reliable, and GL publishers should evaluate their websites regarding presentation format, layout, navigation bar and search function.
Front-End/Gateway Software: Availability and Usefulness.
ERIC Educational Resources Information Center
Kesselman, Martin
1985-01-01
Reviews features of front-end software packages (interface between user and online system)--database selection, search strategy development, saving and downloading, hardware and software requirements, training and documentation, online systems and database accession, and costs--and discusses gateway services (user searches through intermediary…
A user-friendly tool for medical-related patent retrieval.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will name the same entity in different ways. We present in this report the development and evaluation of a user-friendly interactive Web application aiming at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules, such as chemical query normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced more mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts. This result should be weighed against the limited evaluation sample. We also expect that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.
Welcome, MO; Pereverzev, VA
2014-09-01
Glycemic allostasis is the process by which blood glucose stabilization is achieved through the balancing of the rate of glucose consumption and its release into the bloodstream under a variety of stressors. This paper reviews findings on the dynamics of glycemic levels during mental activities while fasting in non-alcohol users and alcohol users with different periods of abstinence. Articles for this review were retrieved from the PubMed, Scopus, DOAJ and AJOL databases. The search was conducted in 2013 between January 20 and July 31, using the following keywords: alcohol action on glycemia OR brain glucose OR cognitive functions; dynamics of glycemia; dynamics of glycemia during mental activities; dynamics of glycemia on fasting; dynamics of glycemia in non-alcohol users OR alcohol users; glycemic regulation during sobriety. Analysis of the selected articles showed that glycemic allostasis during mental activities while fasting is poorly regulated in alcohol users even after a long duration of sobriety (1-4 weeks after alcohol consumption), compared to non-alcohol users. The major contributor to the maintenance of euglycemia during mental activities after a night's rest (during a continuing fast) is gluconeogenesis.
High Resolution, High Frame Rate Video Technology
NASA Technical Reports Server (NTRS)
1990-01-01
Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) state of the art in video system performance; (2) development plan for the HHV system; (3) advanced technology for image gathering, coding, and processing; (4) data compression applied to HHV; (5) data transmission networks; and (6) results of the users' requirements survey conducted by NASA.
Task-Based Information Searching.
ERIC Educational Resources Information Center
Vakkari, Pertti
2003-01-01
Reviews studies on the relationship between task performance and information searching by end-users, focusing on information searching in electronic environments and information retrieval systems. Topics include task analysis; task characteristics; search goals; modeling information searching; modeling search goals; information seeking behavior;…
NASA Technical Reports Server (NTRS)
Albornoz, Caleb Ronald
2012-01-01
Billions of documents are stored and updated daily on the World Wide Web, and most of this information is not organized efficiently enough to build knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their own skills to look for the information they need. This paper presents different techniques search engine users can apply in Google Search to improve the relevancy of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. For instance, a company that employs 1,000 people wastes an estimated $2.5 million on searches for information that is nonexistent or never found. The cost is high because decisions are made based on the information that is readily available: whenever the information necessary to formulate an argument is not available or cannot be found, poor decisions and mistakes become more likely. The survey also indicates that only 56% of Google users feel confident in their current search skills. Moreover, just 76% of the information that is available on the Internet is accurate.
Google Search Queries About Neurosurgical Topics: Are They a Suitable Guide for Neurosurgeons?
Lawson McLean, Anna C; Lawson McLean, Aaron; Kalff, Rolf; Walter, Jan
2016-06-01
Google is the most popular search engine, with about 100 billion searches per month. Google Trends is an integrated tool that allows users to obtain Google's search popularity statistics from the last decade. Our aim was to evaluate whether Google Trends is a useful tool to assess the public's interest in specific neurosurgical topics. We evaluated Google Trends statistics for the neurosurgical search topic areas "hydrocephalus," "spinal stenosis," "concussion," "vestibular schwannoma," and "cerebral arteriovenous malformation." We compared these with bibliometric data from PubMed and epidemiologic data from the German Federal Monitoring Agency. In addition, we assessed Google users' search behavior for the search terms "glioblastoma" and "meningioma." Over the last 10 years, there has been an increasing interest in the topic "concussion" from Internet users in general and scientists. "Spinal stenosis," "concussion," and "vestibular schwannoma" are topics that are of special interest in high-income countries (e.g., Germany), whereas "hydrocephalus" is a popular topic in low- and middle-income countries. The Google-defined top searches within these topic areas revealed more detail about people's interests (e.g., "normal pressure hydrocephalus" or "football concussion" ranked among the most popular search queries within the corresponding topics). There was a similar volume of queries for "glioblastoma" and "meningioma." Google Trends is a useful source to elicit information about general trends in people's health interests and the role of different diseases across the world. The Internet presence of neurosurgical units and surgeons can be guided by online users' interests to achieve high-quality, professionally endorsed patient education. Copyright © 2016 Elsevier Inc. All rights reserved.
The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.
Nowak, Markus; Castellini, Claudio
2016-01-01
Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks, without the need to explicitly train multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
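The linear-combination idea at the heart of LET can be sketched in a few lines. The function name, the feature layout and the `alpha` weight below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def let_augment(X, Y, pairs, alpha=1.0):
    """Augment a single-DOF training set with synthetic multi-DOF samples.

    X     : (n, d) array of sEMG feature vectors, one per single-DOF sample.
    Y     : (n, k) array of intended activations (one non-zero DOF per row).
    pairs : (i, j) row-index pairs whose activations should be combined.
    alpha : hypothetical scaling of the combined signal.

    Each synthetic sample is a plain linear combination of two single-DOF
    signals and of their labels -- the core assumption behind LET.
    """
    X_new = [alpha * (X[i] + X[j]) for i, j in pairs]
    Y_new = [Y[i] + Y[j] for i, j in pairs]
    return np.vstack([X, X_new]), np.vstack([Y, Y_new])
```

The augmented arrays can then be handed to any intent-detection learner in place of an explicitly recorded multi-DOF training set.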
NASA Technical Reports Server (NTRS)
Siarto, Jeff; Reese, Mark; Shum, Dana; Baynes, Katie
2016-01-01
User experience and visual design are greatly improved when usability testing is performed on a periodic basis. Design decisions should be tested by real users so that application owners can understand the effectiveness of each decision and identify areas for improvement. It is important that applications be tested not just once, but as part of a continuing process that builds upon previous tests. NASA's Earthdata Search Client has undergone a usability study to ensure its users' needs are being met and that users understand how to use the tool efficiently and effectively. This poster will highlight the process followed for the usability study, the results of the study, and what has been implemented in light of the results to improve the application's interface.
Reinforcement Learning in Information Searching
ERIC Educational Resources Information Center
Cen, Yonghua; Gan, Liren; Bai, Chen
2013-01-01
Introduction: The study seeks to answer two questions: How do university students learn to use correct strategies to conduct scholarly information searches without instructions? and, What are the differences in learning mechanisms between users at different cognitive levels? Method: Two groups of users, thirteen first year undergraduate students…
PubMedReco: A Real-Time Recommender System for PubMed Citations.
Samuel, Hamman W; Zaïane, Osmar R
2017-01-01
We present a recommender system, PubMedReco, for real-time suggestions of medical articles from PubMed, a database of over 23 million medical citations. PubMedReco can recommend medical article citations while users are conversing in a synchronous communication environment such as a chat room. Normally, users would have to leave their chat interface to open a new web browser window, and formulate an appropriate search query to retrieve relevant results. PubMedReco automatically generates the search query and shows relevant citations within the same integrated user interface. PubMedReco analyzes relevant keywords associated with the conversation and uses them to search for relevant citations using the PubMed E-utilities programming interface. Our contributions include improvements to the user experience for searching PubMed from within health forums and chat rooms, and a machine learning model for identifying relevant keywords. We demonstrate the feasibility of PubMedReco using BMJ's Doc2Doc forum discussions.
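The automatic query-construction step can be illustrated against the documented PubMed E-utilities `esearch` endpoint. The AND-joining of extracted keywords and the helper name below are a plausible sketch, not PubMedReco's actual code:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(keywords, retmax=5):
    """Combine conversation keywords into a single PubMed esearch URL.

    Keywords are AND-ed together, mirroring the idea of turning chat terms
    into one automatic query (the keyword-relevance model is not shown).
    """
    term = " AND ".join(keywords)
    params = {"db": "pubmed", "term": term, "retmax": retmax,
              "retmode": "json"}
    return EUTILS_ESEARCH + "?" + urlencode(params)
```

A chat plugin would call this with the extracted keywords and render the returned citation IDs inline, sparing users the separate browser window.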
Querying Event Sequences by Exact Match or Similarity Search: Design and Empirical Evaluation
Wongsuphasawat, Krist; Plaisant, Catherine; Taieb-Maimon, Meirav; Shneiderman, Ben
2012-01-01
Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Most graphical user interfaces for database search use an exact match approach, which is often effective, but near misses may also be of interest. We describe a new similarity search interface, in which users specify a query by simply placing events on a blank timeline and retrieve a similarity-ranked list of results. Behind this user interface is a new similarity measure for event sequences which the users can customize by four decision criteria, enabling them to adjust the impact of missing, extra, or swapped events or the impact of time shifts. We describe a use case with Electronic Health Records based on our ongoing collaboration with hospital physicians. A controlled experiment with 18 participants compared exact match and similarity search interfaces. We report on the advantages and disadvantages of each interface and suggest a hybrid interface combining the best of both. PMID:22379286
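A toy version of such a tunable similarity measure, covering missing events, extra events and time shifts (swapped-event penalties are omitted), might look like the following. The weights stand in for the user-adjustable decision criteria, but the scoring itself is an assumption, not the paper's measure:

```python
def sequence_similarity(query, record, w_missing=1.0, w_extra=0.5, w_shift=0.1):
    """Toy similarity between two event sequences.

    Each sequence is a list of (event_type, time) pairs. Query events are
    matched to same-type record events in order; unmatched query events
    count as missing, unmatched record events as extra, and matched events
    are penalised by their time shift. Higher (less negative) is closer.
    """
    used = set()
    penalty = 0.0
    for q_type, q_time in query:
        match = next((i for i, (r_type, _) in enumerate(record)
                      if r_type == q_type and i not in used), None)
        if match is None:
            penalty += w_missing
        else:
            used.add(match)
            penalty += w_shift * abs(record[match][1] - q_time)
    penalty += w_extra * (len(record) - len(used))
    return -penalty
```

Ranking records by this score gives a similarity-ranked result list of the kind the interface returns for a timeline query.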
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Nowadays Web users have clearly expressed their wish to receive personalized services. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as consideration of service personalization and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system has been developed based on the current work.
End-user search behaviors and their relationship to search effectiveness.
Wildemuth, B M; Moore, M E
1995-01-01
One hundred sixty-one MEDLINE searches conducted by third-year medical students were analyzed and evaluated to determine which search moves were used, whether those individual moves were effective, and whether there was a relationship between specific search behaviors and the effectiveness of the search strategy as a whole. The typical search included fourteen search statements, used seven terms or "limit" commands, and resulted in the display of eleven citations. The most common moves were selection of a database, entering single-word terms and free-text term phrases, and combining sets of terms. Syntactic errors were also common. Overall, librarians judged the searches to be adequate, and students were quite satisfied with their own searches. However, librarians also identified many missed opportunities in the search strategies, including underutilization of the controlled vocabulary, subheadings, and synonyms for search concepts. No strong relationships were found between specific search behaviors and search effectiveness (as measured by the librarians' or students' evaluations). Implications of these findings for system design and user education are discussed. PMID:7581185
CALIL.JP, a new web service that provides one-stop searching of Japan-wide libraries' collections
NASA Astrophysics Data System (ADS)
Yoshimoto, Ryuuji
Calil.JP is a new free online service that enables federated searching, marshalling and integration of Web-OPAC data on the collections of libraries from around Japan, offering search results through a user-friendly interface. Developed with the goal of accelerating the discovery of fun-to-read books and motivating users to visit libraries, Calil was initially designed mainly for public library users; it now also covers university libraries and special libraries. This article presents Calil's basic capabilities, concept, progress made thus far, and plans for further development, as viewed from an engineering development manager.
76 FR 62387 - Public User ID Badging
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... additional information regarding online access cards or user training should be directed to Douglas Salser... issues online access cards to customers who wish to use the electronic search systems at the Public Search Facility. Customers may obtain an online access card by completing the application at the Public...
The User Interface: How Does Your Product Look and Feel?
ERIC Educational Resources Information Center
Strukhoff, Roger
1987-01-01
Discusses the importance of user cordial interfaces to the successful marketing of optical data disk products, and describes features of several online systems. The topics discussed include full text searching, indexed searching, menu driven interfaces, natural language interfaces, computer graphics, and possible future developments. (CLB)
Goldsmith, Lesley; Hewson, Paul; Kamel Boulos, Maged N; Williams, Christopher J
2012-01-01
Objective To estimate the effect of online adverts on the probability of finding online cognitive behavioural therapy (CBT) for depression. Design Exploratory online cross-sectional study of the search experience of people in the UK with depression in 2011. (1) The authors identified the search terms over 6 months entered by users who subsequently clicked on the advert for online help for depression. (2) A panel of volunteers across the UK recorded websites presented by a normal Google search for the term ‘depression’. (3) The authors examined these websites to estimate the probabilities of knowledgeable and naive internet users finding online CBT and the improved probability from the addition of a Google advert. Participants (1) 3868 internet users entering search terms related to depression into Google. (2) A panel, recruited online, of 12 UK participants with an interest in depression. Main outcome measures Probability of finding online CBT for depression with/without an advert. Results The 3868 users entered 1748 different search terms, but the single keyword ‘depression’ resulted in two-thirds of the presentations of, and over half the ‘clicks’ on, the advert. In total, 14 different websites were presented to our panel in the first page of Google results for ‘depression’. Four of the 14 websites had links enabling access to online CBT in three clicks for knowledgeable users. Extending this approach to the 10 most frequent search terms, the authors estimated the probabilities of finding online CBT as 0.29 for knowledgeable users and 0.006 for naive users, making it unlikely CBT would be found. Adding adverts that linked directly to online CBT increased the probabilities to 0.31 (knowledgeable) and 0.02 (naive). Conclusions In this case, online CBT was not easy to find and online adverts substantially increased the chance for naive users. Others could use this approach to explore additional impact before committing to long-term Google AdWords advertising budgets.
Trial registration This exploratory case study was a substudy within a cluster randomised trial, registered on http://www.clinicaltrials.gov (reference: NCT01469689). (The trial will be reported subsequently). PMID:22508957
Concentrations of indoor pollutants database: User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-05-01
This manual describes a computer-based database on indoor air pollutants. This comprehensive database helps utility personnel perform rapid searches of the literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary-section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list the types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.
Visualization of usability and functionality of a professional website through web-mining.
Jones, Josette F; Mahoui, Malika; Gopa, Venkata Devi Pragna
2007-10-11
Functional interface design requires understanding of the information system structure and the user. Web logs record user interactions with the interface, and thus provide some insight into user search behavior and the efficiency of the search process. The present study uses a data-mining approach, with techniques such as association rules, clustering and classification, to visualize the usability and functionality of a digital library through in-depth analysis of web logs.
Full Dome Development for Interactive Immersive Training Capabilities
2015-04-03
called the vDome Player. This application serves as a familiar user interface for direct media playback. Modeled after the widely used VLC ...charrette challenge to task. Below are my notes on where everyone is in planning their final projects. Please let me know (comments or email...space with a lot of sound and feeling. What is challenging? The challenge is how to get depth of field in the dome. Trying to gently allure people into
State-of-the-Art in Improved Parts Programming for Numerically Controlled Machines
1976-10-01
than expected lot sizes for NC. Cincinnati Milacron, Inc., has built a $1.25 million Computer Numerical Control (CNC) Manufacturing Center to ...point-to-point user. Lathe and other turning operations are essentially two-axis operations, and there has been some dissatisfaction over APT's...a particular machine (50)." "Software is the key to CNC, the costs of which are easily overlooked. The cost of software development is growing in relation to
Trust-based Anonymous Communication: Adversary Models and Routing Algorithms
2011-10-01
pages 169–187. Springer-Verlag, LNCS 3621, August 2005. [6] D. Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2), 1981. [7] D. Chaum. The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of...U ∪ R ∪ D, where U is a set of users, R is a set of onion routers, and D is a set of destinations. 2. Let E ⊆ (V choose 2) be the set of network links
1992-02-28
the primary goal of instituting remedial measures. Many apparel plants, as they function today in the United States, do not maintain an accurate...type of usage is the primary functional mode for FDAS. Alternatively, the user could suggest a defect to FDAS and let it find out if the defect is...Endeavor The primary objective of the research effort is to develop a knowledge-based system to analyze the causes of defects in apparel
Transterm—extended search facilities and improved integration with other databases
Jacobs, Grant H.; Stockwell, Peter A.; Tate, Warren P.; Brown, Chris M.
2006-01-01
Transterm has now been publicly available for >10 years. Major changes have been made since its last description in this database issue in 2002. The current database provides data for key regions of mRNA sequences, a curated database of mRNA motifs and tools to allow users to investigate their own motifs or mRNA sequences. The key mRNA regions database is derived computationally from Genbank. It contains 3′ and 5′ flanking regions, the initiation and termination signal context and coding sequence for annotated CDS features from Genbank and RefSeq. The database is non-redundant, enabling summary files and statistics to be prepared for each species. Advances include extended search facilities: the database may now be searched by BLAST in addition to regular expressions (patterns), allowing users to search for motifs such as known miRNA sequences, and RefSeq data have been included. The database contains >40 motifs or structural patterns important for translational control. In this release, patterns from UTRsite and Rfam are also incorporated with cross-referencing. Users may search their sequence data with Transterm or user-defined patterns. The system is accessible at . PMID:16381889
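Pattern (regular-expression) searching of the kind Transterm exposes can be sketched as follows; the motif name and regex are examples, not entries from the curated database:

```python
import re

def find_motifs(sequence, patterns):
    """Scan an mRNA sequence for user-defined motif patterns.

    `patterns` maps a motif name to a regular expression; returns the
    start position of every (non-overlapping) match per motif.
    """
    hits = {}
    for name, pattern in patterns.items():
        hits[name] = [m.start() for m in re.finditer(pattern, sequence)]
    return hits
```

For example, scanning a sequence for a polyadenylation-signal-like hexamer returns the positions where it occurs, which is the essence of searching one's own sequence data with user-defined patterns.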
A Fast, Minimalist Search Tool for Remote Sensing Data
NASA Astrophysics Data System (ADS)
Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.
2005-12-01
We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four freetext search form fields, for Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. The search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostGres database only when the user "drills down", and then covering only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.
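The gazetteer idea, in which a geographic term stands in for a latitude/longitude box, can be sketched like this; the place names, bounding boxes and substring matching are illustrative assumptions, not Mirador's actual gazetteer:

```python
# Toy gazetteer mapping place names to (west, south, east, north) boxes;
# the entries are examples, not Mirador's data.
GAZETTEER = {
    "greenland": (-73.0, 59.0, -11.0, 84.0),
    "lake victoria": (31.5, -3.0, 34.9, 0.5),
}

def resolve_location(query):
    """Return the bounding box of the first recognised place name in a
    free-text query, letting geographic terms serve as shorthand for
    latitude/longitude coordinates; None if no place name is found."""
    for name, bbox in GAZETTEER.items():
        if name in query.lower():
            return bbox
    return None
```

The resolved box then joins the keyword and time constraints in the dataset-level search, keeping the form itself down to a few free-text fields.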
Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm that protects user privacy in distributed networks is presented. The algorithm uses user location indexes and multiple parallel threads to quickly search for and select candidate anonymous sets containing more users, with more uniformly distributed locations, in order to accelerate the temporal-spatial anonymisation operations; it also allows users to configure custom privacy-preserving location query requests. Simulation results show that the proposed algorithm can offer location query services to more users simultaneously, improve the performance of the anonymous server and satisfy users' anonymous location requests. PMID:24790579
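One plausible reading of selecting "candidate anonymous sets with more users and more uniform distribution" is a heuristic like the following; the bounding-box spread measure, the tie-breaking order and the function name are assumptions, not the paper's algorithm:

```python
def best_anonymous_set(candidate_sets, k):
    """Choose one candidate anonymous set for a temporal-spatial cloaking step.

    Each candidate is a list of (user_id, x, y) tuples. Sets with fewer
    than k users are rejected; among the rest, prefer more users, then a
    larger bounding-box area as a crude proxy for uniform spatial spread.
    """
    def spread(s):
        xs = [x for _, x, _ in s]
        ys = [y for _, _, y in s]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))

    eligible = [s for s in candidate_sets if len(s) >= k]
    if not eligible:
        return None
    return max(eligible, key=lambda s: (len(s), spread(s)))
```

In a parallel implementation, each thread would score a partition of the candidates and the server would keep the overall best set.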
Administrative Issues in Planning a Library End User Searching Program. ERIC Digest.
ERIC Educational Resources Information Center
Machovec, George S.
This digest presents a reprint of an article which examines management principles that should be considered when implementing library end user searching programs. A brief discussion of specific implementation issues includes needs assessment, hardware, software, training, budgeting, what systems to offer, publicity and marketing, policies and…
Index Relativity and Patron Search Strategy.
ERIC Educational Resources Information Center
Allison, DeeAnn; Childers, Scott
2002-01-01
Describes a study at the University of Nebraska-Lincoln that compared searches in two different keyword indexes with similar content where search results were dependent on search strategy quality, search engine execution, and content. Results showed search engine execution had an impact on the number of matches and that users ignored search help…
Study on user interface of pathology picture archiving and communication system.
Kim, Dasueran; Kang, Peter; Yun, Jungmin; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom
2014-01-01
It is necessary to improve the pathology workflow. A workflow task analysis was performed using a pathology picture archiving and communication system (Pathology PACS) in order to propose a user interface for the Pathology PACS that takes user experience into account. An interface analysis of the Pathology PACS in Seoul National University Hospital and a task analysis of the pathology workflow were performed by observing recorded video. Based on the results obtained, a user interface for the Pathology PACS was proposed. Hierarchical task analysis of the Pathology PACS identified 17 tasks: 1) pre-operation, 2) text, 3) images, 4) medical record viewer, 5) screen transition, 6) pathology identification number input, 7) admission date input, 8) diagnosis doctor, 9) diagnosis code, 10) diagnosis, 11) pathology identification number check box, 12) presence or absence of images, 13) search, 14) clear, 15) Excel save, 16) search results, and 17) re-search. Frequently used menu items were identified and schematized. A user interface for the Pathology PACS considering user experience could be proposed as a preliminary step, and this study may contribute to the development of medical information systems based on user experience and usability.
ERIC Educational Resources Information Center
National Women's Education Centre, Saitama (Japan).
Based on the success of the Fourth World Conference on Women, the National Women's Education Centre of Japan planned and carried out the 1995 International Forum on Intercultural Exchange to search for an up-to-date understanding of the problems of women and ways to solve them and to develop a network of already existing groups. This Forum focused…
On the complexity of search for keys in quantum cryptography
NASA Astrophysics Data System (ADS)
Molotkov, S. N.
2016-03-01
The trace distance is used as a security criterion in proofs of security of keys in quantum cryptography. Some authors have doubted that this criterion can be reduced to the criteria used in classical cryptography. The following question is answered in this work. Let a quantum cryptography system provide an ε-secure key such that ½‖ρ_XE − ρ_U ⊗ ρ_E‖₁ < ε, which will be repeatedly used in classical encryption algorithms. To what extent does the ε-secure key reduce the number of search steps (guesswork) as compared to the use of ideal keys? A direct relation is demonstrated between the complexity of exhaustive key search, which is one of the main security criteria in classical systems, and the trace distance used in quantum cryptography. Bounds for the minimum and maximum numbers of search steps for the determination of the actual key are presented.
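As a back-of-envelope illustration of how a trace-distance bound constrains guesswork (this is not the bound proved in the paper): a key uniform over N values takes (N+1)/2 guesses on average, and statistical distance ε can shift at most ε of probability mass from late-guessed keys to early ones, changing the expected guesswork by at most ε(N−1), so

```latex
\mathbb{E}[G_{\varepsilon}] \;\ge\; \frac{N+1}{2} \;-\; \varepsilon\,(N-1).
```

A small ε therefore leaves the adversary's average search effort close to that for an ideal key, which is the spirit of the relation the paper establishes.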
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur
2013-03-01
Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
search GenBank extends standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in the search GenBank is a unique feature and has a great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/.
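As an illustration of the eUtils orchestration described above, the sketch below merely composes request URLs for an ESearch call followed by an EFetch. The base URL and parameter names follow the public eUtils conventions, but the query and record IDs are invented and no network call is made:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    # ESearch: find record IDs in an NCBI database matching a query.
    return f"{EUTILS}/esearch.fcgi?" + urlencode(
        {"db": db, "term": term, "retmax": retmax})

def efetch_url(db, ids, rettype="gb", retmode="text"):
    # EFetch: retrieve the full records for a list of IDs.
    return f"{EUTILS}/efetch.fcgi?" + urlencode(
        {"db": db, "id": ",".join(ids), "rettype": rettype, "retmode": retmode})

# A two-step choreography: search, then fetch the matching records.
url1 = esearch_url("nucleotide", "BRCA1[Gene] AND human[Organism]")
url2 = efetch_url("nucleotide", ["1234567", "7654321"])
print(url1)
print(url2)
```

A macro in the sense of search GenBank would chain several such calls, feeding the IDs returned by one step into the next.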
Volk, Ruti Malis
2007-04-01
The Patient Education Resource Center at the University of Michigan Comprehensive Cancer Center conducts mediated searches for patients and families seeking information on complex medical issues, state-of-the-art treatments, and rare cancers. The current study examined user satisfaction and the impact of information provided to this user population. This paper presents the results of 566 user evaluation forms collected between July 2000 and June 2006 (1,532 forms distributed; 37% response rate). Users provided both quantitative and qualitative feedback, which was analyzed and classified into recurrent themes. The majority of users reported they were very satisfied with the information provided (n = 472, 83%). Over half of users (n = 335, 60%) shared or planned to share the information with their health care provider, and 51% (n = 286) reported that the information made an impact on treatment or quality of life. For 96.2% of users (n = 545), some or all of the information provided had not been received through any other source. The results demonstrate that, despite the end-user driven Internet, patients and families are not able to find all the information they need on their own. Expert searching remains an important role for librarians working with consumer health information seekers.
SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows.
Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia
2017-01-01
When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for fresh reconstruction of past archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository https://github.com/ElettraSciComp/STP-Gui.
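The flat-fielding step mentioned above can be sketched in a few lines of NumPy. The array values are synthetic, and a real STP workflow would follow this with further corrections (e.g. ring removal, phase retrieval) and FBP:

```python
import numpy as np

def flat_field(proj, flat, dark, eps=1e-6):
    """Normalize a raw projection by detector response:
    (proj - dark) / (flat - dark), clipped to avoid division by zero."""
    num = proj.astype(float) - dark
    den = np.clip(flat.astype(float) - dark, eps, None)
    return num / den

# Synthetic 2x2 example: a flat beam of 100 counts over a dark
# current of 5, and a sample absorbing half the beam everywhere.
dark = np.full((2, 2), 5.0)
flat = np.full((2, 2), 105.0)
proj = np.full((2, 2), 55.0)
print(flat_field(proj, flat, dark))   # 0.5 everywhere
```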
Automatic query formulations in information retrieval.
Salton, G; Buckley, C; Fox, E A
1983-07-01
Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice.
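The frequency-based formulation idea can be caricatured as follows. This is a toy illustration in the spirit of the approach, not the authors' actual algorithm: low-frequency (discriminating) terms are required, while high-frequency terms are offered as alternatives.

```python
def boolean_query(terms, doc_freq, n_docs, rare_cutoff=0.1):
    """Toy Boolean formulation: AND together the low-frequency
    (discriminating) terms, and ask for at least one of the rest."""
    rare = [t for t in terms if doc_freq.get(t, 0) / n_docs <= rare_cutoff]
    common = [t for t in terms if t not in rare]
    if rare and common:
        return f"({' AND '.join(rare)}) AND ({' OR '.join(common)})"
    return " AND ".join(rare) or " OR ".join(common)

# Invented document frequencies over a 1,000-document collection.
df = {"retrieval": 400, "boolean": 350, "zebrafish": 3}
q = boolean_query(["zebrafish", "retrieval", "boolean"], df, 1000)
print(q)   # (zebrafish) AND (retrieval OR boolean)
```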
Document Clustering Approach for Meta Search Engine
NASA Astrophysics Data System (ADS)
Kumar, Naresh, Dr.
2017-08-01
The size of the WWW is growing exponentially with every change in technology, resulting in huge amounts of information and long lists of URLs. It is not possible to visit each page manually, so if page-ranking algorithms are used properly, the user's search space can be restricted to a few pages of results. The available literature, however, shows that no single search system can provide high-quality results across all domains. This paper addresses this problem by introducing a new meta search engine that determines the relevance of a query to each web page and clusters the results accordingly. The proposed approach reduces user effort and improves both the quality of results and the performance of the meta search engine.
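A minimal sketch of the clustering step (illustrative only, not the paper's method): result snippets are grouped greedily by Jaccard word overlap with each cluster's first member.

```python
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_results(snippets, threshold=0.3):
    """Greedy single-pass clustering: each snippet joins the first
    cluster whose representative is similar enough, else starts one."""
    clusters = []  # list of lists of snippet indices
    for i, s in enumerate(snippets):
        for c in clusters:
            if jaccard(snippets[c[0]], s) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

results = [
    "cheap flights to paris book now",
    "book cheap flights to paris today",
    "paris history museum guide",
]
print(cluster_results(results))   # [[0, 1], [2]]
```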
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schatz, B.R.; Johnson, E.H.; Cochrane, P.A.
The basic problem in information retrieval is that large-scale searches can only match terms specified by the user to terms appearing in documents in the digital library collection. Intermediate sources that support term suggestion can thus enhance retrieval by providing alternative search terms for the user. Term suggestion increases recall, while interaction enables the user to try not to decrease precision. We are building a prototype user interface that will become the Web interface for the University of Illinois Digital Library Initiative (DLI) testbed. It supports the principle of multiple views, where different kinds of term suggestors can be used to complement search and each other. This paper discusses its operation with two complementary term suggestors, subject thesauri and co-occurrence lists, and compares their utility. Thesauri are generated by human indexers and place selected terms in a subject hierarchy. Co-occurrence lists are generated by computer and place all terms in frequency order of occurrence together. The paper concludes with a discussion of how multiple views can help provide good-quality search for the Net. This is a paper about the design of a retrieval system prototype that allows users to simultaneously combine terms offered by different suggestion techniques, not about comparing the merits of each in a systematic and controlled way; it offers no experimental results.
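The co-occurrence suggestor can be sketched simply: for a seed term, count the other terms appearing in the same documents and present them in frequency order. The toy corpus below stands in for the DLI testbed collection:

```python
from collections import Counter

def cooccurrence_suggestions(term, docs, top=5):
    """Rank terms by how often they occur in documents containing `term`."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        if term in words:
            counts.update(w for w in words if w != term)
    return [w for w, _ in counts.most_common(top)]

corpus = [
    "digital library retrieval interface",
    "digital library metadata search",
    "protein folding simulation",
]
suggestions = cooccurrence_suggestions("library", corpus)
print(suggestions)   # "digital" ranked first (appears in both matches)
```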
Environmental Information Management For Data Discovery and Access System
NASA Astrophysics Data System (ADS)
Giriprakash, P.
2011-01-01
Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007 and released in early 2008. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
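The harvest-then-index pattern Mercury uses can be sketched as a fielded inverted index; the record identifiers and fields below are invented for illustration:

```python
from collections import defaultdict

class MetadataIndex:
    """Tiny centralized index over metadata records harvested from
    distributed providers, supporting fielded keyword search."""
    def __init__(self):
        self.postings = defaultdict(set)   # (field, term) -> record ids
        self.records = {}

    def harvest(self, record_id, record):
        self.records[record_id] = record
        for field, text in record.items():
            for term in str(text).lower().split():
                self.postings[(field, term)].add(record_id)

    def search(self, field, term):
        return sorted(self.postings.get((field, term.lower()), set()))

idx = MetadataIndex()
idx.harvest("ornl:1", {"title": "soil carbon flux", "provider": "ORNL"})
idx.harvest("usgs:7", {"title": "stream gauge carbon", "provider": "USGS"})
print(idx.search("title", "carbon"))    # both records
print(idx.search("provider", "USGS"))   # only the USGS record
```

The real system adds spatial and temporal fields, but the core idea is the same: providers keep the data; only the index is centralized.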
Developing A Web-based User Interface for Semantic Information Retrieval
NASA Technical Reports Server (NTRS)
Berrios, Daniel C.; Keller, Richard M.
2003-01-01
While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
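A sketch of the kind of ad-hoc semantic query such an interface might build; the node network, types, attributes, and links below are invented, not SemanticOrganizer's actual schema:

```python
def semantic_query(nodes, links, node_type=None, attr=None, linked_to=None):
    """Select node ids by type, by an (attribute, value) pair,
    and by the presence of a link to a given target node."""
    hits = []
    for nid, node in nodes.items():
        if node_type and node["type"] != node_type:
            continue
        if attr and node.get(attr[0]) != attr[1]:
            continue
        if linked_to and (nid, linked_to) not in links:
            continue
        hits.append(nid)
    return hits

nodes = {
    "exp1": {"type": "experiment", "status": "complete"},
    "exp2": {"type": "experiment", "status": "running"},
    "ds1": {"type": "dataset"},
}
links = {("exp1", "ds1")}   # exp1 produced ds1
print(semantic_query(nodes, links, node_type="experiment", linked_to="ds1"))
```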
Adaptive interface for personalizing information seeking.
Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond
2004-12-01
An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.
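The successive profile refinement described for SRS can be caricatured as accumulating term weights from queries and result content, then ranking sources by overlap with the profile; all names and weights below are invented:

```python
from collections import Counter

class Profile:
    def __init__(self):
        self.weights = Counter()

    def update(self, query, result_text, query_boost=2):
        # Query terms count more than terms merely seen in results.
        for t in query.lower().split():
            self.weights[t] += query_boost
        for t in result_text.lower().split():
            self.weights[t] += 1

    def rank_sources(self, sources):
        """Order data sources by total weight of profile terms they mention."""
        def score(desc):
            return sum(self.weights[t] for t in desc.lower().split())
        return sorted(sources, key=lambda s: score(sources[s]), reverse=True)

p = Profile()
p.update("jet engine thrust", "turbine thrust measurements dataset")
sources = {
    "propulsion_db": "engine thrust turbine tests",
    "weather_db": "precipitation humidity records",
}
ranked = p.rank_sources(sources)
print(ranked)   # propulsion_db first
```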
Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane
2003-01-01
This poster describes the development of user-centered interfaces to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web-based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
A large number of biological databases are publicly available to scientists on the web, and many private databases are generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Integration and querying of biological databases can therefore be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) form and queried using the SPARQL language. Searching for exact matches in these databases is trivial, but exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form, and we first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed and has additional features, such as ranking facet values by several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. Advanced users can run SPARQL queries directly on the databases and can thereby incorporate federated searches of SPARQL endpoints.
We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
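The facet-to-SPARQL translation can be sketched as query assembly; the predicate IRIs and values below are invented and do not reflect BioCarian's actual schema:

```python
def facet_sparql(facets, limit=10):
    """Build a SPARQL query selecting subjects that match every
    chosen facet (predicate IRI -> required literal value)."""
    triples = [
        f'?s <{pred}> "{value}" .' for pred, value in sorted(facets.items())
    ]
    body = "\n  ".join(triples)
    return f"SELECT ?s WHERE {{\n  {body}\n}} LIMIT {limit}"

q = facet_sparql({
    "http://example.org/virus": "HBV",
    "http://example.org/tissue": "liver",
})
print(q)
```

Each facet the user ticks simply contributes one more triple pattern, which is why facet interfaces map so naturally onto SPARQL.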
Engaging Elderly People in Telemedicine Through Gamification
Tabak, Monique; Dekker - van Weering, Marit; Vollenbroek-Hutten, Miriam
2015-01-01
Background: Telemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification: the application of game elements to nongame fields to motivate and increase user activity and retention. Objective: This research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence. Methods: We performed a broad Internet search using scientific and nonscientific search engines and included information that described either of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content. Results: Our search showed two main approaches to frameworks for gamification: from business practices, which mostly aim for more revenue, emerges an applied approach, while academic frameworks are developed incorporating theories on motivation, often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and a significant gap in knowledge on the effectiveness of a gamified application in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies.
The overview we created indicates great connectivity between these taxonomies. Conclusions: Gamification frameworks have been developed from different backgrounds (business and academia) but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized relation between preference for game content and personality. PMID:26685287
Engaging Elderly People in Telemedicine Through Gamification.
de Vette, Frederiek; Tabak, Monique; Dekker-van Weering, Marit; Vollenbroek-Hutten, Miriam
2015-12-18
Telemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification: the application of game elements to nongame fields to motivate and increase user activity and retention. This research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence. We performed a broad Internet search using scientific and nonscientific search engines and included information that described either of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content. Our search showed two main approaches to frameworks for gamification: from business practices, which mostly aim for more revenue, emerges an applied approach, while academic frameworks are developed incorporating theories on motivation, often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and a significant gap in knowledge on the effectiveness of a gamified application in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies. The overview we created indicates great connectivity between these taxonomies.
Gamification frameworks have been developed from different backgrounds (business and academia) but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized relation between preference for game content and personality.
Web Searching: A Process-Oriented Experimental Study of Three Interactive Search Paradigms.
ERIC Educational Resources Information Center
Dennis, Simon; Bruza, Peter; McArthur, Robert
2002-01-01
Compares search effectiveness when using query-based Internet search via the Google search engine, directory-based search via Yahoo, and phrase-based query reformulation-assisted search via the Hyperindex browser by means of a controlled, user-based experimental study of undergraduates at the University of Queensland. Discusses cognitive load,…
The Searching Effectiveness of Social Tagging in Museum Websites
ERIC Educational Resources Information Center
Cho, Chung-Wen; Yeh, Ting-Kuang; Cheng, Shu-Wen; Chang, Chun-Yen
2012-01-01
This paper explores the search effectiveness of social tagging which allows the public to freely tag resources, denoted as keywords, with any words as well as to share personal opinions on those resources. Social tagging potentially helps users to organize, manage, and retrieve resources. Efficient retrieval can help users put more of their focus…
Designing Search: Effective Search Interfaces for Academic Library Web Sites
ERIC Educational Resources Information Center
Teague-Rector, Susan; Ghaphery, Jimmy
2008-01-01
Academic libraries customize, support, and provide access to myriad information systems, each with complex graphical user interfaces. The number of possible information entry points on an academic library Web site is both daunting to the end-user and consistently challenging to library Web site designers. Faced with the challenges inherent in…
Information Diversity in Web Search
ERIC Educational Resources Information Center
Liu, Jiahui
2009-01-01
The web is a rich and diverse information source with incredible amounts of information about all kinds of subjects in various forms. This information source affords great opportunity to build systems that support users in their work and everyday lives. To help users explore information on the web, web search systems should find information that…
After Losing Users in Catalogs, Libraries Find Better Search Software
ERIC Educational Resources Information Center
Parry, Marc
2010-01-01
Traditional online library catalogs do not tend to order search results by ranked relevance, and they can befuddle users with clunky interfaces. However, that's changing because of two technology trends. First, a growing number of universities are shelling out serious money for sophisticated software that makes exploring their collections more…
Research on multi-user encrypted search scheme in cloud environment
NASA Astrophysics Data System (ADS)
Yu, Zonghua; Lin, Sui
2017-05-01
To address shortcomings of existing multi-user encrypted search schemes in cloud computing environments, a basic multi-user encrypted search scheme is first proposed and then extended with anonymous, hierarchical management of authority. Compared with most existing schemes, this scheme protects not only keyword information but also user identity privacy; at the same time, data owners, rather than the cloud server, directly control user query permissions. In addition, special query-key generation rules enable hierarchical management of users' query permissions. Security analysis shows that the scheme is secure, and performance analysis and experimental data show that it is practical.
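As a toy illustration of the general idea behind multi-user searchable encryption (emphatically not the paper's scheme): the data owner derives per-user query keys from a master key, users form keyword tokens with their key, and the server matches tokens without seeing plaintext keywords.

```python
import hashlib, hmac

MASTER_KEY = b"data-owner-master-key"   # held by the data owner only

def user_key(user_id):
    # Owner derives (and can revoke) a per-user query key.
    return hmac.new(MASTER_KEY, user_id.encode(), hashlib.sha256).digest()

def token(key, keyword):
    # Deterministic keyword token: the server can match it against
    # the index but cannot invert it to recover the keyword.
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

# Owner indexes a document under alice's key space.
alice = user_key("alice")
index = {token(alice, "genome"): ["doc42"]}

# Alice queries with a token; the server sees no plaintext keyword.
print(index.get(token(alice, "genome"), []))            # ['doc42']
# Bob's key yields different tokens, so he cannot query alice's index.
print(index.get(token(user_key("bob"), "genome"), []))  # []
```

Note the simplification: here the owner would need a per-user index, whereas real schemes (including the one above) avoid that and add identity anonymity and hierarchical permissions.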
Google Analytics Reports about Search Terms
Learn what search terms brought users to choose your page in their search results, and what terms they entered in the EPA search box after visiting your page. Use this information to improve links and content on the page.
COSMIC: Software catalog 1991 edition diskette format
NASA Technical Reports Server (NTRS)
1991-01-01
The PC edition of the annual COSMIC Software Catalog contains descriptions of the over 1,200 computer programs available for use within the United States as of January 1, 1991. Using the PC version of the catalog, it is possible to conduct extensive searches of the software inventory for programs that meet specific criteria. Elements such as program keywords, hardware specifications, source code languages, and title acronyms can serve as the basis of such searches. After isolating the programs of most interest, the user can either view the results on screen or generate a hardcopy listing of all information on those packages. In addition to the searchable program elements, information such as total program size, distribution media, and program price, as well as extensive abstracts, is also available. Another useful feature allows programs that meet certain search criteria to be retained between sessions, so users can save the programs of interest in different areas of application and later recall a specific collection for information retrieval or further search reduction. In addition, this version of the catalog is adaptable to a network/shared-resource environment, allowing multiple users simultaneous access to a single copy of the catalog database.
User needs analysis and usability assessment of DataMed - a biomedical data discovery index.
Dixit, Ram; Rogith, Deevakar; Narayana, Vidya; Salimi, Mandana; Gururaj, Anupama; Ohno-Machado, Lucila; Xu, Hua; Johnson, Todd R
2017-11-30
To present user needs and usability evaluations of DataMed, a Data Discovery Index (DDI) that allows searching for biomedical data from multiple sources. We conducted 2 phases of user studies. Phase 1 was a user needs analysis conducted before the development of DataMed, consisting of interviews with researchers. Phase 2 involved iterative usability evaluations of DataMed prototypes. We analyzed data qualitatively to document researchers' information and user interface needs. Biomedical researchers' information needs in data discovery are complex, multidimensional, and shaped by their context, domain knowledge, and technical experience. User needs analyses validate the need for a DDI, while usability evaluations of DataMed show that even though aggregating metadata into a common search engine and applying traditional information retrieval tools are promising first steps, there remain challenges for DataMed due to incomplete metadata and the complexity of data discovery. Biomedical data poses distinct problems for search when compared to websites or publications. Making data available is not enough to facilitate biomedical data discovery: new retrieval techniques and user interfaces are necessary for dataset exploration. Consistent, complete, and high-quality metadata are vital to enable this process. While available data and researchers' information needs are complex and heterogeneous, a successful DDI must meet those needs and fit into the processes of biomedical researchers. Research directions include formalizing researchers' information needs, standardizing overviews of data to facilitate relevance judgments, implementing user interfaces for concept-based searching, and developing evaluation methods for open-ended discovery systems such as DDIs. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
STScI Archive Manual, Version 7.0
NASA Astrophysics Data System (ADS)
Padovani, Paolo
1999-06-01
The STScI Archive Manual provides the information a user needs to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It describes the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. With StarView one can also find calibration reference files and perform detailed association searches. With the WWW interface, archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets that includes, among others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver that simplifies searches based on target name.
Intelligent web image retrieval system
NASA Astrophysics Data System (ADS)
Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook
2001-07-01
Recently, web sites such as e-business and shopping mall sites have come to handle large amounts of image information. To find a specific image from these sources, we usually use web search engines or image database engines, which rely on keyword-only retrieval or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of each user's preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, experimental results showing recall and precision are also presented.
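The positional color-block representation can be sketched with NumPy: partition an image into a grid, record the mean color of each cell, and compare images by the distance between these signatures. The grid size and synthetic images below are illustrative:

```python
import numpy as np

def block_signature(img, grid=2):
    """Mean RGB per cell of a grid x grid partition of the image."""
    h, w, _ = img.shape
    sig = np.zeros((grid, grid, 3))
    for i in range(grid):
        for j in range(grid):
            cell = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            sig[i, j] = cell.reshape(-1, 3).mean(axis=0)
    return sig

def distance(a, b):
    # Mean absolute difference between two block signatures.
    return float(np.abs(a - b).mean())

# Two synthetic 4x4 RGB images: one red-ish, one blue-ish.
red = np.zeros((4, 4, 3)); red[..., 0] = 200
blue = np.zeros((4, 4, 3)); blue[..., 2] = 200
print(distance(block_signature(red), block_signature(red)))       # 0.0
print(distance(block_signature(red), block_signature(blue)) > 0)  # True
```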
The role of marijuana use etiquette in avoiding targeted police enforcement
REAM, GEOFFREY L.; JOHNSON, BRUCE D.; DUNLAP, ELOISE; BENOIT, ELLEN
2012-01-01
Internationally, where marijuana is illegal, users follow etiquette rules that prevent negative consequences of use. In this study, adherence to etiquette is hypothesized to reduce likelihood of marijuana-related police stop/search and arrest. Ethnographers administered group surveys to a diverse, purposive sample of 462 marijuana-using peer groups in several areas of New York City. Findings indicated that lack of etiquette was associated with dramatically higher likelihood of police stop/search or arrest only for users who were Black, male, and/or recruited from Harlem/South Bronx. If these users followed a few identified etiquette rules, their risk of police stop/search or arrest was comparable to that of other users. Implications are that etiquette represents an intentional conscientiousness about marijuana use. Groups that are specially targeted for anti-marijuana enforcement can remediate that heightened risk by following marijuana etiquette. PMID:23155303
Moreno, Eliana M; Moriana, Juan Antonio
2016-08-09
There is now broad consensus regarding the importance of involving users in the process of implementing guidelines. Few studies, however, have addressed this issue, let alone the implementation of guidelines for common mental health disorders. The aim of this study is to compile and describe implementation strategies and resources related to common clinical mental health disorders targeted at service users. The literature was reviewed and resources for the implementation of clinical guidelines were compiled using the PRISMA model. A mixed qualitative and quantitative analysis was performed based on a series of categories developed ad hoc. A total of 263 items were included in the preliminary analysis and 64 implementation resources aimed at users were analysed in depth. A wide variety of types, sources and formats were identified, including guides (40%), websites (29%), videos and leaflets, as well as instruments for the implementation of strategies regarding information and education (64%), self-care, or users' assessment of service quality. The results reveal the need to establish clear criteria for assessing the quality of implementation materials in general and standardising systems to classify user-targeted strategies. The compilation and description of key elements of strategies and resources for users can be of interest in designing materials and specific actions for this target audience, as well as improving the implementation of clinical guidelines.
Improving the User Experience of Finding and Visualizing Oceanographic Data
NASA Astrophysics Data System (ADS)
Rauch, S.; Allison, M. D.; Groman, R. C.; Chandler, C. L.; Galvarino, C.; Gegg, S. R.; Kinkade, D.; Shepherd, A.; Wiebe, P. H.; Glover, D. M.
2013-12-01
Searching for and locating data of interest can be a challenge to researchers as increasing volumes of data are made available online through various data centers, repositories, and archives. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is keenly aware of this challenge and, as a result, has implemented features and technologies aimed at improving data discovery and enhancing the user experience. BCO-DMO was created in 2006 to manage and publish data from research projects funded by the Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections and the Division of Polar Programs (PLR) Antarctic Sciences Organisms and Ecosystems Program (ANT) of the US National Science Foundation (NSF). The BCO-DMO text-based and geospatial-based data access systems provide users with tools to search, filter, and visualize data in order to efficiently find data of interest. The geospatial interface, developed using a suite of open-source software (including MapServer [1], OpenLayers [2], ExtJS [3], and MySQL [4]), allows users to search and filter/subset metadata based on program, project, or deployment, or by using a simple word search. The map responds based on user selections, presents options that allow the user to choose specific data parameters (e.g., a species or an individual drifter), and presents further options for visualizing those data on the map or in "quick-view" plots. The data managed and made available by BCO-DMO are very heterogeneous in nature, from in-situ biogeochemical, ecological, and physical data, to controlled laboratory experiments. Due to the heterogeneity of the data types, a 'one size fits all' approach to visualization cannot be applied. Datasets are visualized in a way that will best allow users to assess fitness for purpose. An advanced geospatial interface, which contains a semantically-enabled faceted search [5], is also available. 
These search facets are highly interactive and responsive, allowing users to construct their own custom searches by applying multiple filters. New filtering and visualization tools are continually being added to the BCO-DMO system as new data types are encountered and as we receive feedback from our data contributors and users. As our system becomes more complex, teaching users about the many interactive features becomes increasingly important. Tutorials and videos are made available online. Recent in-person classroom-style tutorials have proven useful for both demonstrating our system to users and for obtaining feedback to further improve the user experience. References: [1] University of Minnesota. MapServer: Open source web mapping. http://www.mapserver.org [2] OpenLayers: Free Maps for the Web. http://www.openlayers.org [3] Sencha. ExtJS. http://www.sencha.com/products/extjs [4] MySQL. http://www.mysql.com/ [5] Maffei, A. R., Rozell, E. A., West, P., Zednik, S., and Fox, P. A. 2011. Open Standards and Technologies in the S2S Framework. Abstract IN31A-1435 presented at American Geophysical Union 2011 Fall Meeting, San Francisco, CA, 7 December 2011.
2015-01-01
Background PubMed is the largest biomedical bibliographic information source on the Internet. PubMed has been considered one of the most important and reliable sources of up-to-date health care evidence. Previous studies examined the effects of domain expertise/knowledge on search performance using PubMed. However, very little is known about PubMed users’ knowledge of information retrieval (IR) functions and their usage in query formulation. Objective The purpose of this study was to shed light on how experienced and nonexperienced PubMed users perform their search queries by analyzing a full-day query log. Our hypotheses were that (1) experienced PubMed users who use system functions retrieve relevant documents quickly and (2) nonexperienced PubMed users who do not use them have longer search sessions than experienced users. Methods To test these hypotheses, we analyzed PubMed query log data containing nearly 3 million queries. User sessions were divided into two categories: experienced and nonexperienced. We compared experienced and nonexperienced users by number of sessions, and their sessions by session length, with a focus on how quickly they completed their sessions. Results We measured retrieval success as the rate at which the number of sessions decreased from a session length of 1 to lengths of 2, 3, 4, and 5 for experienced and nonexperienced users. The decrease rate (from a session length of 1 to 2) of the experienced users was significantly larger than that of the nonexperienced group. Conclusions Experienced PubMed users retrieve relevant documents more quickly than nonexperienced PubMed users in terms of session length. PMID:26139516
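The decrease-rate measure can be sketched as follows; the session counts below are hypothetical, and this is one plausible reading of the paper's measure rather than the authors' exact definition.

```python
# Hedged sketch: "decrease rate" of session counts as session length grows.
# Counts are invented for illustration; the measure's exact definition
# belongs to the paper.

def decrease_rate(counts_by_length, from_len, to_len):
    """Relative drop in the number of sessions between two lengths."""
    a, b = counts_by_length[from_len], counts_by_length[to_len]
    return (a - b) / a

# Hypothetical counts: number of sessions of each length, per group.
experienced = {1: 1000, 2: 350, 3: 180, 4: 110, 5: 80}
nonexperienced = {1: 1000, 2: 620, 3: 430, 4: 320, 5: 260}

exp_drop = decrease_rate(experienced, 1, 2)      # 0.65
non_drop = decrease_rate(nonexperienced, 1, 2)   # 0.38
# A larger drop means more sessions end after a single query, which the
# paper associates with faster retrieval by experienced users.
print(exp_drop > non_drop)  # True
```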
Sander, U; Emmert, M; Grobe, T G
2013-06-01
The Internet provides ways for patients to obtain information about doctors. The study poses the question whether it is possible and how long it takes to find a suitable doctor with an Internet search. It focuses on the effectiveness and efficiency of the search. Specialised physician rating and searching portals and Google are analysed when used to solve specific tasks. The behaviour of volunteers when searching a suitable ophthalmologist, dermatologist or dentist was observed in a usability lab. Additionally, interviews were carried out by means of structured questionnaires to measure the satisfaction of the users with the search and their results. Three physician rating and searching portals that are frequently used in Germany (Jameda.de, DocInsider.de and Arztauskunft.de) were analysed as well as Google. When using Arztauskunft and Google most users found an appropriate physician. When using Docinsider or Jameda they found fewer doctors. Additionally, the time needed to locate a suitable doctor when using Docinsider and Jameda was higher compared to the time needed when using the Arztauskunft and Google. The satisfaction of users who used Google was significantly higher in comparison to those who used the specialised physician rating and searching portals. It emerged from this study that there is no added value when using specialised physician rating and searching portals compared to using the search engine Google when trying to find a doctor having a particular specialty. The usage of several searching portals is recommended to identify as many suitable doctors as possible. © Georg Thieme Verlag KG Stuttgart · New York.
After the bomb. Oklahoma City rescuers talk about their experiences.
Robinson, M; Kernes, R; Lindsay, W; Webster, M
1995-06-01
Rather than trying to write a second-hand description of the response to the April 19 bombing of the Federal Building in Oklahoma City, we thought we'd let some of the people who were there caring for patients and searching for victims share their experiences in their own words. Marion Angell Garza, JEMS editorial/news coordinator, spoke at length with six responders, including paramedics, the triage and treatment officer, a firefighter/EMT-1 and an emergency physician. The following excerpts are from those interviews.
Evans, Melanie
2008-08-04
The bill to aid homeowners that Congress passed last week also offered a gift for tax-exempt healthcare borrowers. The law allows the Federal Home Loan Banks to back tax-exempt bonds with letters of credit, thus letting borrowers benefit from those banks' credit strength. But don't expect the floodgates to open. "Banks are preserving their capital for less risky endeavors," says Kelly Arduino, left, of Wipfli.
XSemantic: An Extension of LCA Based XML Semantic Search
NASA Astrophysics Data System (ADS)
Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai
One of the most convenient ways to query XML data is keyword search, because it requires no knowledge of the XML structure and no new user interface to learn. However, keyword search is ambiguous: users may use different terms to search for the same information, and it is difficult for a system to decide which node should be chosen as a return node and how much information the result should include. To address these challenges, we propose a keyword-based XML semantic search called XSemantic. On the semantics side, we make three contributions. First, through semantic term expansion backed by a domain ontology, our system is robust against ambiguous keywords. Second, to return semantically meaningful answers, we automatically infer the return information from the user's query and use shortest paths to expose meaningful connections between keywords. Third, we present a semantic ranking that reflects both the degree of similarity and the semantic relationship, so that results with higher relevance are presented to users first. On the result-granularity side, we investigated the problem, shared by LCA and proximity search approaches, of how much information to include in search results. We therefore introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule that requires no schema information such as a DTD or XML Schema. Our first experiment indicated that XSemantic not only properly infers the return information but also generates compact, meaningful results; the benefits of our proposed semantics are demonstrated by the second experiment.
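The lowest-common-ancestor step that LCA-based XML keyword search builds on can be sketched as a plain tree operation; the paper's LCEA adds its own rules on top, which are not reproduced here. Node names in the toy tree are invented.

```python
# Minimal sketch of the standard lowest-common-ancestor computation
# underlying LCA-based XML keyword search (illustrative only).

def path_to_root(parent, node):
    """Node followed by all of its ancestors, up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent.get(node)
    return path

def lca(parent, a, b):
    """Deepest node that is an ancestor of both a and b."""
    ancestors = set(path_to_root(parent, a))
    for node in path_to_root(parent, b):
        if node in ancestors:
            return node
    return None

# Toy XML tree: /bib/book/title and /bib/book/author/name.
parent = {
    "bib": None,
    "book1": "bib", "book2": "bib",
    "title1": "book1", "author1": "book1", "name1": "author1",
    "title2": "book2",
}
print(lca(parent, "title1", "name1"))   # book1
print(lca(parent, "title1", "title2"))  # bib
```

For the keywords "title" and "name" matched inside one book, the LCA is that book element, which is why LCA variants are natural candidates for the return node.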
Water Pollution Search | ECHO | US EPA
The Water Pollution Search within the Water Pollutant Loading Tool gives users options to search for pollutant loading information from Discharge Monitoring Report (DMR) and Toxic Release Inventory (TRI) data.
Thesaurus-Enhanced Search Interfaces.
ERIC Educational Resources Information Center
Shiri, Ali Asghar; Revie, Crawford; Chowdhury, Gobinda
2002-01-01
Discussion of user interfaces to information retrieval systems focuses on interfaces that incorporate thesauri as part of their searching and browsing facilities. Discusses research literature related to information searching behavior, information retrieval interface evaluation, search term selection, and query expansion; and compares thesaurus…
Searching the Web: The Public and Their Queries.
ERIC Educational Resources Information Center
Spink, Amanda; Wolfram, Dietmar; Jansen, Major B. J.; Saracevic, Tefko
2001-01-01
Reports findings from a study of searching behavior by over 200,000 users of the Excite search engine. Analysis of over one million queries revealed most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. Concludes that Web searching by the public differs significantly from searching of…
Pian, Wenjing; Khoo, Christopher SG
2017-01-01
Background Users searching for health information on the Internet may be searching for their own health issue, searching for someone else’s health issue, or browsing with no particular health issue in mind. Previous research has found that these three categories of users focus on different types of health information. However, most health information websites provide static content for all users. If the Web application can identify which of these three health information need contexts applies, the search results or information offered can be customized to increase their relevance and usefulness to the user. Objective The aim of this study was to investigate the possibility of identifying the three user health information contexts (searching for self, searching for others, or browsing with no particular health issue in mind) using just hyperlink-clicking behavior; using eye-tracking information; and using a combination of eye-tracking, demographic, and urgency information. Predictive models were developed using multinomial logistic regression. Methods A total of 74 participants (39 females and 35 males), mainly staff and students of a university, were asked to browse a health discussion forum, Healthboards.com. An eye tracker recorded their examining (eye fixation) and skimming (quick eye movement) behaviors on 2 types of screens: a summary result screen displaying a list of post headers, and a detailed post screen. The following three predictive models were developed using logistic regression analysis: model 1 used only the time spent scanning the summary result screen and reading the detailed post screen, which can be determined from the user’s mouse clicks; model 2 used the examining and skimming durations on each screen, recorded by an eye tracker; and model 3 added user demographic and urgency information to model 2.
Results An analysis of variance (ANOVA) found that users’ browsing durations differed significantly across the three health information contexts (P<.001). Logistic regression model 3 was able to predict the user’s type of health information context with a 10-fold cross-validation mean accuracy of 84% (62/74), followed by model 2 at 73% (54/74) and model 1 at 71% (52/78). In addition, correlation analysis found that particular browsing durations were highly correlated with users’ age, education level, and the urgency of their information need. Conclusions A user’s type of health information need context (ie, searching for self, for others, or with no health issue in mind) can be identified with reasonable accuracy using just user mouse clicks, which can easily be detected by Web applications. Higher accuracy can be obtained using Google Glass or future computing devices with eye-tracking functions. PMID:29269342
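The multinomial-logistic prediction step (model 1's duration features) can be sketched as a softmax over per-class linear scores. The coefficients below are made up for illustration; the study's fitted coefficients are not published in this abstract.

```python
import math

# Sketch of multinomial logistic prediction from two browsing durations.
# Class names mirror the paper's contexts; weights are invented.

CLASSES = ["self", "others", "browsing"]
# One (intercept, weight_summary_secs, weight_post_secs) triple per class.
COEFS = {
    "self":     (0.0, -0.02,  0.05),
    "others":   (0.2,  0.01,  0.01),
    "browsing": (0.5,  0.03, -0.04),
}

def predict_context(summary_secs, post_secs):
    """Softmax over per-class linear scores -> most probable context."""
    scores = {c: b0 + w1 * summary_secs + w2 * post_secs
              for c, (b0, w1, w2) in COEFS.items()}
    m = max(scores.values())                      # for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    probs = {c: e / z for c, e in exps.items()}
    return max(probs, key=probs.get), probs

# Long reading of detailed posts, short scanning of the summary screen.
label, probs = predict_context(summary_secs=20, post_secs=90)
print(label)  # self
```

Models 2 and 3 would simply extend the feature vector with eye-tracker durations and demographic/urgency covariates.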
ERIC Educational Resources Information Center
Zhang, Xiangmin; Anghelescu, Hermina G. B.; Yuan, Xiaojun
2005-01-01
Introduction: This study sought to answer three questions: 1) Would the level of domain knowledge significantly affect the user's search behaviour? 2) Would the level of domain knowledge significantly affect search effectiveness, and 3) What would be the relationship between search behaviour and search effectiveness? Method: Participants were…
Designing a Visual Interface for Online Searching.
ERIC Educational Resources Information Center
Lin, Xia
1999-01-01
"MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5.
File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
Impact of Predicting Health Care Utilization Via Web Search Behavior: A Data-Driven Analysis.
Agarwal, Vibhu; Zhang, Liangliang; Zhu, Josh; Fang, Shiyuan; Cheng, Tim; Hong, Chloe; Shah, Nigam H
2016-09-21
By recent estimates, the steady rise in health care costs has deprived more than 45 million Americans of health care services and has encouraged health care providers to better understand the key drivers of health care utilization from a population health management perspective. Prior studies suggest the feasibility of mining population-level patterns of health care resource utilization from observational analysis of Internet search logs; however, the utility of the endeavor to the various stakeholders in a health ecosystem remains unclear. The aim was to carry out a closed-loop evaluation of the utility of health care use predictions using the conversion rates of advertisements that were displayed to the predicted future utilizers as a surrogate. The statistical models to predict the probability of user's future visit to a medical facility were built using effective predictors of health care resource utilization, extracted from a deidentified dataset of geotagged mobile Internet search logs representing searches made by users of the Baidu search engine between March 2015 and May 2015. We inferred presence within the geofence of a medical facility from location and duration information from users' search logs and putatively assigned medical facility visit labels to qualifying search logs. We constructed a matrix of general, semantic, and location-based features from search logs of users that had 42 or more search days preceding a medical facility visit as well as from search logs of users that had no medical visits and trained statistical learners for predicting future medical visits. We then carried out a closed-loop evaluation of the utility of health care use predictions using the show conversion rates of advertisements displayed to the predicted future utilizers. 
In the context of behaviorally targeted advertising, wherein health care providers are interested in minimizing their cost per conversion, the association between show conversion rate and predicted utilization score served as a surrogate measure of the model's utility. We obtained the highest area under the curve (0.796) in medical visit prediction with our random forests model and daywise features. Ablating feature categories one at a time showed that model performance worsened most when location features were dropped. An online evaluation in which advertisements were served to users who had a high predicted probability of a future medical visit showed a 3.96% increase in the show conversion rate. Results from our experiments, done in a research setting, suggest that it is possible to accurately predict future patient visits from geotagged mobile search logs. Results from the offline and online experiments on the utility of health utilization predictions suggest that such predictions can have utility for health care providers.
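The geofence labeling step described in the methods can be sketched as a distance-plus-dwell-time test. The 200 m radius and 30-minute threshold below are illustrative assumptions, not the study's parameters.

```python
import math

# Sketch: label a search log as a putative medical facility visit when the
# logged location falls inside a facility geofence and the dwell time is
# long enough. Radius and threshold values are assumptions.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_putative_visit(log, facility, radius_m=200, min_minutes=30):
    inside = haversine_m(log["lat"], log["lon"],
                         facility["lat"], facility["lon"]) <= radius_m
    return inside and log["dwell_minutes"] >= min_minutes

clinic = {"lat": 39.9042, "lon": 116.4074}
log = {"lat": 39.9046, "lon": 116.4079, "dwell_minutes": 45}
print(is_putative_visit(log, clinic))  # True
```

Search logs labeled this way would supply the positive class for training the visit-prediction model.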
Liverpool's Discovery: A University Library Applies a New Search Tool to Improve the User Experience
ERIC Educational Resources Information Center
Kenney, Brian
2011-01-01
This article features the University of Liverpool's arts and humanities library, which applies a new search tool to improve the user experience. In nearly every way imaginable, the Sydney Jones Library and the Harold Cohen Library--the university's two libraries that serve science, engineering, and medical students--support the lives of their…
Repetition and Diversification in Multi-Session Task Oriented Search
ERIC Educational Resources Information Center
Tyler, Sarah K.
2013-01-01
As the number of documents and the availability of information online grow, so too does the difficulty of sifting through documents to find what we're searching for. Traditional Information Retrieval (IR) systems consider the query to be the representation of the user's needs, and as such are limited by the user's ability to describe the information…
Strategy Effects on Word Searching in Japanese Letter Fluency Tests: Evidence from the NIRS Findings
ERIC Educational Resources Information Center
Hatta, Takeshi; Kanari, Ayano; Mase, Mitsuhito; Nagano, Yuko; Shirataki, Tatsuaki; Hibino, Shinji
2009-01-01
Strategy effects on word searching in the Japanese letter fluency test were investigated using the Near-infrared Spectroscopy (NIRS). Participants were given a Japanese letter fluency test and they were classified into two types of strategy users, based on analysis of their recorded verbal responses. One group, AIUEO-order strategy users, employed…
A SOA broker solution for standard discovery and access services: the GI-cat framework
NASA Astrophysics Data System (ADS)
Boldrini, Enrico
2010-05-01
GI-cat's ideal users are data providers or service providers within the geoscience community. The former have their data already available through an access service (e.g. an OGC Web Service) and would like it published through a standard catalog service in a seamless way. The latter would develop a catalog broker and let users query and access different geospatial resources through one or more standard interfaces and Application Profiles (APs) (e.g. OGC CSW ISO AP, CSW ebRIM/EO AP, etc.). GI-cat implements a broker component (i.e. a middleware service) which carries out distribution and mediation functionalities among well-adopted catalog interfaces and data access protocols. GI-cat also publishes different discovery interfaces: the OGC CSW ISO and ebRIM Application Profiles (the latter with support for the EO and CIM extension packages) and two OpenSearch interfaces developed to explore Web 2.0 possibilities. An extended interface is also available to exploit all GI-cat features, such as interruptible incremental queries and query feedback. Interoperability tests performed in the context of different projects have also pointed out the importance of ensuring compatibility with existing and widespread open-source tools (e.g. the GeoNetwork and Deegree catalogs), which was then achieved. Based on a service-oriented framework of modular components, GI-cat can effectively be customized and tailored to support different deployment scenarios. In addition to the distribution functionality, a harvesting approach has lately been experimented with, allowing the user to switch between a distributed and a local search and thus providing more options for different deployment scenarios. A configurator tool is available to enable effective high-level configuration of the broker service. A specific geobrowser was also developed to demonstrate the advanced GI-cat functionalities.
This client, called GI-go, is an example of the applications that may be built on top of the GI-cat broker component. GI-go allows discovering and browsing the available datasets, retrieving and evaluating their descriptions, and performing distributed queries according to any combination of the following criteria: geographic area, temporal interval, topic of interest (free-text and/or keyword selection are allowed) and data source (i.e. where, when, what, who). The result set of a query (e.g. dataset metadata) is then displayed incrementally, leveraging the asynchronous interaction approach implemented by GI-cat. This feature allows the user to access intermediate query results; query interruption and feedback features are also provided. Alternatively, the user may perform a browsing task by selecting a catalog resource from the current configuration and navigating through its aggregated and/or leaf datasets. In both cases dataset metadata, expressed according to ISO 19139 (and also Dublin Core and ebRIM if available), are displayed for download, along with a resource portrayal and actual data access (when this is meaningful and possible). The GI-cat distributed catalog service has been successfully deployed and tested in the framework of different projects and initiatives, including the SeaDataNet FP6 project, GEOSS IP3 (Interoperability Process Pilot Project), GEOSS AIP-2 (Architectural Implementation Project - Phase 2), FP7 GENESI-DR, CNR GIIDA, FP7 EUROGEOSS and the ESA HMA project.
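The broker pattern that GI-cat implements, one query fanned out to heterogeneous catalogs and merged incrementally, can be sketched in miniature. The backends and records below are stand-ins, not the real CSW/OpenSearch bindings.

```python
# Toy sketch of a catalog broker: translate one query to each backend and
# yield results incrementally so a client (like GI-go) can show
# intermediate results and support interruption. All backends are fake.

def broker_search(query, backends):
    """Yield results backend-by-backend in one common record model."""
    for name, search_fn in backends.items():
        for record in search_fn(query):
            yield {"source": name, **record}

# Two fake catalogs with different native protocols, adapted to one model.
def csw_backend(q):
    return [{"title": t} for t in ["Sea surface temp", "Sea ice extent"]
            if q.lower() in t.lower()]

def opensearch_backend(q):
    return [{"title": t} for t in ["Sea level anomaly"] if q.lower() in t.lower()]

backends = {"csw": csw_backend, "opensearch": opensearch_backend}
results = list(broker_search("sea", backends))
print(len(results))  # 3
```

A real mediation layer would additionally map each backend's metadata schema (e.g. ISO 19139, Dublin Core) onto the common record model.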
NASA Astrophysics Data System (ADS)
Roy, Anjana; Kostkova, Patty; Catchpole, Mike; Carson, Ewart
In the last decade, the Internet has profoundly changed the delivery of healthcare. Medical websites for professionals and patients play an increasingly important role in providing the latest evidence-based knowledge for professionals, facilitating virtual patient support groups, and serving as an invaluable information source for patients. Information seeking is the key user activity on the Internet; however, the discrepancy between what information is available and what the user is able to find has a profound effect on user satisfaction. The UK National electronic Library of Infection (NeLI, www.neli.org.uk) and its subsidiary projects provide a single-access portal for quality-appraised evidence in infectious diseases. We use this national portal as a test-bed for investigating our research questions. In this paper, we investigate actual and perceived user navigation behaviour, which reveals important information about user perceptions and actions when searching for information. Our results show: (i) all users were able to access the information they were seeking; (ii) broadly, there is agreement between "reported" behaviour (from questionnaires) and "observed" behaviour (from web logs), although some important differences were identified; (iii) browsing and searching were used equally to answer specific questions; and (iv) the preferred route for browsing for data on the NeLI website was to enter via the "Top Ten Topics" menu option. These findings provide important insights into how to improve user experience and satisfaction with health information websites.
Setting Priorities in Behavioral Interventions: An Application to Reducing Phishing Risk.
Canfield, Casey Inez; Fischhoff, Baruch
2018-04-01
Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system-level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit. © 2017 Society for Risk Analysis.
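The signal-detection setup underlying the analysis can be sketched with a small Monte Carlo simulation: each simulated user has a sensitivity (d') and a decision criterion, an email's "phishiness" percept is Gaussian, and the user flags the email when the percept exceeds the criterion. Parameter values and sign conventions here are illustrative, not the paper's estimates.

```python
import random

# Monte Carlo sketch of phishing detection in signal detection terms.
# All parameters are invented for illustration.

def simulate_user(d_prime, criterion, n=10000, seed=0):
    """Fraction of phishing emails the user fails to flag (miss rate)."""
    rng = random.Random(seed)
    hits = misses = 0
    for _ in range(n):
        is_phish = rng.random() < 0.5
        mean = d_prime if is_phish else 0.0   # signal vs noise distribution
        percept = rng.gauss(mean, 1.0)
        if is_phish:
            if percept > criterion:
                hits += 1                      # correctly flagged
            else:
                misses += 1                    # a successful phish
    return misses / (hits + misses)

# Low sensitivity plus a high flagging threshold -> highly vulnerable.
vulnerable = simulate_user(d_prime=0.5, criterion=1.0)
# High sensitivity plus a low flagging threshold -> rarely fooled.
cautious = simulate_user(d_prime=2.0, criterion=0.0)
print(vulnerable > cautious)  # True
```

Attaching a cost per successful phish and a cost per intervention to these miss rates gives the kind of net-benefit comparison across user segments that the paper reports.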
Accessing Biomedical Literature in the Current Information Landscape
Khare, Ritu; Leaman, Robert; Lu, Zhiyong
2015-01-01
Summary: Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users, including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full text, and viewing the article. The first section presents the current state of practice of biomedical literature access, with an analysis of the search tools most frequently used by users, such as PubMed, Google Scholar, Web of Science, Scopus, and Embase, and a study of biomedical literature archives such as PubMed Central. The next section describes current research and state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into five key areas related to text and data mining, text similarity search, semantic search, query support, relevance ranking, and clustering results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices and adoption of the open access policy. PMID:24788259
Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P.; Gao, Xin; Harb, Omar S.; Kraemer, Eileen T.; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C.; Roos, David S.; Stoeckert, Christian J.
2011-01-01
Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts, and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as a first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect or minus), or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy’s flow obvious. The interface facilitates interactive adjustment of the component searches with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases, and successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases, and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org PMID:21705364
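The strategy flow described above maps naturally onto set operations. A minimal sketch with hypothetical gene IDs (not real PlasmoDB identifiers):

```python
def run_strategy(enzymes, host_homologs, stage_expressed):
    """(enzymes MINUS host homologs) INTERSECT stage-expressed genes,
    mirroring one Strategies WDK flow with Python set operators."""
    return (enzymes - host_homologs) & stage_expressed

# Hypothetical gene IDs standing in for the three component searches.
candidates = run_strategy(
    enzymes={"g1", "g2", "g3", "g7"},
    host_homologs={"g3", "g9"},
    stage_expressed={"g1", "g2", "g5", "g7"},
)
# candidates == {"g1", "g2", "g7"}
```

Because each step is a pure set operation, changing one component search and re-running propagates forward automatically, which is the interactive-adjustment behavior the abstract describes.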
Keyword Extraction from Multiple Words for Report Recommendations in Media Wiki
NASA Astrophysics Data System (ADS)
Elakiya, K.; Sahayadhas, Arun
2017-03-01
This paper addresses the problem of multiple-word search, with the goal of using such searches to retrieve relevant wiki pages that are then recommended to the end user. However, the existing system provides a link to a wiki page for only a single keyword, and only when that keyword is available in Wikipedia; it is therefore difficult to get correct results when the search input contains multiple keywords or a sentence. We introduce a ‘FastStringSearch’ technique that supports efficient search with multiple keywords, giving the end user more flexibility in reaching the expected content easily.
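The paper does not specify the internals of ‘FastStringSearch’, so the following is only a naive stand-in illustrating multi-keyword retrieval: score each page by the number of distinct query keywords its text contains, and recommend pages best-first.

```python
def rank_pages(query, pages):
    """Score each wiki page by how many distinct query keywords its text
    contains; return titles ordered best-first. A naive stand-in for the
    unspecified 'FastStringSearch' technique."""
    keywords = set(query.lower().split())
    scored = []
    for title, text in pages.items():
        score = len(keywords & set(text.lower().split()))
        if score:
            scored.append((score, title))
    # Stable sort keeps insertion order among equal scores.
    return [title for score, title in sorted(scored, key=lambda s: -s[0])]
```

A single-keyword lookup is just the degenerate case of this ranking, which is why multi-word queries need the scoring step at all.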
Costs and benefits to industry of online literature searches
NASA Technical Reports Server (NTRS)
Jensen, R. J.; Asbury, H. O.; King, R. G.
1980-01-01
A description is given of a client survey conducted by the NASA Industrial Application Center, U.S.C., examining user-identified dollar costs and benefits of an online computerized literature search. Telephone interviews were conducted on a random sample of clients using a Denver Research Institute questionnaire. Of the total 159 clients surveyed, over 53% identified dollar benefits. A direct relationship between client dollars invested and benefits derived from the search was shown. The ratio of dollar benefit to investment dollar averaged 2.9 to 1. Precise data on the end user's evaluation of the dollar value of an information search are presented.
Automated search method for AFM and profilers
NASA Astrophysics Data System (ADS)
Ray, Michael; Martin, Yves C.
2001-08-01
New automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip, obtained from a chip-design database or an image of the area. The user interface requires the user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared with the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for positioning certainty are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.
Start Your Engines: Surfing with Search Engines for Kids.
ERIC Educational Resources Information Center
Byerly, Greg; Brodie, Carolyn S.
1999-01-01
Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)
The Use of Web Search Engines in Information Science Research.
ERIC Educational Resources Information Center
Bar-Ilan, Judit
2004-01-01
Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…
Standardization of Keyword Search Mode
ERIC Educational Resources Information Center
Su, Di
2010-01-01
In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…
The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance
NASA Astrophysics Data System (ADS)
Zimmer, M.
Web search engines have emerged as a ubiquitous and vital tool for the successful navigation of the growing online informational sphere. The goal of the world's largest search engine, Google, is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. While intended to enhance intellectual mobility in the online sphere, this chapter reveals that the quest for the perfect search engine requires the widespread monitoring and aggregation of users' online personal and intellectual activities, threatening the very values the perfect search engine was designed to sustain. It argues that these search-based infrastructures of dataveillance contribute to a rapidly emerging "soft cage" of everyday digital surveillance, where they, like other dataveillance technologies before them, curtail individual freedom, affect users' sense of self, and present issues of deep discrimination and social justice.
The US Geological Survey, digital spectral reflectance library: version 1: 0.2 to 3.0 microns
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; King, Trude V. V.; Gallagher, Andrea J.; Calvin, Wendy M.
1993-01-01
We have developed a digital reflectance spectral library, with management and spectral analysis software. The library includes 500 spectra of 447 samples (some samples include a series of grain sizes) measured from approximately 0.2 to 3.0 microns. The spectral resolution (Full Width Half Maximum) of the reflectance data is less than or equal to 4 nm in the visible (0.2-0.8 microns) and less than or equal to 10 nm in the NIR (0.8-2.35 microns). All spectra were corrected to absolute reflectance using an NBS Halon standard. Library management software lets users search on parameters (e.g. chemical formulae, chemical analyses, purity of samples, mineral groups, etc.) as well as spectral features. Minerals from sulfide, oxide, hydroxide, halide, carbonate, nitrate, borate, phosphate, and silicate groups are represented. X-ray and chemical analyses are tabulated for many of the entries, and all samples have been evaluated for spectral purity. The library also contains end and intermediate members for the olivine, garnet, scapolite, montmorillonite, muscovite, jarosite, and alunite solid-solution series. We have included representative spectra of H2O ice, kerogen, ammonium-bearing minerals, rare-earth oxides, desert varnish coatings, kaolinite crystallinity series, kaolinite-smectite series, zeolite series, and an extensive evaporite series. Because of the importance of vegetation to climate-change studies, we have included 17 spectra of tree leaves, bushes, and grasses.
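Parameter-based searching of such a library reduces to record filtering. A minimal sketch; the records and field names below are hypothetical stand-ins, not the library's actual schema:

```python
# Hypothetical records mimicking the library's searchable parameters;
# the real library also stores chemical formulae, analyses, and spectra.
LIBRARY = [
    {"name": "kaolinite", "group": "silicate",  "grain_size_um": 50},
    {"name": "calcite",   "group": "carbonate", "grain_size_um": 75},
    {"name": "muscovite", "group": "silicate",  "grain_size_um": 20},
]

def search(records, **criteria):
    """Return records whose fields exactly match every criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

silicates = search(LIBRARY, group="silicate")
# -> the kaolinite and muscovite records
```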
The climate4impact portal: bridging CMIP5 data to impact users
NASA Astrophysics Data System (ADS)
Som de Cerff, Wim; Plieger, Maarten; Page, Christian; Hutjes, Ronald; de Jong, Fokke; Barring, Lars; Sjökvist, Elin
2013-04-01
Together with seven other partners (CERFACS, CNRS-IPSL, SMHI, INHGA, CMCC, WUR, MF-CNRM), KNMI is involved in the FP7 project IS-ENES (http://is.enes.org), which supports the European climate modeling infrastructure, in the work package 'Bridging Climate Research Data and the Needs of the Impact Community'. The aim of this work package is to enhance the use of climate model data and to enhance the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in a prototype portal, the ENES portal interface for climate impact communities, that can be visited at www.climate4impact.eu. The portal is connected to all Earth System Grid Federation (ESGF) nodes containing global climate model data (GCM data) from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and later from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of all major climate model data centers offers services for data description, discovery and download. The climate4impact portal connects to these services and offers a user interface for searching, visualizing and downloading global climate model data and more. A challenging task was to describe the available model data and how it can be used. The portal tries to inform users about possible caveats when using model data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. The current portal is a prototype. It is built to explore state-of-the-art technologies to provide improved access to climate model data. The prototype will be evaluated and is the basis for development of an operational service.
The portal and services provided will be sustained and supported during the development of these operational services (2013-2016) in the second phase of the FP7 IS-ENES project, IS-ENES2. In this presentation the architecture will be detailed, along with the following items:
• Security: login using OpenID for access to the ESGF data nodes. The ESGF works in conjunction with several external websites and systems. The portal provides access to several distributed archives, most importantly the ESGF nodes. Single Sign-On (SSO) is used to let these websites and systems work together.
• Discovery: intelligent search based on, e.g., variable name, model, and institute. A catalog browser allows for browsing through CMIP5 and other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA).
• Download: directly from ESGF nodes and other THREDDS catalogs.
• Visualization: visualize any data directly on a map (ADAGUC Map Services).
• Transformation: transform your data into other formats, and perform basic calculations and extractions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately, the software infrastructure required to enable this is lacking or unavailable. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-generation computing hardware architectures such as quantum and neuromorphic computing. It lets computational scientists efficiently offload classically intractable work to attached accelerators through user-friendly kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.
A Secure Mobile-Based Authentication System for e-Banking
NASA Astrophysics Data System (ADS)
Rifà-Pous, Helena
Financial information is extremely sensitive. Hence, electronic banking must provide a robust system to authenticate its customers and let them access their data remotely. On the other hand, such system must be usable, affordable, and portable. We propose a challenge-response based one-time password (OTP) scheme that uses symmetric cryptography in combination with a hardware security module. The proposed protocol safeguards passwords from keyloggers and phishing attacks. Besides, this solution provides convenient mobility for users who want to bank online anytime and anywhere, not just from their own trusted computers.
Genomic DNA Copy-Number Alterations of the let-7 Family in Human Cancers
Greshock, Joel; Shen, Liang; Yang, Xiaojun; Shao, Zhongjun; Liang, Shun; Tanyi, Janos L.; Sood, Anil K.; Zhang, Lin
2012-01-01
In human cancer, expression of the let-7 family is significantly reduced, and this is associated with shorter survival times in patients. However, the mechanisms leading to let-7 downregulation in cancer are still largely unclear. Since an alteration in copy-number is one of the causes of gene deregulation in cancer, we examined copy number alterations of the let-7 family in 2,969 cancer specimens from a high-resolution SNP array dataset. We found that there was a reduction in the copy number of let-7 genes in a cancer-type specific manner. Importantly, focal deletion of four let-7 family members was found in three cancer types: medulloblastoma (let-7a-2 and let-7e), breast cancer (let-7a-2), and ovarian cancer (let-7a-3/let-7b). For example, the genomic locus harboring let-7a-3/let-7b was deleted in 44% of the specimens from ovarian cancer patients. We also found a positive correlation between the copy number of let-7b and mature let-7b expression in ovarian cancer. Finally, we showed that restoration of let-7b expression dramatically reduced ovarian tumor growth in vitro and in vivo. Our results indicate that copy number deletion is an important mechanism leading to the downregulation of expression of specific let-7 family members in medulloblastoma, breast, and ovarian cancers. Restoration of let-7 expression in tumor cells could provide a novel therapeutic strategy for the treatment of cancer. PMID:22970210
Integrating User Reviews and Ratings for Enhanced Personalized Searching
ERIC Educational Resources Information Center
Hu, Shuyue; Cai, Yi; Leung, Ho-fung; Huang, Dongping; Yang, Yang
2017-01-01
With the development of e-commerce, websites such as Amazon and eBay have become very popular. Users post reviews of products and rate the helpfulness of reviews on these websites. Reviews written by a user and reviews rated by a user reflect the user's interests and disinterest. Thus, they are very useful for user profiling. In this study, the…
Multitasking Information Seeking and Searching Processes.
ERIC Educational Resources Information Center
Spink, Amanda; Ozmutlu, H. Cenk; Ozmutlu, Seda
2002-01-01
Presents findings from four studies of the prevalence of multitasking information seeking and searching by Web (via the Excite search engine), information retrieval system (mediated online database searching), and academic library users. Highlights include human information coordinating behavior (HICB); and implications for models of information…
Custom Search Engines: Tools & Tips
ERIC Educational Resources Information Center
Notess, Greg R.
2008-01-01
Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.
Demelo, Jonathan; Parsons, Paul; Sedig, Kamran
2017-02-02
Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. 
©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.
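The vocabulary-support idea, mapping a user's free text onto controlled ontology terms, can be sketched as a synonym lookup. The ontology fragment below is a toy stand-in for illustration, not actual HPO content, and real term mapping would use far more sophisticated matching:

```python
# Toy ontology fragment; IDs and synonym lists are illustrative only.
ONTOLOGY = {
    "HP:0001250": {"label": "Seizure",  "synonyms": ["seizures", "epileptic fit"]},
    "HP:0002315": {"label": "Headache", "synonyms": ["cephalgia"]},
}

def suggest_terms(free_text):
    """Map a free-text phrase to controlled terms by substring matching
    against term labels and synonyms."""
    q = free_text.lower().strip()
    hits = []
    for term_id, entry in ONTOLOGY.items():
        names = [entry["label"]] + entry["synonyms"]
        if any(q in n.lower() or n.lower() in q for n in names):
            hits.append((term_id, entry["label"]))
    return hits
```

Surfacing the matched preferred label back to the user is the step that helps them articulate their information need with more accurate vocabulary.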
SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study
Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael
2005-01-01
Background With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine. PMID:16321145
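SLIM's connection to PubMed goes through the Entrez E-Utilities. A minimal sketch of building an ESearch request URL, where slider positions would map onto parameters such as the result limit and the publication-date window; the mapping shown is assumed for illustration, not SLIM's actual code:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=20, mindate=None, maxdate=None):
    """Build a PubMed ESearch URL; in a slider interface the widget
    positions would set values like retmax and the date limits before
    the request is issued."""
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    if mindate and maxdate:
        # pdat restricts by publication date.
        params.update({"datetype": "pdat",
                       "mindate": mindate, "maxdate": maxdate})
    return EUTILS_ESEARCH + "?" + urlencode(params)

url = build_esearch_url("asthma", retmax=5, mindate="2000", maxdate="2005")
```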
Building a better search engine for earth science data
NASA Astrophysics Data System (ADS)
Armstrong, E. M.; Yang, C. P.; Moroni, D. F.; McGibbney, L. J.; Jiang, Y.; Huang, T.; Greguska, F. R., III; Li, Y.; Finch, C. J.
2017-12-01
Free-text searching of earth science datasets has been implemented with varying degrees of success and completeness across the spectrum of the 12 NASA earth science data centers. At the JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC) the search engine has been developed around the Solr/Lucene platform. Others have chosen other popular enterprise search platforms such as Elasticsearch. Regardless, the default implementations of these search engines, leveraging factors such as dataset popularity, term frequency, and inverse document frequency, do not fully meet the need for precise relevancy ranking of earth science search results. For the PO.DAAC, this shortcoming has been identified for several years by its external User Working Group, which has issued several recommendations to improve the relevancy and discoverability of datasets related to remotely sensed sea surface temperature, ocean wind, waves, salinity, height, and gravity, comprising over 500 publicly available datasets. Recently, the PO.DAAC has teamed with an effort led by George Mason University to improve the search and relevancy ranking of oceanographic data via a simple search interface and powerful backend services called MUDROD (Mining and Utilizing Dataset Relevancy from Oceanographic Datasets to Improve Data Discovery), funded by the NASA AIST program. MUDROD has mined and utilized the combination of PO.DAAC earth science dataset metadata, usage metrics, and user feedback and search history to objectively extract relevance for improved data discovery and access. In addition to improved dataset relevance and ranking, the MUDROD search engine also returns recommendations for related datasets and related user queries.
This presentation will report on use cases that drove the architecture and development, and the success metrics and improvements on search precision and recall that MUDROD has demonstrated over the existing PO.DAAC search interfaces.
NASA Astrophysics Data System (ADS)
Knosp, B.; Gangl, M. E.; Hristova-Veleva, S. M.; Kim, R. M.; Lambrigtsen, B.; Li, P.; Niamsuwan, N.; Shen, T. P. J.; Turk, F. J.; Vu, Q. A.
2014-12-01
The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data related to tropical cyclones. The TCIS has been supporting specific science field campaigns, such as the Genesis and Rapid Intensification Processes (GRIP) campaign and the Hurricane and Severe Storm Sentinel (HS3) campaign, by creating near real-time (NRT) data visualization portals. These portals are intended to assist in mission planning, enhance the understanding of current physical processes, and improve model data by comparing it to satellite and aircraft observations. The TCIS NRT portals allow the user to view plots on a Google Earth interface. To complement these visualizations, the team has been working on developing data analysis tools to let the user actively interrogate areas of Level 2 swath and two-dimensional plots they see on their screen. As expected, these observation and model data are quite voluminous, and bottlenecks in the system architecture can occur when the databases try to run geospatial searches for data files that need to be read by the tools. To improve the responsiveness of the data analysis tools, the TCIS team has been conducting studies on how to best store Level 2 swath footprints and run sub-second geospatial searches to discover data. The first objective was to improve the sampling accuracy of the footprints being stored in the TCIS database by comparing the Java-based NASA PO.DAAC Level 2 Swath Generator with a TCIS Python swath generator. The second objective was to compare the performance of four database implementations - MySQL, MySQL+Solr, MongoDB, and PostgreSQL - to see which database management system would yield the best geospatial query and storage performance.
The final objective was to integrate our chosen technologies with our Joint Probability Density Function (Joint PDF), Wave Number Analysis, and Automated Rotational Center Hurricane Eye Retrieval (ARCHER) tools. In this presentation, we will compare the enabling technologies we tested and discuss which ones we selected for integration into the TCIS' data analysis tool architecture. We will also show how these techniques have been automated to provide access to NRT data through our analysis tools.
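At its simplest, the geospatial discovery step these database benchmarks accelerate reduces to footprint/region overlap tests. A pure-Python sketch using axis-aligned bounding boxes; real swath footprints are polygons, so this is a deliberate simplification:

```python
def bbox_intersects(a, b):
    """Overlap test for axis-aligned (min_lon, min_lat, max_lon, max_lat)
    boxes - the primitive a geospatial index must answer quickly."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def find_swaths(footprints, region):
    """Linear scan standing in for an indexed geospatial query."""
    return [name for name, box in footprints.items()
            if bbox_intersects(box, region)]

footprints = {"swath_1": (0, 0, 10, 10), "swath_2": (20, 20, 30, 30)}
# find_swaths(footprints, (5, 5, 15, 15)) -> ["swath_1"]
```

A spatial index (R-tree, geohash, or the database-native equivalents being compared above) replaces the linear scan so that queries stay sub-second as the footprint table grows.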
Some constructions on total labelling of m triangles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voon, Chen Huey, E-mail: chenhv@utar.edu.my; Hui, Liew How, E-mail: liewhh@utar.edu.my; How, Yim Kheng, E-mail: tidusyimhome@hotmail.com
2016-06-02
Let mK_3 = (V_m, E_m) be a finite disconnected graph consisting of m disjoint triangles K_3, where V_m is the set of vertices, E_m is the set of edges, and both V_m and E_m have the same size, 3m. A total labelling of mK_3 is a function f that maps the elements of V_m and E_m to positive integer values, i.e. f : V_m ∪ E_m → {1, 2, 3, ...}. Let c be a positive integer. A graph of m triangles is said to have a c-Erdősian triangle labelling if it admits a total labelling f : V_m ∪ E_m → {c, c + 1, ..., c + 6m − 1} such that f(x) + f(y) = f(xy) for any x, y ∈ V_m joined by an edge xy ∈ E_m. A straightforward way to find all c-Erdősian triangle labellings is exhaustive search. However, exhaustive search is only able to find c-Erdősian triangle labellings for m ≤ 5 due to combinatorial explosion. By studying the constant sum of vertex labels, we propose a strong permutation approach, which allows us to generate certain classes of c-Erdősian triangle labellings up to m = 8.
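For a single triangle (m = 1) the exhaustive search is tiny and easy to sketch. The code below is an illustrative implementation of the definition, not the authors' program; it exploits the fact that the three edge labels must be exactly the values left over after choosing the vertex labels:

```python
from itertools import combinations

def erdosian_triangle_labellings(c):
    """Exhaustive search for c-Erdosian labellings of one triangle (m = 1):
    assign the six values {c, ..., c + 5} to 3 vertices and 3 edges so
    that each edge label is the sum of its endpoint labels. The edge
    labels must be exactly the three values not used on vertices."""
    values = set(range(c, c + 6))
    found = []
    for verts in combinations(sorted(values), 3):
        x, y, z = verts
        edge_sums = {x + y, x + z, y + z}
        if edge_sums == values - set(verts):
            found.append((verts, tuple(sorted(edge_sums))))
    return found

# erdosian_triangle_labellings(1) -> [((1, 2, 4), (3, 5, 6))]
```

Note the constant-sum structure the abstract mentions: the six labels sum to 6c + 15, and since each edge label double-counts its endpoints, the vertex labels must sum to exactly (6c + 15) / 3, which is what prunes the search so sharply.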
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granville, DA; Sawakuchi, GO
2014-08-15
In this work, we demonstrate inconsistencies in commonly used Monte Carlo methods of scoring linear energy transfer (LET) in proton therapy beams. In particle therapy beams, the LET is an important parameter because the relative biological effectiveness (RBE) depends on it. LET is often determined using Monte Carlo techniques. We used a realistic Monte Carlo model of a proton therapy nozzle to score proton LET in spread-out Bragg peak (SOBP) depth-dose distributions. We used three different scoring and calculation techniques to determine average LET at varying depths within a 140 MeV beam with a 4 cm SOBP and a 250 MeV beam with a 10 cm SOBP. These techniques included fluence-weighted (Φ-LET) and dose-weighted (D-LET) average LET calculations from: 1) scored energy spectra converted to LET spectra through a lookup table, 2) directly scored LET spectra, and 3) accumulated LET scored 'on-the-fly' during simulations. All protons (primary and secondary) were included in the scoring. Φ-LET was found to be less sensitive to changes in scoring technique than D-LET. In addition, the spectral scoring methods were sensitive to low-energy (high-LET) cutoff values in the averaging. Using cutoff parameters chosen carefully for consistency between techniques, we found variations in Φ-LET values of up to 1.6% and variations in D-LET values of up to 11.2% for the same irradiation conditions, depending on the method used to score LET. Variations were largest near the end of the SOBP, where the LET and energy spectra are broader.
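The two averages compared above have standard forms: Φ-LET weights each spectrum bin by fluence, while D-LET weights by dose, which is proportional to fluence × LET. A minimal sketch over a binned LET spectrum, with illustrative numbers rather than the paper's data:

```python
def track_averaged_let(fluence, let):
    """Fluence-weighted (track-averaged) LET over spectrum bins."""
    return sum(f * l for f, l in zip(fluence, let)) / sum(fluence)

def dose_averaged_let(fluence, let):
    """Dose-weighted average LET; each bin's dose weight is fluence * LET."""
    numerator = sum(f * l * l for f, l in zip(fluence, let))
    denominator = sum(f * l for f, l in zip(fluence, let))
    return numerator / denominator

# Two equally populated bins at 2 and 4 keV/um:
# track average = 3.0, dose average = 20/6 (dose weighting favors high LET)
```

The extra factor of LET in the D-LET weights is exactly why it is more sensitive to the high-LET tail, and hence to the low-energy cutoff choices discussed above.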
Sketching Uncertainty into Simulations.
Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E
2012-12-01
In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.
Zhao, Yongan; Wang, Xiaofeng; Jiang, Xiaoqian; Ohno-Machado, Lucila; Tang, Haixu
2015-01-01
To propose a new approach to privacy preserving data selection, which helps the data users access human genomic datasets efficiently without undermining patients' privacy. Our idea is to let each data owner publish a set of differentially-private pilot data, on which a data user can test-run arbitrary association-test algorithms, including those not known to the data owner a priori. We developed a suite of new techniques, including a pilot-data generation approach that leverages the linkage disequilibrium in the human genome to preserve both the utility of the data and the privacy of the patients, and a utility evaluation method that helps the user assess the value of the real data from its pilot version with high confidence. We evaluated our approach on real human genomic data using four popular association tests. Our study shows that the proposed approach can help data users make the right choices in most cases. Even though the pilot data cannot be directly used for scientific discovery, it provides a useful indication of which datasets are more likely to be useful to data users, who can therefore approach the appropriate data owners to gain access to the data. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.
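The paper's pilot-data generator exploits linkage disequilibrium and is considerably more involved than can be shown here; as a hedged illustration of the differential-privacy ingredient alone, the sketch below applies the standard Laplace mechanism to per-SNP allele counts (all function names and parameters are mine, not the authors'):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample a Laplace(0, scale) variate by inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_counts(counts, epsilon, sensitivity=1.0, seed=0):
    """Epsilon-differentially-private release of allele counts via the
    Laplace mechanism; adding or removing one participant changes each
    count by at most `sensitivity`, so noise scale = sensitivity/epsilon."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [c + laplace_noise(scale, rng) for c in counts]

pilot = privatize_counts([120, 45, 200], epsilon=1.0)
```

A data user could test-run association statistics against such noisy counts; as the abstract notes, the pilot data is only an indicator of utility, not a substitute for the real data.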
2013-01-01
Background The Internet’s potential impact on suicide is of major public health interest as easy online access to pro-suicide information or specific suicide methods may increase suicide risk among vulnerable Internet users. Little is known, however, about users’ actual searching and browsing behaviors of online suicide-related information. Objective To investigate what webpages people actually clicked on after searching with suicide-related queries on a search engine and to examine what queries people used to get access to pro-suicide websites. Methods A retrospective observational study was done. We used a web search dataset released by America Online (AOL). The dataset was randomly sampled from all AOL subscribers’ web queries between March and May 2006 and generated by 657,000 service subscribers. Results We found 5526 search queries (0.026%, 5526/21,000,000) that included the keyword "suicide". The 5526 search queries included 1586 different search terms and were generated by 1625 unique subscribers (0.25%, 1625/657,000). Of these queries, 61.38% (3392/5526) were followed by users clicking on a search result. Of these 3392 queries, 1344 (39.62%) webpages were clicked on by 930 unique users but only 1314 of those webpages were accessible during the study period. Each clicked-through webpage was classified into 11 categories. The categories of the most visited webpages were: entertainment (30.13%; 396/1314), scientific information (18.31%; 240/1314), and community resources (14.53%; 191/1314). Among the 1314 accessed webpages, we could identify only two pro-suicide websites. We found that the search terms used to access these sites included “commiting suicide with a gas oven”, “hairless goat”, “pictures of murder by strangulation”, and “photo of a severe burn”. A limitation of our study is that the database may be dated and confined to mainly English webpages. 
Conclusions Searching or browsing suicide-related or pro-suicide webpages was uncommon, although a small group of users did access websites that contain detailed suicide method information. PMID:23305632
Using the TSAR electromagnetic modeling system
NASA Astrophysics Data System (ADS)
Pennock, S. T.; Laguna, G. W.
1993-09-01
A new user, upon receipt of the TSAR EM modeling system, may be overwhelmed by the number of software packages to learn and the number of manuals associated with those packages. This document describes the creation of a simple TSAR model, beginning with an MGED solid and continuing the process through final results from TSAR. It is not intended to be a complete description of all the parts of the TSAR package. Rather, it is intended simply to touch on all the steps in the modeling process and to take a new user through the system from start to finish. There are six basic parts to the TSAR package. The first, MGED, is part of the BRL-CAD package and is used to create a solid model. The second part, ANASTASIA, is the program used to sample the solid model and create a finite-difference mesh. The third program, IMAGE, lets the user view the mesh itself and verify its accuracy. If everything about the mesh is correct, the process continues to the fourth step, SETUP-TSAR, which creates the parameter files for compiling TSAR and the input file for running a particular simulation. The fifth step is actually running TSAR, the field modeling program. Finally, the output from TSAR is placed into SIG, B2RAS or another program for post-processing and plotting. Each of these steps will be described below. The best way to learn to use the TSAR software is to actually create and run a simple test problem. As an example of how to use the TSAR package, let's create a sphere with a rectangular internal cavity, with conical and cylindrical penetrations connecting the outside to the inside, and find the electric field inside the cavity when the object is exposed to a Gaussian plane wave. We will begin with the solid modeling software, MGED, a part of the BRL-CAD modeling release.
Chen, Chou-Cheng; Ho, Chung-Liang
2014-01-01
While a huge amount of information about biological literature can be obtained by searching the PubMed database, reading through all the titles and abstracts resulting from such a search for useful information is inefficient. Text mining makes it possible to increase this efficiency. Some websites use text mining to gather information from the PubMed database; however, they are database-oriented, using pre-defined search keywords while lacking a query interface for user-defined search inputs. We present the PubMed Abstract Reading Helper (PubstractHelper) website which combines text mining and reading assistance for an efficient PubMed search. PubstractHelper can accept a maximum of ten groups of keywords, with each group containing up to ten keywords. The principle behind the text-mining function of PubstractHelper is that keywords contained in the same sentence are likely to be related. PubstractHelper highlights sentences with co-occurring keywords in different colors. The user can download the PMID and the abstracts with color markings to be reviewed later. The PubstractHelper website can help users to identify relevant publications based on the presence of related keywords, which should be a handy tool for their research. http://bio.yungyun.com.tw/ATM/PubstractHelper.aspx and http://holab.med.ncku.edu.tw/ATM/PubstractHelper.aspx.
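The co-occurrence principle behind the highlighting can be sketched in a few lines. This is a simplified illustration of the idea, not PubstractHelper's actual implementation; sentence splitting and keyword matching are reduced to regex and substring checks:

```python
import re

def cooccurring_sentences(abstract, keyword_groups):
    """Return sentences containing keywords from at least two different
    groups -- the heuristic that co-occurring keywords are likely related."""
    sentences = re.split(r'(?<=[.!?])\s+', abstract)
    hits = []
    for sent in sentences:
        low = sent.lower()
        groups_hit = sum(
            any(kw.lower() in low for kw in group)
            for group in keyword_groups
        )
        if groups_hit >= 2:
            hits.append(sent)
    return hits

example = cooccurring_sentences(
    "LIN28B binds pre-let-7. Dicer cleaves many substrates.",
    [["LIN28B"], ["let-7"]],
)
```

Only the first sentence contains keywords from both groups, so only it would be highlighted for the reader.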
Stepwise assembly of multiple Lin28 proteins on the terminal loop of let-7 miRNA precursors
Desjardins, Alexandre; Bouvette, Jonathan; Legault, Pascale
2014-01-01
Lin28 inhibits the biogenesis of let-7 miRNAs through direct interactions with let-7 precursors. Previous studies have described seemingly inconsistent Lin28 binding sites on pre-let-7 RNAs. Here, we reconcile these data by examining the binding mechanism of Lin28 to the terminal loop of pre-let-7g (TL-let-7g) using biochemical and biophysical methods. First, we investigate Lin28 binding to TL-let-7g variants and short RNA fragments and identify three independent binding sites for Lin28 on TL-let-7g. We then determine that Lin28 assembles in a stepwise manner on TL-let-7g to form a stable 1:3 complex. We show that the cold-shock domain (CSD) of Lin28 is responsible for remodelling the terminal loop of TL-let-7g, whereas the NCp7-like domain facilitates the initial binding of Lin28 to TL-let-7g. This stable binding of multiple Lin28 molecules to the terminal loop of pre-let-7g extends to other precursors of the let-7 family, but not to other pre-miRNAs tested. We propose a model for stepwise assembly of the 1:1, 1:2 and 1:3 pre-let-7g/Lin28 complexes. Stepwise multimerization of Lin28 on pre-let-7 is required for maximum inhibition of Dicer cleavage for at least one member of the let-7 family and may be important for orchestrating the activity of the several factors that regulate let-7 biogenesis. PMID:24452802
Mercury Toolset for Spatiotemporal Metadata
NASA Technical Reports Server (NTRS)
Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James
2010-01-01
Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
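Mercury's harvest-then-index pattern (distributed metadata sources, one centralized searchable index) can be sketched with an in-memory inverted index. This is a toy illustration of the architecture only; the real toolset uses harvesting protocols and a production search index, and all names below are mine:

```python
from collections import defaultdict

class MetadataIndex:
    """Toy centralized index over metadata harvested from many providers."""

    def __init__(self):
        self.records = {}                 # (provider, id) -> record
        self.inverted = defaultdict(set)  # term -> set of record keys

    def harvest(self, provider, records):
        # Periodic re-harvesting is idempotent: records are keyed by
        # provider + id, so a refreshed record simply replaces the old one.
        for rec in records:
            key = (provider, rec["id"])
            self.records[key] = rec
            for term in rec["title"].lower().split():
                self.inverted[term].add(key)

    def search(self, term):
        # Fast lookups against the central index; providers keep the data.
        return [self.records[k] for k in sorted(self.inverted[term.lower()])]

idx = MetadataIndex()
idx.harvest("ornl", [{"id": "1", "title": "Soil moisture grids"}])
idx.harvest("nasa", [{"id": "7", "title": "Ocean moisture flux"}])
results = idx.search("moisture")
```

The design choice mirrors the abstract: searches never touch the providers' servers, which is what makes results fast while providers retain ownership of the data itself.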
Mercury Toolset for Spatiotemporal Metadata
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris
2010-06-01
Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
CITE NLM: Natural-Language Searching in an Online Catalog.
ERIC Educational Resources Information Center
Doszkocs, Tamas E.
1983-01-01
The National Library of Medicine's Current Information Transfer in English public access online catalog offers unique subject search capabilities--natural-language query input, automatic medical subject headings display, closest match search strategy, ranked document output, dynamic end user feedback for search refinement. References, description…
Peeling the Onion: Okapi System Architecture and Software Design Issues.
ERIC Educational Resources Information Center
Jones, S.; And Others
1997-01-01
Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)
The rise and fall of the medical mediated searcher
Atlas, Michel C.
2000-01-01
The relationship between the development of mediated online literature searching and the recruitment of medical librarians to fill positions as online searchers was investigated. The history of database searching by medical librarians was outlined and a content analysis of thirty-five years of job advertisements in MLA News from 1961 through 1996 was summarized. Advertisements for online searchers were examined to test the hypothesis that the growth of mediated online searching was reflected in the recruitment of librarians to fill positions as mediated online searchers in medical libraries. The advent of end-user searching was also traced to determine how this trend affected the demand for mediated online searching and job availability of online searchers. Job advertisements were analyzed to determine what skills were in demand as end-user searching replaced mediated online searching as the norm in medical libraries. Finally, the trend away from mediated online searching to support of other library services was placed in the context of new roles for medical librarians. PMID:10658961
Scripting for Collaborative Search Computer-Supported Classroom Activities
ERIC Educational Resources Information Center
Verdugo, Renato; Barros, Leonardo; Albornoz, Daniela; Nussbaum, Miguel; McFarlane, Angela
2014-01-01
Searching online is one of the most powerful resources today's students have for accessing information. Searching in groups is a daily practice across multiple contexts; however, the tools we use for searching online do not enable collaborative practices and traditional search models consider a single user navigating online in solitary. This paper…
MetaSpider: Meta-Searching and Categorization on the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel
2001-01-01
Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…
Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar
2016-07-25
Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is related to the representation, storage, and organization of information items, as well as to access. One of the main problems in IR is determining which documents are relevant to the user's needs and which are not. Under the current regime, users cannot construct queries precisely enough to retrieve particular pieces of data from large reserves of data, and basic information retrieval systems produce low-quality search results. In our proposed system, we present a new technique to refine information retrieval searches to better represent the user's information need, enhancing retrieval performance by using different query expansion techniques and applying linear combinations between them, where each combination is taken linearly between two expansion results at a time. Query expansions expand the search query, for example, by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. Retrieval performance is measured by some variants of MAP (Mean Average Precision); according to our experimental results, the combination of the best query expansion results enhances the retrieved documents and outperforms our baseline by 21.06 %, and even outperforms a previous study by 7.12 %. We propose several query expansion techniques and their (linear) combinations to make user queries more cognizable to search engines and to produce higher-quality search results.
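The linear combination of two expansion results can be sketched as score interpolation over the two ranked lists. This is a generic illustration of the technique named in the abstract, not the authors' code; the weight `alpha` and the toy scores are mine:

```python
def combine_rankings(scores_a, scores_b, alpha=0.5):
    """Linearly interpolate two retrieval score dictionaries:
    s(d) = alpha * s_a(d) + (1 - alpha) * s_b(d).
    Documents missing from one list contribute a score of 0 there."""
    docs = set(scores_a) | set(scores_b)
    combined = {
        d: alpha * scores_a.get(d, 0.0) + (1 - alpha) * scores_b.get(d, 0.0)
        for d in docs
    }
    return sorted(combined, key=combined.get, reverse=True)

# e.g. synonym-based expansion vs. term-reweighting expansion
ranked = combine_rankings({"d1": 0.9, "d2": 0.4}, {"d2": 0.8, "d3": 0.5})
```

Here d2, which both expansions agree is relevant, rises to the top even though neither list ranked it first; that complementarity is the usual motivation for combining expansion methods pairwise.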
Wittek, Peter; Liu, Ying-Hsang; Darányi, Sándor; Gedeon, Tom; Lim, Ik Soo
2016-01-01
Information foraging connects optimal foraging theory in ecology with how humans search for information. The theory suggests that, following an information scent, the information seeker must optimize the tradeoff between exploration by repeated steps in the search space vs. exploitation, using the resources encountered. We conjecture that this tradeoff characterizes how a user deals with uncertainty and its two aspects, risk and ambiguity in economic theory. Risk is related to the perceived quality of the actually visited patch of information, and can be reduced by exploiting and understanding the patch to a better extent. Ambiguity, on the other hand, is the opportunity cost of having higher quality patches elsewhere in the search space. The aforementioned tradeoff depends on many attributes, including traits of the user: at the two extreme ends of the spectrum, analytic and wholistic searchers employ entirely different strategies. The former type focuses on exploitation first, interspersed with bouts of exploration, whereas the latter type prefers to explore the search space first and consume later. Our findings from an eye-tracking study of experts' interactions with novel search interfaces in the biomedical domain suggest that user traits of cognitive styles and perceived search task difficulty are significantly correlated with eye gaze and search behavior. We also demonstrate that perceived risk shifts the balance between exploration and exploitation in either type of users, tilting it against vs. in favor of ambiguity minimization. Since the pattern of behavior in information foraging is quintessentially sequential, risk and ambiguity minimization cannot happen simultaneously, leading to a fundamental limit on how good such a tradeoff can be. This in turn connects information seeking with the emergent field of quantum decision theory.
Assessing the User Experience of E-Books in Academic Libraries
ERIC Educational Resources Information Center
Zhang, Tao; Niu, Xi; Promann, Marlen
2017-01-01
We report findings from an assessment of e-book user experience (search and information seeking) from usage data and user tests. The usage data showed that most reading sessions were brief and focused on certain pages, suggesting that users mainly use e-books to find specific information. The user tests found that participants tended to use…
Information Discovery and Retrieval Tools
2004-12-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Information Discovery and Retrieval Tools
2003-04-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
ERIC Educational Resources Information Center
Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.
The teacher directed problem solving activities package contains 17 units: Future Community Design, Let's Build an Elevator, Let's Construct a Catapult, Let's Design a Recreational Game, Let's Make a Hand Fishing Reel, Let's Make a Wall Hanging, Let's Make a Yo-Yo, Marooned in the Past, Metrication, Mousetrap Vehicles, The Multi System…
Personalizing Information Retrieval Using Task Features, Topic Knowledge, and Task Products
ERIC Educational Resources Information Center
Liu, Jingjing
2010-01-01
Personalization of information retrieval tailors search towards individual users to meet their particular information needs by taking into account information about users and their contexts, often through implicit sources of evidence such as user behaviors and contextual factors. The current study looks particularly at users' dwelling behavior,…
Support Services for Remote Users of Online Public Access Catalogs.
ERIC Educational Resources Information Center
Kalin, Sally W.
1991-01-01
Discusses the needs of remote users of online public access catalogs (OPACs). User expectations are discussed; problems encountered by remote-access users are examined, including technical problems and searching problems; support services are described, including instruction, print guides, and online help; and differences from the needs of…
The Internet as a Source of Academic Research Information: Findings of Two Pilot Studies.
ERIC Educational Resources Information Center
Kibirige, Harry M.; DePalo, Lisa
2000-01-01
Discussion of information available on the Internet focuses on two pilot studies that investigated how academic users perceive search engines and subject-oriented databases as sources of topical information. Highlights include information seeking behavior of academic users; undergraduate users; graduate users; faculty; and implications for…
Web information retrieval based on ontology
NASA Astrophysics Data System (ADS)
Zhang, Jian
2013-03-01
The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval systems is that they typically retrieve information without an explicitly defined domain of interest to the users, so that a great deal of irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
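A common building block for ontology-based retrieval of the kind described above is a concept-similarity measure over the ontology graph. As a hedged sketch (the mini-ontology and function names are mine, and this is one classic measure among many, not necessarily the paper's), path-length similarity can be computed with a breadth-first search:

```python
from collections import deque

def shortest_path_len(graph, a, b):
    """BFS shortest-path length between two concepts in an
    undirected ontology graph; None if unreachable."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt == b:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def path_similarity(graph, a, b):
    """Classic path measure: sim = 1 / (1 + shortest path length)."""
    d = shortest_path_len(graph, a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

ontology = {
    "vehicle": ["car", "boat"],
    "car": ["vehicle", "sedan"],
    "sedan": ["car"],
    "boat": ["vehicle"],
}
sim = path_similarity(ontology, "sedan", "boat")
```

Scores like this let a retrieval system rank a document about "sedans" as partially relevant to a query about "cars" even when the keywords never co-occur, which is exactly the gap in keyword-and-Boolean retrieval that the abstract identifies.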
The Role of Metadata Standards in EOSDIS Search and Retrieval Applications
NASA Technical Reports Server (NTRS)
Pfister, Robin
1999-01-01
Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved, and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.
Flexible cue combination in the guidance of attention in visual search
Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.
2014-01-01
Hodsoll and Humphreys (2001) have assessed the relative contributions of stimulus-driven and user-driven knowledge on linearly- and nonlinearly-separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly- or nonlinearly-separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly- or nonlinearly-separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and user-driven fashion. PMID:25463553
Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki
2018-05-22
To evaluate the biological effects of proton beams as part of the daily clinical routine, fast and accurate calculation of dose-averaged linear energy transfer (LET_d) is required. In this study, we have developed an analytical LET_d calculation method based on the pencil-beam algorithm (PBA), considering the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately for each individual dose kernel based on the PBA. For the dose kernel, we employed a triple Gaussian model which consists of the primary component (protons that undergo multiple Coulomb scattering) and the halo component (protons that undergo inelastic, nonelastic and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LET_d value for the lateral distribution of a pencil beam, the actual LET_d increases away from the beam axis, because there are more scattered, and therefore lower energy, protons with higher stopping powers. To reflect this LET_d behavior, we have assumed that the LETs of the primary and halo components can take different values (LET_p and LET_halo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that the PBA-dLET reproduced the MCS-generated LET_d distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LET_d distributions and mean LET_d (LET_d,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the present method.
In the homogeneous phantom, the LET_d profiles obtained by the dual-LET kernels agree well with the MCS results except for the low-dose region in the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometry, the LET_d profiles calculated with the developed method reproduce MCS with similar accuracy as in the homogeneous phantom. The maximum differences in LET_d,mean for each structure between the PBA-dLET and the MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions, respectively. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of the LET_d was largely improved over the single-LET-kernel model, especially at the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
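The dual-LET-kernel idea can be sketched numerically: each lateral position mixes the primary and halo dose contributions, so the dose-weighted LET follows from their ratio. The Gaussian widths, weights, and LET values below are purely illustrative placeholders, not values from the paper's look-up table (which also uses two Gaussians for the halo rather than the single one used here):

```python
import math

def gauss(r, sigma):
    """Radially symmetric 2-D Gaussian lateral dose profile."""
    return math.exp(-r * r / (2.0 * sigma * sigma)) / (2.0 * math.pi * sigma * sigma)

def lateral_letd(r, let_p, let_halo, w_halo=0.1, sigma_p=0.4, sigma_h=1.5):
    """Dose-weighted LET at off-axis distance r (cm) from a pencil beam:
    LET_d(r) = (D_p * LET_p + D_halo * LET_halo) / (D_p + D_halo).
    The halo Gaussian is broader, so its higher LET dominates off-axis."""
    d_p = (1.0 - w_halo) * gauss(r, sigma_p)
    d_h = w_halo * gauss(r, sigma_h)
    return (d_p * let_p + d_h * let_halo) / (d_p + d_h)

on_axis = lateral_letd(0.0, let_p=2.0, let_halo=6.0)
off_axis = lateral_letd(2.0, let_p=2.0, let_halo=6.0)
```

On the beam axis the primary component dominates and LET_d sits near LET_p, while far off-axis only the halo survives and LET_d approaches LET_halo, reproducing the off-axis LET enhancement that a constant single-LET kernel misses.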
Let-7 miRNA Precursors Co-express with LIN28B in Cervical Cells.
Zamora-Contreras, Aida Margarita; Alvarez-Salas, Luis Marat
2018-01-01
The let-7 microRNAs (miRNAs) are frequently dysregulated in carcinogenic processes, including cervical cancer. LIN28 proteins regulate let-7 biogenesis by binding to conserved sequences within the pre-miRNA structure. Nevertheless, recent research has shown that some let-7 miRNAs may escape LIN28 regulation. The objective was to correlate pre-let-7 miRNA and LIN28B levels in cervical cell lines with different malignancy and HPV content. Pre-let-7 levels were determined by RT-qPCR. LIN28B and other let-7 targets were analyzed by immunoblot. In silico tools were used to correlate let-7 and LIN28B expression and to analyze pre-let-7 sequences and structures. Lin28B protein was detected in all tested cell lines, although it was more expressed in tumor cell lines. High levels of pre-let-7c/f-1 and pre-miR-98 were present in almost all cell lines regardless of malignancy and LIN28B expression. Pre-let-7g/i were mainly expressed in tumor cell lines, pre-let-7e and pre-let-7a-3 were absent in all cell lines, and pre-let-7a-2 showed indistinct expression. LIN28B showed positive correlation with pre-let-7i/g/f-1 and pre-miR-98 in tumor cell lines, suggesting escape from regulation. Sequence alignment and analysis of pre-let-7 miRNAs showed distinctive structural features within the preE region that may influence the ideal pre-let-7 structuring for LIN28B interaction. Short preE-stems were present in pre-let-7 miRNAs that may escape LIN28B regulation, but long preE-stems were mostly associated with high-level pre-let-7 miRNAs. The observed differences in pre-let-7 levels in cervical cell lines may be the result of alternative preE structuring affecting interaction with LIN28B, thus resulting in differential let-7 regulation. Copyright© Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
Many Libraries Have Gone to Federated Searching to Win Users Back from Google. Is It Working?
ERIC Educational Resources Information Center
King, Douglas
2008-01-01
In the last issue, this journal asked a question on many librarians' minds, and it was pleased with the depth and variety of responses. As suggested by this journal editorial board member Oliver Pesch, readers were asked, "Many libraries have gone to federated searching to win users back from Google. Is it working?" Respondents approached the…
Introducing Products to DoD Using Specifications and Standards
2011-08-18
to utilize the Product Introduction Tool. Search ~Favorites .S » Links ~Customize Links ~ EDS-NMCI ~Free Hotmail Product Introduction Process User...the Product Introduction Tool. Search ~Favorites .S » Links ~Customize Links ~ EDS-NMCI ~Free Hotmail Product Introduction Process User Pol icy...Links i1 EDS-NMCI ~ Free Hotmail i] I] Go ldentitify Categories/Subcategories Identify the category/subcategory that most closely covers your
The Use of OPAC in a Large Academic Library: A Transactional Log Analysis Study of Subject Searching
ERIC Educational Resources Information Center
Villen-Rueda, Luis; Senso, Jose A.; de Moya-Anegon, Felix
2007-01-01
The analysis of user searches in catalogs has been the topic of research for over four decades, involving numerous studies and diverse methodologies. The present study looks at how different types of users effect queries in the catalog of a university library. For this purpose, we analyzed log files to determine which was the most frequent type of…
Grover's unstructured search by using a transverse field
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor; Wang, Zhihui
2017-04-01
We design a circuit-based quantum algorithm to search for a needle in a haystack, giving the same quadratic speedup achieved by Grover's original algorithm. In our circuit-based algorithm, the problem Hamiltonian (oracle) and a transverse field (instead of Grover's diffusion operator) are applied to the system alternatively. We construct a periodic time sequence such that the resultant unitary drives a closed transition between two states, which have high degrees of overlap with the initial state (even superposition of all states) and the target state, respectively. Let N =2n be the size of the search space. The transition rate in our algorithm is of order Θ(1 /√{ N}) , and the overlaps are of order Θ(1) , yielding a nearly optimal query complexity of T =√{ N}(π / 2√{ 2}) . Our algorithm is inspired by a class of algorithms proposed by Farhi et al., namely the Quantum Approximate Optimization Algorithm (QAOA); our method offers a route to optimizing the parameters in QAOA by restricting them to be periodic in time.
GeNemo: a search engine for web-based functional genomic data.
Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng
2016-07-08
A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; McDonald, Steven; Shaulis, Taylor J.; Blanco-Cuaresma, Sergi; Shapurian, Golnaz; Hostetler, Timothy W.; Templeton, Matthew R.; Lockhart, Kelly E.
2018-01-01
The ADS Team has been working on a new system architecture and user interface named “ADS Bumblebee” since 2015. The new system presents many advantages over the traditional ADS interface and search engine (“ADS Classic”). A new, state of the art search engine features a number of new capabilities such as full-text search, advanced citation queries, filtering of results and scalable analytics for any search results. Its services are built on a cloud computing platform which can be easily scaled to match user demand. The Bumblebee user interface is a rich javascript application which leverages the features of the search engine and integrates a number of additional visualizations such as co-author and co-citation networks which provide a hierarchical view of research groups and research topics, respectively. Displays of paper analytics provide views of the basic article metrics (citations, reads, and age). All visualizations are interactive and provide ways to further refine search results. This new search system, which has been in beta for the past three years, has now matured to the point that it provides feature and content parity with ADS Classic, and has become the recommended way to access ADS content and services. Following a successful transition to Bumblebee, the use of ADS Classic will be discouraged starting in 2018 and phased out in 2019. You can access our new interface at https://ui.adsabs.harvard.edu
let-7 Contributes to Diabetic Retinopathy but Represses Pathological Ocular Angiogenesis
Zhou, Qinbo; Frost, Robert J. A.; Anderson, Chastain; Zhao, Fangkun; Ma, Jing; Yu, Bo
2017-01-01
ABSTRACT The in vivo function of microRNAs (miRs) in diabetic retinopathy (DR) and age-related macular degeneration (AMD) remains unclear. We report here that let-7 family members are expressed in retinal and choroidal endothelial cells (ECs). In ECs, overexpression of let-7 by adenovirus represses EC proliferation, migration, and networking in vitro, whereas inhibition of the let-7 family with a locked nucleic acid (LNA)–anti-miR has the opposite effect. Mechanistically, silencing of the let-7 target HMGA2 gene mimics the phenotype of let-7 overexpression in ECs. let-7 transgenic (let-7-Tg) mice show features of nonproliferative DR, including tortuous retinal vessels and defective pericyte coverage. However, these mice develop significantly less choroidal neovascularization (CNV) compared to wild-type controls after laser injury. Consistently, silencing of let-7 in the eye increased laser-induced CNV in wild-type mice. Together, our data establish a causative role of let-7 in nonproliferative diabetic retinopathy and a repressive function of let-7 in pathological angiogenesis, suggesting distinct implications of let-7 in the pathogenesis of DR and AMD. PMID:28584193
Gibbons, Chris J.; Bee, Penny E.; Walker, Lauren; Price, Owen; Lovell, Karina
2014-01-01
Background: Increasing service user and carer involvement in mental health care planning is a key healthcare priority but one that is difficult to achieve in practice. To better understand and measure user and carer involvement, it is crucial to have measurement questionnaires that are both psychometrically robust and acceptable to the end user. Methods: We conducted a systematic review using the terms “care plan$,” “mental health,” “user perspective$,” and “user participation” and their linguistic variants as search terms. Databases were searched from inception to November 2012, with an update search at the end of September 2014. We included any articles that described the development, validation or use of a user and/or carer-reported outcome measures of involvement in mental health care planning. We assessed the psychometric quality of each instrument using the “Evaluating the Measurement of Patient-Reported Outcomes” (EMPRO) criteria. Acceptability of each instrument was assessed using novel criteria developed in consultation with a mental health service user and carer consultation group. Results: We identified eleven papers describing the use, development, and/or validation of nine user/carer-reported outcome measures. Psychometric properties were sparsely reported and the questionnaires met few service user/carer-nominated attributes for acceptability. Where reported, basic psychometric statistics were of good quality, indicating that some measures may perform well if subjected to more rigorous psychometric tests. The majority were deemed to be too long for use in practice. Discussion: Multiple instruments are available to measure user/carer involvement in mental health care planning but are either of poor quality or poorly described. Existing measures cannot be considered psychometrically robust by modern standards, and cannot currently be recommended for use. 
Our review has identified an important knowledge gap, and an urgent need to develop new user and carer measures of care-planning involvement. PMID:25566099
An anti-let-7 sponge decoys and decays endogenous let-7 functions
Yang, Xiangling; Rutnam, Zina Jeyapalan; Jiao, Chunwei; Wei, Duo; Xie, Yizhen; Du, Jun; Zhong, Ling; Yang, Burton B.
2012-01-01
The let-7 family contains 12 members, which share identical seed regions, suggesting that they may target the same mRNAs. It is essential to develop a means that can regulate the functions of all members. Using a DNA synthesis technique, we have generated an anti-let-7 sponge aiming to modulate the function of all members. We found that products of the anti-let-7 construct could bind and inactivate all members of the let-7 family, producing decoy and decay effects. To test the role of the anti-let-7 sponge, we stably expressed the anti-let-7 construct in two types of cells, the breast carcinoma cells MT-1 and the oldest and most commonly used human cervical cancer cell line, HeLa cells. We found that expression of anti-let-7 increased cell survival, invasion and adhesion, which corroborate with known functions of let-7 family members. We further identified a novel target site across all species of the let-7 family in hyaluronan synthase 2 (HAS2). HAS2 overexpression produced similar effects as the anti-let-7 sponge. Silencing HAS2 expression by siRNAs produced opposite effects to anti-let-7 on cell survival and invasion. The ability of anti-let-7 to regulate multiple members of the let-7 family allows us to observe their multiple functions using a single reagent. This approach can be applied to other family members with conserved sequences. PMID:22871741
MEDLINE SDI services: how do they compare?*
Shultz, Mary; De Groote, Sandra L.
2003-01-01
Introduction: Selective dissemination of information (SDI) services regularly alert users to new information on their chosen topics. This type of service can increase a user's ability to keep current and may have a positive impact on efficiency and productivity. Currently, there are many venues available where users can establish, store, and automatically run MEDLINE searches. Purpose: To describe, evaluate, and compare SDI services for MEDLINE. Resources: The following SDI services were selected for this study: PubMed Cubby, BioMail, JADE, PubCrawler, OVID, and ScienceDirect. Methodology: Identical searches were established in four of the six selected SDI services and were run on a weekly basis over a period of two months. Eight search strategies were used in each system to test performance under various search conditions. The PubMed Cubby system was used as the baseline against which the other systems were compared. Other aspects were evaluated in all six services and include ease of use, frequency of results, ability to use MeSH, ability to access and edit existing search strategies, and ability to download to a bibliographic management program. Results: Not all MEDLINE SDI services retrieve identical results, even when identical search strategies are used. This study also showed that the services vary in terms of features and functions offered. PMID:14566377
Revamping Spacecraft Operational Intelligence with Splunk
NASA Technical Reports Server (NTRS)
Hwang, Victor
2012-01-01
So what is Splunk? Instead of giving the technical details, which you can find online, I'll tell you what it did for me. Splunk slapped everything into one place, with one uniform format, and gave me the ability to forget about all these annoying details of where it is, how to parse it, and all that. Instead, I only need to interact with Splunk to find the data I need. This sounds simple and obvious, but it's surprising what you can do once you all of your data is indexed in one place. By having your data organized, querying becomes much easier. Let's say that I want to search telemetry for a sensor_name gtemp_1 h and to return all data that is at most five minutes old. And because Splunk can hook into a real ]time stream, this data will always be up-to-date. Extending the previous example, I can now aggregate all types of data into one view based in time. In this picture, I've got transaction logs, telemetry, and downlinked files all in one page, organized by time. Even though the raw data looks completely than this, I've defined interfaces that transform it into this uniform format. This gives me a more complete picture for the question what was the spacecraft doing at this particular time? And because querying data is simple, I can start with a big block of data and whiddle it down to what I need, rather than hunting around for the individual pieces of data that I need. When we have all the data we need, we can begin widdling down the data with Splunk's Unix-like search syntax. These three examples highlights my trial-and-error attempts to find large temperature changes. I begin by showing the first 5 temperatures, only to find that they're sorted chronologically, rather than from highest temperatures to lowest temperatures. The next line shows sorting temperatures by their values, but I find that that fs not really what I want either. I want to know the delta temperatures between readings. 
Looking through Splunk's user manual, I find the delta function, which lets me dynamically generate new information to use in my query. With that extra piece of information, I can now return only the telemetry readings where the temperature changed by at least 10. One other useful feature I'll mention is that all of these queries can be run through Splunk's API. So any scripting language you can think of can plug right in and make these queries. This gives us the ability to build a lot of new tools.
Jadhav, Ashutosh; Sheth, Amit; Pathak, Jyotishman
2014-01-01
Since the early 2000’s, Internet usage for health information searching has increased significantly. Studying search queries can help us to understand users “information need” and how do they formulate search queries (“expression of information need”). Although cardiovascular diseases (CVD) affect a large percentage of the population, few studies have investigated how and what users search for CVD. We address this knowledge gap in the community by analyzing a large corpus of 10 million CVD related search queries from MayoClinic.com. Using UMLS MetaMap and UMLS semantic types/concepts, we developed a rule-based approach to categorize the queries into 14 health categories. We analyzed structural properties, types (keyword-based/Wh-questions/Yes-No questions) and linguistic structure of the queries. Our results show that the most searched health categories are ‘Diseases/Conditions’, ‘Vital-Sings’, ‘Symptoms’ and ‘Living-with’. CVD queries are longer and are predominantly keyword-based. This study extends our knowledge about online health information searching and provides useful insights for Web search engines and health websites. PMID:25954380
Design and Empirical Evaluation of Search Software for Legal Professionals on the WWW.
ERIC Educational Resources Information Center
Dempsey, Bert J.; Vreeland, Robert C.; Sumner, Robert G., Jr.; Yang, Kiduk
2000-01-01
Discussion of effective search aids for legal researchers on the World Wide Web focuses on the design and evaluation of two software systems developed to explore models for browsing and searching across a user-selected set of Web sites. Describes crawler-enhanced search engines, filters, distributed full-text searching, and natural language…
The Pricing of Information--A Search-Based Approach to Pricing an Online Search Service.
ERIC Educational Resources Information Center
Boyle, Harry F.
1982-01-01
Describes innovative pricing structure consisting of low connect time fee, print fees, and search fees, offered by Chemical Abstracts Service (CAS) ONLINE--an online searching system used to locate chemical substances. Pricing options considered by CAS, the search-based pricing approach, and users' reactions to pricing structures are noted. (EJS)
ERIC Educational Resources Information Center
Teague-Rector, Susan; Ballard, Angela; Pauley, Susan K.
2011-01-01
Creating a learnable, effective, and user-friendly library Web site hinges on providing easy access to search. Designing a search interface for academic libraries can be particularly challenging given the complexity and range of searchable library collections, such as bibliographic databases, electronic journals, and article search silos. Library…
Brunstein-Klomek, Anat; Mandel, Or; Hadas, Arie; Fennig, Silvana
2018-01-01
Background The influence of pro-anorexia (pro-ana) websites is debated, with studies indicating both negative and positive effects, as well as significant variation in the effects of different websites for those suffering from eating disorders (EDs) and the general population. Online advertising, known to induce behavioral change both online and in the physical world, has not been used so far to modify the search behavior of people seeking pro-ana content. Objective The objective of this randomized controlled trial (RCT) was to examine if online advertisements (ads) can change online search behaviors of users who are looking for online pro-ana content. Methods Using the Bing Ads system, we conducted an RCT to randomly expose the searchers for pro-ana content to 10 different ads referring people to one of the three websites: the National Eating Disorders Association, the National Institutes of Mental Health, and MyProAna. MyProAna is a pro-ana website that was found in a previous study to be associated with less pathological online behaviors than other pro-ana websites. We followed participants exposed and unexposed to the ads to explore their past and future online searches. The ads were shown 25,554 times and clicked on 217 times. Results Exposure to the ads was associated with a decrease in searches for pro-ana and self-harm content. Reductions were greatest among those referred to MyProAna (reduction of 34.0% [73/215] and 37.2% [80/215] for pro-ana and self-harm, respectively) compared with users who were referred elsewhere (reduction of 15.47% [410/2650] and 3.21% [85/2650], respectively), and with users who were not shown the ads, who increased their behaviors (increase of 57.12% [6462/11,314] and 4.07% [461/11,314], respectively). In addition, those referred to MyProAna increased their search for treatment, as did control users, who did so to a lesser extent. However, users referred elsewhere decreased their searches for this content. 
Conclusions We found that referring users interested in ED-related content to specific pro-ana communities might lessen their maladaptive online search behavior. This suggests that those who are preoccupied with EDs can be redirected to less pathological online searches through appropriate pathways. Trial Registration ClinicalTrials.gov NCT03439553; https://clinicaltrials.gov/show/NCT03439553 (Archived by WebCite at http://www.webcitation.org/6xNYnxYlw) PMID:29472176
Yom-Tov, Elad; Brunstein-Klomek, Anat; Mandel, Or; Hadas, Arie; Fennig, Silvana
2018-02-22
The influence of pro-anorexia (pro-ana) websites is debated, with studies indicating both negative and positive effects, as well as significant variation in the effects of different websites for those suffering from eating disorders (EDs) and the general population. Online advertising, known to induce behavioral change both online and in the physical world, has not been used so far to modify the search behavior of people seeking pro-ana content. The objective of this randomized controlled trial (RCT) was to examine if online advertisements (ads) can change online search behaviors of users who are looking for online pro-ana content. Using the Bing Ads system, we conducted an RCT to randomly expose the searchers for pro-ana content to 10 different ads referring people to one of the three websites: the National Eating Disorders Association, the National Institutes of Mental Health, and MyProAna. MyProAna is a pro-ana website that was found in a previous study to be associated with less pathological online behaviors than other pro-ana websites. We followed participants exposed and unexposed to the ads to explore their past and future online searches. The ads were shown 25,554 times and clicked on 217 times. Exposure to the ads was associated with a decrease in searches for pro-ana and self-harm content. Reductions were greatest among those referred to MyProAna (reduction of 34.0% [73/215] and 37.2% [80/215] for pro-ana and self-harm, respectively) compared with users who were referred elsewhere (reduction of 15.47% [410/2650] and 3.21% [85/2650], respectively), and with users who were not shown the ads, who increased their behaviors (increase of 57.12% [6462/11,314] and 4.07% [461/11,314], respectively). In addition, those referred to MyProAna increased their search for treatment, as did control users, who did so to a lesser extent. However, users referred elsewhere decreased their searches for this content. 
We found that referring users interested in ED-related content to specific pro-ana communities might lessen their maladaptive online search behavior. This suggests that those who are preoccupied with EDs can be redirected to less pathological online searches through appropriate pathways. ClinicalTrials.gov NCT03439553; https://clinicaltrials.gov/show/NCT03439553 (Archived by WebCite at http://www.webcitation.org/6xNYnxYlw). ©Elad Yom-Tov, Anat Brunstein-Klomek, Or Mandel, Arie Hadas, Silvana Fennig. Originally published in JMIR Mental Health (http://mental.jmir.org), 22.02.2018.
Online Searching in the Small College Library--Ten Years Later.
ERIC Educational Resources Information Center
Smith, Scott; Smith, Jane B.
1991-01-01
Reviews experiences with online searching at the Nazareth College library. Topics discussed include user expectations; actual and perceived search quality; the impact of laser printers; growth in online searching; increases in other reference services; the use of CD-ROM technology; and costs and pricing policies. (LRW)
THE ROLE OF SEARCHING SERVICES IN AN ACQUISITIONS PROGRAM.
ERIC Educational Resources Information Center
LUECK, ANTOINETTE L.; AND OTHERS
A USER PRESENTS HIS POINT OF VIEW ON LITERATURE SEARCHING THROUGH THE MAJOR SEARCHING SERVICES IN THE OVERALL PROGRAM OF ACQUISITIONS FOR THE ENGINEERING STAFF OF THE AIR FORCE AERO PROPULSION LABORATORY. THESE MAJOR SEARCHING SERVICES INCLUDE THE DEFENSE DOCUMENTATION CENTER (DDC), THE NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA), THE…
NASA Technical Reports Server (NTRS)
Nees, M.; Green, H. O.
1977-01-01
An IBM-developed program, STAIRS, was selected for performing a search on the BIOSIS file. The evaluation of the hardware and search systems and the strategies used are discussed. The searches are analyzed by type of end user.
ERIC Educational Resources Information Center
Rochkind, Jonathan
2007-01-01
The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine of specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…
Sueki, Hajime; Ito, Jiro
2015-01-01
Nurturing gatekeepers is an effective suicide prevention strategy. Internet-based methods to screen those at high risk of suicide have been developed in recent years but have not been used for online gatekeeping. A preliminary study was conducted to examine the feasibility and effects of online gatekeeping. Advertisements to promote e-mail psychological consultation service use among Internet users were placed on web pages identified by searches using suicide-related keywords. We replied to all emails received between July and December 2013 and analyzed their contents. A total of 139 consultation service users were analyzed. The mean age was 23.8 years (SD = 9.7), and female users accounted for 80% of the sample. Suicidal ideation was present in 74.1%, and 12.2% had a history of suicide attempts. After consultation, positive changes in mood were observed in 10.8%, 16.5% showed intentions to seek help from new supporters, and 10.1% of all 139 users actually took help-seeking actions. Online gatekeeping to prevent suicide by placing advertisements on web search pages to promote consultation service use among Internet users with suicidal ideation may be feasible.
Consultant management estimating tool : users' manual.
DOT National Transportation Integrated Search
2012-04-01
The Switchboard is the opening form displayed to users. Use : the Switchboard to access the main functions of the estimating : tool. Double-click on a box to select the desired function. From : the Switchboard a user can initiate a search for project...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Fada; Peeler, Christopher; Taleei, Reza
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET{sub t} and dose-averaged LET, LET{sub d}) using GEANT 4 for different tracking stepmore » size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET{sub t} and LET{sub d} of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET{sub t} but significant for LET{sub d}. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET{sub d} calculation results in the dose plateau region for small step limits. 
The erroneous LET{sub d} results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in GEANT 4. The incorrect LET{sub d} values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET{sub t} in the dose plateau region and LET{sub d} around the Bragg peak. For a large step limit, i.e., 500 μm, LET{sub d} is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET{sub d} and LET{sub t} becomes positive.« less
Infrared speckle interferometry and spectroscopy of Io
NASA Technical Reports Server (NTRS)
Howell, Robert R.
1991-01-01
The goal during the last year was to continue the speckle monitoring of volcanic hot spots on Io, and to begin observations of the 1991 series of mutual events between Io and Europa. The former provide a time history of the volcanic activity, while the latter give the highest spatial resolution and the best sensitivity to faint spots. A minor component of the program is lunar occultation observations of young T Tauri stars. The occultations provide milliarcsecond resolution which let us search for circumstellar material and determine which systems are multiple.
Managing Online Search Statistics with dBASE III Plus.
ERIC Educational Resources Information Center
Speer, Susan C.
1987-01-01
Describes a computer program designed to manage statistics about online searches which reports the number of searches by vendor, purpose, and librarian; calculates charges to departments and individuals; and prints monthly invoices to users with standing accounts. (CLB)
GeoSearch: A lightweight broking middleware for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; Liu, K.; Xia, J.
2012-12-01
With petabytes of geodata, thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from the massive and heterogeneous resources. Past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge due to the following reasons: 1)The entry barriers (also called "learning curves") hinder the usability of discovery services to end users. Different portals and catalogues always adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata. It is hard for end users to learn all these technical details and differences. 2)The cost for federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and also the overhead of maintaining data consistency. 3)The heterogeneous semantics issues in data discovery. Since the keyword matching is still the primary search method in many operational discovery services, the search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to solve these issues. However, integrating semantic technologies with existing service is challenging due to the expandability limitations on the service frameworks and metadata templates. 4)The capabilities to help users make final selection are inadequate. Most of the existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results. 
Furthermore, the presentation of value-added information (such as service quality and user feedback), which conveys important decision-support information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate and visualize heterogeneous geospatial resources. Specifically, 1) a lightweight discovery broker conducts distributed searches, retrieving metadata records for geospatial resources and additional information on the fly from dispersed services (portals and catalogues) and other systems; 2) a quality monitoring and evaluation broker (i.e., QoS Checker) is developed and integrated to provide quality information for geospatial web services; 3) semantic-assisted search and relevance evaluation functions are implemented by loosely interoperating with the ESIP Testbed component; and 4) sophisticated information and data visualization functionalities and tools are assembled to improve the user experience and assist resource selection.
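The on-the-fly brokering approach can be sketched in a few lines: a discovery broker fans a query out to per-catalogue adapters that normalize heterogeneous metadata into a common record, then merges the results and ranks them by measured service quality. This is a minimal illustration under stated assumptions, not GeoSearch's implementation; all field names, adapters, and quality scores below are invented.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Record:
    title: str
    source: str
    quality: float  # e.g., an availability score from a QoS checker

def normalize_csw(raw: Dict) -> Record:
    # adapter for a CSW-style catalogue response (field names are made up)
    return Record(title=raw["dc:title"], source="csw", quality=raw.get("qos", 0.5))

def normalize_opensearch(raw: Dict) -> Record:
    # adapter for an OpenSearch-style response (field names are made up)
    return Record(title=raw["name"], source="opensearch", quality=raw.get("availability", 0.5))

def broker_search(query: str, adapters: List[Callable[[str], List[Record]]]) -> List[Record]:
    # fan the query out to every federated catalogue on the fly (no harvesting),
    # merge the normalized records, and rank them by measured service quality
    results: List[Record] = []
    for adapter in adapters:
        results.extend(adapter(query))
    return sorted(results, key=lambda r: r.quality, reverse=True)

# toy adapters standing in for live catalogue endpoints
def csw_adapter(query):
    return [normalize_csw({"dc:title": f"{query} (CSW)", "qos": 0.9})]

def opensearch_adapter(query):
    return [normalize_opensearch({"name": f"{query} (OpenSearch)", "availability": 0.7})]

hits = broker_search("land surface temperature", [csw_adapter, opensearch_adapter])
```

Because the broker queries live endpoints at search time, there is no harvested copy to keep consistent, which is the design point the abstract contrasts with periodic-harvesting registries.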
Creating and Searching a Local Inventory for Data Granules in a Remote Archive
NASA Astrophysics Data System (ADS)
Cornillon, P. C.
2016-12-01
More often than not, search capabilities for network-accessible data do not exist or do not meet the requirements of the user. For large archives this can make finding data of interest tedious at best. This summer, the author encountered such a problem with the two existing archives of VIIRS L2 sea surface temperature (SST) fields obtained with the new ACSPO retrieval algorithm: one at the Jet Propulsion Laboratory's PO-DAAC and the other at NOAA's National Centers for Environmental Information (NCEI). In both cases the data were available via FTP and OPeNDAP, but there was no search capability at the PO-DAAC and the NCEI archive was incomplete. Furthermore, because it must serve a broad range of datasets and users, the beta version of the search engine at NCEI was cumbersome for the searches of interest. Although some of these problems have since been resolved (and may be described in other posters/presentations at this meeting), the solution described in this presentation lets users build a search capability for archives that lack one, and/or configure searches more to their preferences than the generic searches offered by the data provider. The solution, a Matlab script, used HTML access to the PO-DAAC web site to locate all VIIRS 10-minute granules and OPeNDAP access to acquire the bounding box for each granule from the metadata bound to the file. This task required several hours of wall time to acquire the data and to write the bounding boxes, with the associated FTP and OPeNDAP URLs, to a local file for the 110,000+ granule archive. A second Matlab script searched the local inventory, in seconds, for granules falling in a user-defined space-time window and generated an ASCII file of the associated wget commands. This file was then executed to acquire the data of interest. The wget commands can be configured to acquire entire files via FTP or a subset of each file via OPeNDAP.
Furthermore, the search capability, based on bounding boxes and rectangular regions, could easily be modified to further refine the search. Finally, the script that builds the inventory has been designed to update the local inventory incrementally, taking minutes per month rather than hours.
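The inventory-search step lends itself to a compact sketch. The original scripts were written in Matlab; the Python version below is a hypothetical reconstruction that assumes a CSV inventory of url,lon_min,lat_min,lon_max,lat_max rows and implements only the spatial (bounding-box) half of the space-time search.

```python
import csv
import io

def overlaps(bbox, window):
    # axis-aligned overlap test; boxes are (lon_min, lat_min, lon_max, lat_max)
    return not (bbox[2] < window[0] or bbox[0] > window[2] or
                bbox[3] < window[1] or bbox[1] > window[3])

def search_inventory(inventory_text, window):
    """Scan a local inventory of url,lon_min,lat_min,lon_max,lat_max rows and
    emit a wget command for every granule whose bounding box overlaps the window."""
    commands = []
    for url, *bbox in csv.reader(io.StringIO(inventory_text)):
        if overlaps(tuple(map(float, bbox)), window):
            commands.append(f"wget {url}")
    return commands

# a two-granule toy inventory (URLs and boxes are invented)
inventory = ("ftp://archive/granule_0001.nc,10,10,20,20\n"
             "ftp://archive/granule_0002.nc,100,50,110,60\n")
cmds = search_inventory(inventory, (0, 0, 30, 30))
```

Because the inventory is a small local file, this linear scan finishes in seconds even for a 110,000+ granule archive, matching the timings quoted in the abstract.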
NASA Technical Reports Server (NTRS)
Olsen, Lola; Morahan, Michael; Aleman, Alicia; Cepero, Laurel; Stevens, Tyler; Ritz, Scott; Holland, Monica
2011-01-01
The Global Change Master Directory (GCMD) provides an extensive directory of descriptive and spatial information about data sets and data-related services, which are relevant to Earth science research. The directory's data discovery components include controlled keywords, free-text searches, and map/date searches. The GCMD portal for NASA's Land Atmosphere Near-real-time Capability for EOS (LANCE) data products leverages these discovery features by providing users a direct route to NASA's Near-Real-Time (NRT) collections. This portal offers direct access to collection entries by instrument name, informing users of the availability of data. After a relevant collection entry is found through the GCMD's search components, the "Get Data" URL within the entry directs the user to the desired data. http://gcmd.nasa.gov/r/p/gcmd_lance_nrt.
Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Gur, Tamer; Cowley, Andrew; Li, Weizhong; Uludag, Mahmut; Pundir, Sangya; Cham, Jennifer A; McWilliam, Hamish; Lopez, Rodrigo
2015-07-01
The European Bioinformatics Institute (EMBL-EBI-https://www.ebi.ac.uk) provides free and unrestricted access to data across all major areas of biology and biomedicine. Searching and extracting knowledge across these domains requires a fast and scalable solution that addresses the requirements of domain experts as well as casual users. We present the EBI Search engine, referred to here as 'EBI Search', an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. API integration provides access to analytical tools, allowing users to further investigate the results of their search. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types including sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, together with relevant life science literature. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Allam, Ahmed; Schulz, Peter Johannes; Nakamoto, Kent
2014-04-02
During the past 2 decades, the Internet has evolved to become a necessity in our daily lives. The selection and sorting algorithms of search engines exert tremendous influence over the global spread of information and other communication processes. This study demonstrates the influence of the selection and sorting/ranking criteria operating in search engines on users' knowledge, beliefs, and attitudes concerning websites about vaccination. In particular, it compares the effects of search engines that deliver websites emphasizing the pro side of vaccination with those focusing on the con side, and with normal Google as a control group. We conducted 2 online experiments using manipulated search engines. A pilot study sought to verify the existence of dangerous health literacy in connection with searching and using health information on the Internet by exploring the effect of 2 manipulated search engines that yielded either pro or con vaccination sites only, with a group receiving normal Google as control. A pre-post test design was used; participants were American marketing students enrolled in a study-abroad program in Lugano, Switzerland. The second experiment manipulated the search engine by applying different ratios of con versus pro vaccination webpages displayed in the search results. Participants were recruited from Amazon's Mechanical Turk platform, where the study was published as a human intelligence task (HIT). Both experiments showed knowledge to be highest in the group offered only pro vaccination sites (Z=-2.088, P=.03; Kruskal-Wallis H test [H₅]=11.30, P=.04). That group also acknowledged the importance/benefits (Z=-2.326, P=.02; H₅=11.34, P=.04) and effectiveness (Z=-2.230, P=.03) of vaccination more, whereas groups offered antivaccination sites only showed increased concern about effects (Z=-2.582, P=.01; H₅=16.88, P=.005) and harmful health outcomes (Z=-2.200, P=.02) of vaccination.
Normal Google users perceived information quality to be positive despite a small effect on knowledge and a negative effect on their beliefs and attitudes toward vaccination and on their willingness to recommend the information (χ²₅=14.1, P=.01). More exposure to antivaccination websites lowered participants' knowledge (J=4783.5, z=-2.142, P=.03), increased their fear of side effects (J=6496, z=2.724, P=.006), and lowered their acknowledgment of benefits (J=4805, z=-2.067, P=.03). The selection and sorting/ranking criteria of search engines play a vital role in online health information seeking. Search engines delivering websites containing credible and evidence-based medical information positively impact Internet users seeking health information, whereas sites retrieved by biased search engines create some opinion change in users. These effects are apparently independent of users' site credibility and evaluation judgments. Users are affected beneficially or detrimentally but are unaware of it, suggesting they do not consciously perceive the indicators that steer them toward credible sources or away from dangerous ones. In this sense, the online health information seeker is flying blind.
He, Ji; Dai, Xinbin; Zhao, Xuechun
2007-02-09
BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. 
The PLAN web interface is platform-independent, easily configurable, readily extensible, and user-intuitive. PLAN is freely available to academic users at http://bioinfo.noble.org/plan/. The source code for local deployment is provided under a free license. Full support on system utilization, installation, configuration and customization is provided to academic users.
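Post-BLAST filtering of the kind PLAN exposes online can be illustrated with standard NCBI BLAST tabular output. This is a generic sketch, not PLAN's code; the e-value and identity thresholds are arbitrary example values.

```python
def parse_blast_tab(text):
    # parse NCBI BLAST tabular output (-outfmt 6): qseqid sseqid pident length
    # mismatch gapopen qstart qend sstart send evalue bitscore
    hits = []
    for line in text.strip().splitlines():
        f = line.split("\t")
        hits.append({"query": f[0], "subject": f[1], "pident": float(f[2]),
                     "evalue": float(f[10]), "bitscore": float(f[11])})
    return hits

def filter_hits(hits, max_evalue=1e-5, min_pident=90.0):
    # keep only confident alignments; thresholds here are illustrative defaults
    return [h for h in hits if h["evalue"] <= max_evalue and h["pident"] >= min_pident]

# two toy alignments: one strong, one weak
raw = ("q1\ts1\t98.5\t100\t1\t0\t1\t100\t1\t100\t1e-50\t190\n"
       "q1\ts2\t75.0\t80\t20\t2\t1\t80\t5\t84\t0.01\t50\n")
good = filter_hits(parse_blast_tab(raw))
```

In a PLAN-like system the parsed records would then be indexed for full-text search and grouped into user-defined favorite categories; here they are simply returned as dictionaries.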
Exploring antecedents of consumer satisfaction and repeated search behavior on e-health information.
Lee, Yun Jung; Park, Jungkun; Widdows, Richard
2009-03-01
E-health information has become an important resource for people seeking health information. Even though many studies have been conducted to examine the quality of e-health information, only a few studies have explored the effects of the information seekers' motivations on the perceived quality of e-health information. There is even less information about repeated searches for e-health information after the users' initial experience of e-health information use. Using an online survey of information seekers, 252 e-health information users' responses were collected. The research examines the relationship among motivation, perceived quality, satisfaction, and intention to repeat-search e-health information. The results identify motivations to search e-health information and confirm the relationship among motivation, perceived quality dimensions, and satisfaction and intention to repeat searches for e-health information.
NASA Astrophysics Data System (ADS)
Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.
2016-12-01
Big oceanographic data have been produced, archived and made available online, but finding the right data for scientific research and application development is still a significant challenge. A long-standing problem in data discovery is how to find the interrelationships between keywords and data, as well as the intra-relationships within each. Most previous research attempted to solve this problem by building domain-specific ontologies, either manually or through automatic machine-learning techniques. The former is costly, labor-intensive and hard to keep up to date, while the latter is prone to noise and may be difficult for humans to understand. Large-scale user behavior data represents a largely untapped, unique, and valuable source for discovering semantic relationships among domain-specific vocabulary. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, user behaviors, and existing ontologies. The objective is to improve the discovery accuracy of oceanographic data and reduce the time scientists spend discovering, downloading and reformatting data for their projects. Experiments and a search example show that the proposed search engine helps both scientists and general users search with better ranking results, recommendation, and ontology navigation.
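One simple way to mine dataset relevancy from user behavior, in the spirit of the framework above, is session co-occurrence: datasets that users access together in the same session are likely related. The sketch below is an illustrative assumption, not the authors' algorithm, and the dataset names are invented.

```python
import math
from collections import defaultdict
from itertools import combinations

def cooccurrence_similarity(sessions):
    """Score dataset-dataset relatedness from user sessions: datasets accessed
    together in many sessions receive a higher cosine-style score."""
    count = defaultdict(int)   # dataset -> number of sessions containing it
    pair = defaultdict(int)    # (a, b)  -> number of sessions containing both
    for session in sessions:
        datasets = sorted(set(session))
        for d in datasets:
            count[d] += 1
        for a, b in combinations(datasets, 2):
            pair[(a, b)] += 1
    # normalize pair counts by the geometric mean of individual counts
    return {p: c / math.sqrt(count[p[0]] * count[p[1]]) for p, c in pair.items()}

# toy download sessions (dataset short names are invented)
sessions = [["sst_viirs", "sst_modis"],
            ["sst_viirs", "sst_modis"],
            ["sst_viirs", "ocean_wind"]]
sim = cooccurrence_similarity(sessions)
```

Scores like these could feed a "users who downloaded X also used Y" recommendation, one of the behaviors the proposed search engine supports.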
NASA Astrophysics Data System (ADS)
Aleman, A.; Olsen, L. M.; Ritz, S.; Stevens, T.; Morahan, M.; Grebas, S. K.
2011-12-01
NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data.
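The dynamic keyword "tree" idea can be sketched as a recursive expansion over a controlled vocabulary: a query keyword matches any dataset tagged with that keyword or with one of its descendants. The hierarchy slice and dataset tags below are invented for illustration and are not the actual GCMD vocabularies.

```python
# a toy slice of a controlled science-keyword hierarchy (values are illustrative)
KEYWORD_TREE = {
    "EARTH SCIENCE": ["OCEANS", "ATMOSPHERE"],
    "OCEANS": ["OCEAN TEMPERATURE"],
    "OCEAN TEMPERATURE": ["SEA SURFACE TEMPERATURE"],
    "ATMOSPHERE": ["AEROSOLS"],
}

def expand(keyword, tree=KEYWORD_TREE):
    # a node in the keyword tree matches itself plus all of its descendants
    matched = {keyword}
    for child in tree.get(keyword, []):
        matched |= expand(child, tree)
    return matched

def search_datasets(keyword, datasets):
    # a dataset is relevant if it is tagged with the keyword or any descendant
    terms = expand(keyword)
    return [name for name, tags in datasets.items() if terms & tags]

# invented dataset records tagged with controlled keywords
datasets = {"VIIRS L2 SST": {"SEA SURFACE TEMPERATURE"},
            "MERRA-2 Aerosol": {"AEROSOLS"}}
found = search_datasets("OCEANS", datasets)
```

Combining several expanded keywords (intersecting their match sets) gives the "combining keywords in new ways" behavior the abstract describes.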
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches to explore the relative impact of increasing search space probed with Mascot by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme and with the Paragon Algorithm using its Rapid mode and Thorough mode with and without tryptic specificity. Although they performed similarly for small search space, dramatic differences were observed in large search space. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination that typically accompanies the exploration of large search space.
Adjacency and Proximity Searching in the Science Citation Index and Google
2005-01-01
major database search engines, including commercial S&T database search engines (e.g., Science Citation Index (SCI), Engineering Compendex (EC), ... PubMed, OVID), Federal agency award database search engines (e.g., NSF, NIH, DOE, EPA, as accessed in Federal R&D Project Summaries), Web search engines (e.g., ...) ... searching. Some database search engines allow strict constrained co-occurrence searching as a user option (e.g., OVID, EC), while others do not (e.g., SCI ...)
Foraging patterns in online searches.
Wang, Xiangwen; Pleimling, Michel
2017-03-01
Nowadays online searches are undeniably the most common form of information gathering, as witnessed by billions of clicks generated each day on search engines. In this work we describe online searches as foraging processes that take place on the semi-infinite line. Using a variety of quantities like probability distributions and complementary cumulative distribution functions of step length and waiting time, as well as mean square displacements and entropies, we analyze three different click-through logs that contain the detailed information of millions of queries submitted to search engines. Notable differences between the different logs reveal an increased efficiency of the search engines. In the language of foraging, the newer logs indicate that online searches overwhelmingly yield local searches (i.e., on one page of links provided by the search engines), whereas for the older logs the foraging processes are a combination of local searches and relocation phases that are power law distributed. Our investigation of click logs of search engines therefore highlights the presence of intermittent search processes (where phases of local explorations are separated by power law distributed relocation jumps) in online searches. It follows that good search engines enable users to find the information they are looking for through a local exploration of a single page with search results, whereas with poor search engines users are often forced to do a broader exploration of different pages.
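The heavy-tailed step-length analysis above rests on empirical complementary CDFs. A minimal sketch follows, assuming a step is the absolute rank difference between successive clicks in a session; that is one plausible definition, and the paper's exact construction may differ.

```python
def ccdf(values):
    """Empirical complementary CDF P(X >= x): the standard quantity for
    eyeballing power-law tails in step-length or waiting-time distributions."""
    xs = sorted(values)
    n = len(xs)
    points, seen = [], set()
    for i, x in enumerate(xs):
        if x not in seen:           # record each distinct value once
            seen.add(x)
            points.append((x, (n - i) / n))
    return points

# toy click log: ranks of results clicked within one query session
clicked_ranks = [1, 2, 1, 3, 11]
steps = [abs(b - a) for a, b in zip(clicked_ranks, clicked_ranks[1:])]
tail = ccdf(steps)
```

Plotting the (x, P) pairs on log-log axes would show a straight line when the relocation jumps are power law distributed, which is the signature of intermittent search discussed in the abstract.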
Cole, Curtis L; Kanter, Andrew S; Cummens, Michael; Vostinar, Sean; Naeymi-Rad, Frank
2004-01-01
To design and implement a real-world application using a terminology server to assist patients and physicians who use common-language search terms to find specialist physicians with a particular clinical expertise. Terminology servers have been developed to help users encode information using complicated structured vocabularies during data-entry tasks, such as recording clinical information. We describe a methodology using Personal Health Terminology™ and a SNOMED CT-based hierarchical concept server, and the construction of a pilot mediated-search engine to assist users who query, in vernacular speech, data that is more technical than vernacular. This approach, which combines theoretical and practical requirements, provides a useful example of concept-based searching for physician referrals.
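The vernacular-to-technical mediation can be sketched as a synonym index over concept IDs. The toy below stands in for a SNOMED CT-style terminology server; all concept IDs, synonyms, and physician names are invented for illustration.

```python
def build_synonym_index(concepts):
    """Map lay synonyms to concept IDs: a toy stand-in for a SNOMED CT-style
    terminology server (the IDs and synonyms here are invented)."""
    index = {}
    for concept_id, synonyms in concepts.items():
        for s in synonyms:
            index[s.lower()] = concept_id
    return index

def mediated_search(query, index, physicians):
    # translate vernacular words into concept IDs, then match physician expertise
    concept_ids = {w_id for w in query.lower().split()
                   if (w_id := index.get(w)) is not None}
    return [name for name, expertise in physicians.items() if concept_ids & expertise]

index = build_synonym_index({
    "C-HEART": ["cardiology", "heart", "cardiac"],
    "C-SKIN": ["dermatology", "skin", "rash"],
})
physicians = {"Dr. Alvarez": {"C-HEART"}, "Dr. Baker": {"C-SKIN"}}
matches = mediated_search("heart doctor", index, physicians)
```

A production concept server would also exploit the hierarchy (e.g., mapping "chest pain" up to a cardiology parent concept); this flat synonym table shows only the mediation step.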
Aroian, R. V.; Sternberg, P. W.
1991-01-01
The let-23 gene, which encodes a putative tyrosine kinase of the epidermal growth factor (EGF) receptor subfamily, has multiple functions during Caenorhabditis elegans development. We show that let-23 function is required for vulval precursor cells (VPCs) to respond to the signal that induces vulval differentiation: a complete loss of let-23 function results in no induction. However, some let-23 mutations that genetically reduce but do not eliminate let-23 function result in VPCs apparently hypersensitive to inductive signal: as many as five of six VPCs can adopt vulval fates, in contrast to the three that normally do. These results suggest that the let-23 receptor tyrosine kinase controls two opposing pathways, one that stimulates vulval differentiation and another that negatively regulates vulval differentiation. Furthermore, analysis of 16 new let-23 mutations indicates that the let-23 kinase functions in at least five tissues. Since various let-23 mutant phenotypes can be obtained independently, the let-23 gene is likely to have tissue-specific functions. PMID:2071015
Precise let-7 expression levels balance organ regeneration against tumor suppression
Wu, Linwei; Nguyen, Liem H; Zhou, Kejin; de Soysa, T Yvanka; Li, Lin; Miller, Jason B; Tian, Jianmin; Locker, Joseph; Zhang, Shuyuan; Shinoda, Gen; Seligson, Marc T; Zeitels, Lauren R; Acharya, Asha; Wang, Sam C; Mendell, Joshua T; He, Xiaoshun; Nishino, Jinsuke; Morrison, Sean J; Siegwart, Daniel J; Daley, George Q; Shyh-Chang, Ng; Zhu, Hao
2015-01-01
The in vivo roles for even the most intensely studied microRNAs remain poorly defined. Here, analysis of mouse models revealed that let-7, a large and ancient microRNA family, performs tumor suppressive roles at the expense of regeneration. Too little or too much let-7 resulted in compromised protection against cancer or tissue damage, respectively. Modest let-7 overexpression abrogated MYC-driven liver cancer by antagonizing multiple let-7 sensitive oncogenes. However, the same level of overexpression blocked liver regeneration, while let-7 deletion enhanced it, demonstrating that distinct let-7 levels can mediate desirable phenotypes. let-7 dependent regeneration phenotypes resulted from influences on the insulin-PI3K-mTOR pathway. We found that chronic high-dose let-7 overexpression caused liver damage and degeneration, paradoxically leading to tumorigenesis. These dose-dependent roles for let-7 in tissue repair and tumorigenesis rationalize the tight regulation of this microRNA in development, and have important implications for let-7 based therapeutics. DOI: http://dx.doi.org/10.7554/eLife.09431.001 PMID:26445246
ScienceCinema
ScienceCinema allows users to search for specific words and phrases spoken within video files.
Zhu, Xiuming; Wu, Lingjiao; Yao, Jian; Jiang, Han; Wang, Qiangfeng; Yang, Zhijian; Wu, Fusheng
2015-01-01
Down-regulation of the microRNA let-7c plays an important role in the pathogenesis of human hepatocellular carcinoma (HCC). The aim of the present study was to determine whether the cell cycle regulator CDC25A is involved in the antitumor effect of let-7c in HCC. The expression levels of let-7c in HCC cell lines were examined by quantitative real-time PCR, and a let-7c agomir was transfected into HCC cells to overexpress let-7c. The effects of let-7c on HCC proliferation, apoptosis and cell cycle were analyzed. The in vivo tumor-inhibitory efficacy of let-7c was evaluated in a xenograft mouse model of HCC. Luciferase reporter assays and western blotting were conducted to identify the targets of let-7c and to determine the effects of let-7c on CDC25A, CyclinD1, CDK6, pRb and E2F2 expression. The results showed that the expression levels of let-7c were significantly decreased in HCC cell lines. Overexpression of let-7c repressed cell growth, induced cell apoptosis, led to G1 cell cycle arrest in vitro, and suppressed tumor growth in a HepG2 xenograft model in vivo. The luciferase reporter assay showed that CDC25A was a direct target of let-7c, and that let-7c inhibited the expression of CDC25A protein by directly targeting its 3ʹ UTR. Restoration of CDC25A induced a let-7c-mediated G1-to-S phase transition. Western blot analysis demonstrated that overexpression of let-7c decreased CyclinD1, CDK6, pRb and E2F2 protein levels. In conclusion, this study indicates that let-7c suppresses HCC progression, possibly by directly targeting the cell cycle regulator CDC25A and indirectly affecting its downstream target molecules. Let-7c may therefore be an effective therapeutic target for HCC. PMID:25909324
A Markov Chain Model for Changes in Users' Assessment of Search Results.
Zhitomirsky-Geffet, Maayan; Bar-Ilan, Judit; Levene, Mark
2016-01-01
Previous research shows that users tend to change their assessment of search results over time. This is the first study to investigate the factors and reasons for these changes, and it describes a stochastic model of user behaviour that may explain them. In particular, we hypothesise that most of the changes are local, i.e. between results with similar or close relevance to the query, and thus belong to the same "coarse" relevance category. According to the theory of coarse beliefs and categorical thinking, humans tend to divide the range of values under consideration into coarse categories, and are thus able to distinguish only between cross-category values but not within them. To test this hypothesis we conducted five experiments with about 120 subjects divided into three groups. Each subject in every group was asked to rank and assign relevance scores to the same set of search results over two or three rounds, with a period of three to nine weeks between each round. The subjects of the last three-round experiment were then exposed to the differences in their judgements and were asked to explain them. We make use of a Markov chain model to measure change in users' judgements between the different rounds. The Markov chain demonstrates that the changes converge, and that a majority of the changes are local to a neighbouring relevance category. We found that most of the subjects were satisfied with their changes, and did not perceive them as mistakes but rather as a legitimate phenomenon, since they believe that time has influenced their relevance assessment. Both our quantitative analysis and user comments support the hypothesis of the existence of coarse relevance categories resulting from categorical thinking in the context of user evaluation of search results.
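The Markov chain analysis reduces to estimating a transition matrix between relevance categories from paired judgements. A minimal sketch with invented data follows; locality then shows up as probability mass concentrated on and near the diagonal.

```python
from collections import Counter

def transition_matrix(round1, round2, k):
    """Estimate a k-state Markov transition matrix from paired relevance
    categories (indices 0..k-1) assigned by the same assessors in two rounds."""
    counts = Counter(zip(round1, round2))
    matrix = []
    for i in range(k):
        total = sum(counts[(i, j)] for j in range(k))
        # each row is the conditional distribution P(round2 = j | round1 = i)
        matrix.append([counts[(i, j)] / total if total else 0.0
                       for j in range(k)])
    return matrix

# hypothetical judgements: most assessments stay in or move next to their category
round1 = [0, 0, 1, 1, 2, 2]
round2 = [0, 1, 1, 1, 2, 2]
P = transition_matrix(round1, round2, 3)
```

Iterating the matrix (P, P², P³, ...) would show whether the judgement process converges, mirroring the convergence result reported in the abstract.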
Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project
NASA Technical Reports Server (NTRS)
Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)
2001-01-01
The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves that with two techniques in the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code and the results are stored as a database. That code is then executed and refers to the above database. Two applications, one to an airborne laser system and one to aircraft optimization, illustrate the method.
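The two-stage scheme, sampling trial designs, running the expensive analysis once per point, then letting the optimizer query only a cheap surrogate, can be sketched as follows. Inverse-distance weighting is used here as a lightweight stand-in for the kriging fit the report describes, and the objective function and point counts are invented.

```python
import math
import random

def sphere_points(n_pts, dim, radius=1.0, seed=0):
    # uniformly dispersed directions on an n-dimensional hypersphere:
    # normalize independent Gaussian draws, then scale to the radius
    rng = random.Random(seed)
    pts = []
    for _ in range(n_pts):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        pts.append([radius * x / norm for x in v])
    return pts

def idw_surrogate(samples, power=2.0):
    # inverse-distance-weighted response surface: a cheap stand-in for kriging
    def predict(x):
        num = den = 0.0
        for p, y in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, p))
            if d2 == 0.0:
                return y          # exact interpolation at sampled points
            w = 1.0 / d2 ** (power / 2.0)
            num += w * y
            den += w
        return num / den
    return predict

def expensive_analysis(x):
    # stand-in for the costly simulation (a simple quadratic bowl)
    return sum(xi * xi for xi in x)

# evaluate the expensive analysis once per trial point, store the database,
# then let the optimizer query only the cheap surrogate
trials = sphere_points(20, 3, radius=2.0)
database = [(p, expensive_analysis(p)) for p in trials]
surrogate = idw_surrogate(database)
```

The key property is the one the report emphasizes: the number of expensive evaluations equals the user-chosen number of trial points, no matter how many times the optimizer interrogates the surrogate.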
Estiri, Hossein; Lovins, Terri; Afzalan, Nader; Stephens, Kari A.
2016-01-01
We applied a participatory design approach to define the objectives, characteristics, and features of a “data profiling” tool for primary care Electronic Health Data (EHD). Through three participatory design workshops, we collected input from potential tool users who had experience working with EHD. We present 15 recommended features and characteristics for the data profiling tool. From these recommendations we derived three overarching objectives and five properties for the tool. A data profiling tool, in Biomedical Informatics, is a visual, clear, usable, interactive, and smart tool that is designed to inform clinical and biomedical researchers of data utility and let them explore the data, while conveniently orienting the users to the tool’s functionalities. We suggest that developing scalable data profiling tools will provide new capacities to disseminate knowledge about clinical data that will foster translational research and accelerate new discoveries. PMID:27570651
Reengineering a database for clinical trials management: lessons for system architects.
Brandt, C A; Nadkarni, P; Marenco, L; Karras, B T; Lu, C; Schacter, L; Fisk, J M; Miller, P L
2000-10-01
This paper describes the process of enhancing Trial/DB, a database system for clinical studies management. The system's enhancements have been driven by the need to maximize the effectiveness of developer personnel in supporting numerous and diverse users, of study designers in setting up new studies, and of administrators in managing ongoing studies. Trial/DB was originally designed to work over a local area network within a single institution, and basic architectural changes were necessary to make it work over the Internet efficiently as well as securely. Further, as its use spread to diverse communities of users, changes were made to let the processes of study design and project management adapt to the working styles of the principal investigators and administrators for each study. The lessons learned in the process should prove instructive for system architects as well as managers of electronic patient record systems.
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth A.
2018-01-01
The following are updated or new subjects added to the FPGA SEE Test Guidelines manual: academic versus mission-specific device evaluation, single event latch-up (SEL) test and analysis, SEE response visibility enhancement during radiation testing, mitigation evaluation (embedded and user-implemented), unreliable design and its effects on SEE data, testing flushable architectures versus non-flushable architectures, intellectual property core (IP Core) test and evaluation (addresses embedded and user-inserted), heavy-ion energy and linear energy transfer (LET) selection, proton versus heavy-ion testing, fault injection, mean fluence to failure analysis, and mission-specific system-level single event upset (SEU) response prediction. Most sections within the guidelines manual provide information regarding best practices for test structure and test system development. The scope of this manual addresses academic versus mission-specific device evaluation and visibility enhancement in IP Core testing.
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej
A new class of lightweight mobile devices can run full-scale applications with the same comfort as desktop devices, subject to several limitations; one of them is the insufficient transfer speed of wireless connectivity. The main area of interest is a model of a radio-frequency based system enhancement for locating and tracking users of a mobile information system. The experimental framework prototype uses a wireless network infrastructure to let a mobile lightweight device determine its indoor or outdoor position. The user's location is used for data prebuffering and for pushing information from the server to the user's PDA. All server data are saved as artifacts, along with their position information, in a building or larger-area environment. Accessing prebuffered data on a mobile lightweight device can greatly improve the response time needed to view large multimedia data. This can help in the design of new full-scale applications for mobile lightweight devices.
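The prebuffering idea above can be sketched simply: given the user's tracked position, select the artifacts stored near it so they can be pushed to the device before they are requested. This is a minimal illustration with hypothetical names and a flat 2-D coordinate model, not the framework's code:

```python
# Minimal sketch under stated assumptions (names hypothetical): select
# server artifacts near the user's tracked position so they can be
# prebuffered to the mobile device before being requested.
import math

def nearby_artifacts(artifacts, user_pos, radius_m):
    """Return ids of artifacts within radius_m of the user's (x, y) position."""
    ux, uy = user_pos
    picked = []
    for art_id, (ax, ay) in artifacts.items():
        if math.hypot(ax - ux, ay - uy) <= radius_m:
            picked.append(art_id)
    return picked

artifacts = {"floorplan": (0.0, 2.0), "video_tour": (50.0, 40.0)}
print(nearby_artifacts(artifacts, user_pos=(1.0, 1.0), radius_m=10.0))
```

In the described system the position would come from the radio-frequency locating layer rather than being passed in directly.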
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Kevin
The software provides a simple web API that allows users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock": when a purge lock is set on a file, the file will not be removed from disk and will not enter a tape-only state. Many network file protocols assume a file is on disk, so it is good practice to purge lock a file before transferring it with one of those protocols. HPSS's purge lock system is very coarse grained, though: a file is either purge locked or not. Nothing enforces quotas, ensures timely unlocking of purge locks, or manages the races inherent in multiple users wanting to lock/unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system ensures things happen properly.
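A request to such a REST API might look like the sketch below. The endpoint path, field names, and payload shape are assumptions for illustration; they are not documented parts of the Purge Lock Server:

```python
# Hypothetical sketch of a request body for the purge-lock REST API
# described above; the field names and expire-time format are assumptions.
import json
import time

def build_purge_lock_request(files, lock_hours):
    """Build a JSON body asking that `files` stay on disk for lock_hours."""
    expire = int(time.time()) + lock_hours * 3600
    return json.dumps({"files": files, "expire": expire})

body = build_purge_lock_request(["/hpss/project/run01.dat"], lock_hours=6)
print(body)  # would be POSTed to the server's lock endpoint (path assumed)
```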
ERIC Educational Resources Information Center
Spink, Amanda
1995-01-01
This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)
Cognitive and Task Influences on Web Searching Behavior.
ERIC Educational Resources Information Center
Kim, Kyung-Sun; Allen, Bryce
2002-01-01
Describes results from two independent investigations of college students that were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Topics include cognitive style; problem-solving; and implications for the design and use of the Web and Web search engines. (Author/LRW)
The IBM PC as an Online Search Machine. Part 5: Searching through Crosstalk.
ERIC Educational Resources Information Center
Kolner, Stuart J.
1985-01-01
This last of a five-part series on using the IBM personal computer for online searching highlights a brief review, search process, making the connection, switching between screens and modes, online transaction, capture buffer controls, coping with options, function keys, script files, processing downloaded information, note to TELEX users, and…
PubMed searches: overview and strategies for clinicians.
Lindsey, Wesley T; Olin, Bernie R
2013-04-01
PubMed is a biomedical and life sciences database maintained by a division of the National Library of Medicine known as the National Center for Biotechnology Information (NCBI). It is a large resource with more than 5600 journals indexed and greater than 22 million total citations. Searches conducted in PubMed provide references that are more specific for the intended topic compared with other popular search engines. Effective PubMed searches allow the clinician to remain current on the latest clinical trials, systematic reviews, and practice guidelines. PubMed continues to evolve by allowing users to create a customized experience through the My NCBI portal, new arrangements and options in search filters, and supporting scholarly projects through exportation of citations to reference managing software. Prepackaged search options available in the Clinical Queries feature also allow users to efficiently search for clinical literature. PubMed also provides information regarding the source journals themselves through the Journals in NCBI Databases link. This article provides an overview of the PubMed database's structure and features as well as strategies for conducting an effective search.
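Beyond the web interface described above, PubMed can also be queried programmatically through NCBI's E-utilities. The `esearch` endpoint is real; the search term and parameter values below are arbitrary examples, and the sketch only constructs the request URL without making a network call:

```python
# Sketch of programmatic PubMed searching via NCBI E-utilities (esearch).
# Only the request URL is built here; no network request is made.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, retmax=20):
    """Build an esearch URL returning up to retmax PubMed ids as JSON."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return EUTILS + "?" + urlencode(params)

print(pubmed_search_url("heart failure AND systematic review[pt]"))
```

Fetching the resulting URL returns a JSON list of PubMed ids that can then be passed to the `efetch` endpoint for full records.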
IdentiPy: An Extensible Search Engine for Protein Identification in Shotgun Proteomics.
Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V
2018-06-18
We present an open-source, extensible search engine for shotgun proteomics. Implemented in Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with the state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of X!Tandem scoring algorithm and its novel "autotune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Autotune adjusts the search parameters for the particular data set, resulting in improved search efficiency and simplifying the user experience. IdentiPy with the autotune feature shows higher sensitivity compared with the evaluated search engines. IdentiPy Server has built-in postprocessing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended to use third-party scoring functions or processing algorithms and allows customization of the search workflow for specialized applications.
Helicopter winchmen's experiences with pain management in challenging environments.
van der Velde, J; Linehan, L; Cusack, S
2013-02-01
We conducted a survey of Irish Coast Guard Search and Rescue Helicopter winchmen to establish if their pain management scope of practice was adequate for their working environment. We surveyed 17 SAR personnel. 88% of winchmen have experienced scenarios where they were unable to reduce pain scores below 6/10. In seeking solutions within current Irish Prehospital Clinical Practice Guidelines, repeated descriptions of operations in extreme weather and sea conditions were given which were entirely incompatible with the dexterity required to break a glass ampoule and draw up solution, let alone site an intravenous (IV) line or administer a drug via intramuscular (IM) injection. Irish Coast Guard Search and Rescue Helicopter winchmen encounter polytrauma patients in extreme pain in uniquely challenging environments. Novel solutions to pain management within this tightly governed system are urgently required.
Guide to Regulated Facilities in ECHO | ECHO | US EPA
There are multiple ways ECHO can be used to search compliance data. By default, ECHO searches focus on larger, more regulated facilities. Each search page allows users to search a more comprehensive group of facilities by electing to search for minor or smaller facilities. Information is presented that explains the types and approximate numbers of facilities that are included in searches when the default and custom options are used.
Capabilities in Context: Evaluating the Net-Centric Enterprise
2009-03-01
with an intuitive keyword search using the enterprise's federated search capability. Service accessibility. Testers will ensure that local service has...search using the enterprise's federated search capability. Data accessibility. Testers will ensure that Federated Search results provide active link...user may request access to the data, and be available within "2 clicks" from the active link provided by Federated Search. Data understandability
Nicholson, Scott
2005-01-01
The paper explores the current state of generalist search education in library schools and considers that foundation in respect to the Medical Library Association's statement on expert searching. Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Mixed methods were used, but primarily quantitative bibliometric methods were used. The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain.