Custom Search Engines: Tools & Tips
ERIC Educational Resources Information Center
Notess, Greg R.
2008-01-01
Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
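The abstract above is truncated, but the workflow it describes (a vertical engine built over a subset of a large engine's index) can be illustrated with a short, hedged sketch: querying a Google Custom Search Engine through the Custom Search JSON API. The API key, engine ID ("cx"), and query below are placeholders, not values from the article.

```python
# Minimal sketch: querying a Google Custom Search Engine (CSE) via the
# Custom Search JSON API. API_KEY and ENGINE_ID are placeholders obtained
# from the Google Cloud console / CSE control panel.
import requests

API_KEY = "YOUR_API_KEY"       # hypothetical credential
ENGINE_ID = "YOUR_ENGINE_CX"   # the CSE's "cx" identifier

def cse_search(query: str, num: int = 10) -> list[dict]:
    """Return title/link pairs from a custom search engine."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID, "q": query, "num": num},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": item["title"], "link": item["link"]}
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    for hit in cse_search("vertical search"):
        print(hit["title"], "->", hit["link"])
```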
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
FOAMSearch.net: A custom search engine for emergency medicine and critical care.
Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle
2015-08-01
The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
ERIC Educational Resources Information Center
Isakson, Carol
2004-01-01
Search engines rapidly add new services and experimental tools as they try to outmaneuver each other for customers. In this article, the author describes the latest services added by several search engines and lists their sources. The author also offers tips for using these new search upgrades.
A fuzzy-match search engine for physician directories.
Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon
2014-11-04
A search engine for finding physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to help users find physicians more easily and quickly. Instead of an exact-match search, we used a fuzzy algorithm to find similar matches for searched terms. The algorithm addresses three types of search engine failure: "typographic", "phonetic spelling variation", and "nickname". To resolve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the top-ten accuracy of the fuzzy-match engine (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and the fuzzy-match engine's top-one accuracy (71%). We designed, built a reference implementation of, and evaluated a fuzzy-match search engine for physician directories. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
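As an illustration of the three failure types named above, here is a hedged sketch that combines a Levenshtein edit distance, Soundex phonetic coding, and a nickname lookup. The nickname table, the 0.5 phonetic discount, and the demo names are invented for illustration; this is not the Marshfield implementation.

```python
# Illustrative fuzzy name matcher in the spirit of the algorithm described:
# Levenshtein distance, softened by Soundex agreement and a nickname lookup.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def soundex(name: str) -> str:
    """Standard 4-character Soundex code."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, last = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        if ch not in "HW":        # H and W do not reset the previous code
            last = code
    return (out + "000")[:4]

NICKNAMES = {"bill": "william", "bob": "robert", "peggy": "margaret"}  # toy table

def score(query: str, candidate: str) -> float:
    q = NICKNAMES.get(query.lower(), query.lower())   # nickname normalization
    c = candidate.lower()
    d = levenshtein(q, c)
    if soundex(q) == soundex(c):  # phonetic agreement discounts the distance
        d *= 0.5
    return d

def top_matches(query, directory, k=10):
    return sorted(directory, key=lambda name: score(query, name))[:k]

print(top_matches("Bob", ["Robert", "Roberta", "Rupert", "Barb"], k=2))
```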
Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália
2016-07-01
Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. This work therefore introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to combine core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabularies. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems, or a complete personalization of the web interface. The construction of the Smart Drug Search is described as a proof of concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and related topics. Users' keyword-based queries are transformed into concepts, and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of a concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e. concepts related through other intermediary concepts, can be useful for integrating information from different studies and examining non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are modularity and extensibility in terms of software design, the use of open-source consolidated Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search holds over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
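The co-occurrence reasoning described above (direct relations weighted by frequency, indirect relations through intermediary concepts) can be sketched in a few lines. The documents and concept annotations below are hypothetical; a real pipeline would obtain them from the framework's taggers.

```python
# Toy sketch of the co-occurrence idea: given documents already annotated
# with concepts, count direct concept co-occurrence and surface indirect
# associations (two concepts linked only through a shared intermediary).
from itertools import combinations
from collections import Counter

docs = [  # hypothetical annotation output: one concept set per document
    {"ampicillin", "E. coli", "beta-lactamase"},
    {"ampicillin", "beta-lactamase"},
    {"E. coli", "virulence factor"},
]

cooc = Counter()
for concepts in docs:
    for a, b in combinations(sorted(concepts), 2):
        cooc[(a, b)] += 1            # direct relation strength

def indirect(x, y):
    """Concepts co-occurring with both x and y, when x and y never co-occur."""
    if cooc.get(tuple(sorted((x, y)))):
        return set()
    neigh = lambda c: {d for pair in cooc for d in pair if c in pair} - {c}
    return neigh(x) & neigh(y)

print(cooc.most_common(3))
print(indirect("ampicillin", "virulence factor"))  # -> {'E. coli'}
```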
Quality Dimensions of Internet Search Engines.
ERIC Educational Resources Information Center
Xie, M.; Wang, H.; Goh, T. N.
1998-01-01
Reviews commonly used search engines (AltaVista, Excite, Infoseek, Lycos, HotBot, WebCrawler), focusing on existing comparative studies; considers quality dimensions from the customer's point of view based on a SERVQUAL framework; and groups these quality expectations into five dimensions: tangibles, reliability, responsiveness, assurance, and…
Evidence-based Medicine Search: a customizable federated search engine.
Bracke, Paul J; Howse, David K; Keim, Samuel M
2008-04-01
This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.
Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P
2011-01-01
The aims were to identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines, and to develop a search engine to access this information. Web sites publishing information on the rational use of medicines and having no commercial interests were compiled: free-access web sites in Spanish, Galician, Catalan or English. A search engine was designed using the Google "custom search" application. In total, 159 internet addresses were compiled and classified into 9 labels. We were able to retrieve the information from the selected sources using the resulting search engine, called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
Relevance of Google-customized search engine vs. CISMeF quality-controlled health gateway.
Gehanno, Jean-François; Kerdelhue, Gaétan; Sakji, Saoussen; Massari, Philippe; Joubert, Michel; Darmoni, Stéfan J
2009-01-01
CISMeF (acronym for Catalog and Index of French Language Health Resources on the Internet) is a quality-controlled health gateway conceived to catalog and index the most important, quality-controlled sources of institutional health information in French. The goal of this study is to compare the relevance of results provided by this gateway, drawn from a small set of documents selected and described by human experts, with those provided by a search engine drawn from a large set of automatically indexed and ranked resources. The Google Custom Search Engine (CSE) was used. The evaluation was made using the first 10 results of 15 queries and two blinded physician evaluators. There was no significant difference between the relevance of information retrieval in CISMeF and Google CSE. In conclusion, automatic indexing does not lead to lower relevance than manual MeSH indexing and may help to cope with the increasing number of references to be indexed in a quality-controlled health gateway.
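A hedged sketch of the evaluation design described (two blinded raters judging the first 10 results of each query): compute mean precision at 10 per rater and raw inter-rater agreement. The 0/1 relevance judgments below are placeholders, not the study's data.

```python
# Hedged sketch of a top-10 relevance evaluation: mean precision@10 per
# rater, plus raw agreement between the two raters. Judgments are made up.
judgments = {  # query -> {rater -> [relevant? for each of 10 results]}
    "q1": {"rater_a": [1,1,0,1,1,0,1,0,1,1], "rater_b": [1,1,0,1,0,0,1,0,1,1]},
    "q2": {"rater_a": [0,1,1,1,0,1,1,1,0,0], "rater_b": [0,1,1,1,0,1,0,1,0,0]},
}

def precision_at_10(rater):
    per_query = [sum(r[rater]) / 10 for r in judgments.values()]
    return sum(per_query) / len(per_query)

def raw_agreement():
    pairs = [(a, b) for r in judgments.values()
             for a, b in zip(r["rater_a"], r["rater_b"])]
    return sum(a == b for a, b in pairs) / len(pairs)

print(f"P@10 (rater A): {precision_at_10('rater_a'):.2f}")
print(f"raw agreement:  {raw_agreement():.2f}")
```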
"Where Is My Answer?": A Customer Service Status Report.
ERIC Educational Resources Information Center
Marcinko, Randy
1997-01-01
Describes the results of a study that tested the customer service responses from 11 companies selling online information including online hosts, database producers, and World Wide Web search engine companies. Highlights include content-oriented issues, costs, training, human interaction, and the use of technology to save time and increase…
Writing for the Robot: How Employer Search Tools Have Influenced Resume Rhetoric and Ethics
ERIC Educational Resources Information Center
Amare, Nicole; Manning, Alan
2009-01-01
To date, business communication scholars and textbook writers have encouraged resume rhetoric that accommodates technology, for example, recommending keyword-enhancing techniques to attract the attention of searchbots: customized search engines that allow companies to automatically scan resumes for relevant keywords. However, few scholars have…
IdentiPy: An Extensible Search Engine for Protein Identification in Shotgun Proteomics.
Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V
2018-06-18
We present an open-source, extensible search engine for shotgun proteomics. Implemented in the Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with the state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of the X!Tandem scoring algorithm and its novel "autotune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Autotune adjusts the search parameters for the particular data set, resulting in improved search efficiency and a simpler user experience. IdentiPy with the autotune feature shows higher sensitivity than the evaluated search engines. IdentiPy Server has built-in postprocessing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended to use third-party scoring functions or processing algorithms, and it allows customization of the search workflow for specialized applications.
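The scoring idea referenced above can be sketched as follows: an X!Tandem-style hyperscore sums the intensities of matched fragment peaks and weights them by factorials of the matched b- and y-ion counts. This is a simplified, assumed form; peak preprocessing, ppm tolerances, and IdentiPy's actual implementation details are omitted.

```python
# Simplified sketch of an X!Tandem-style hyperscore: summed intensity of
# matched fragments, weighted by factorials of the counts of matched
# b- and y-ions. Reported in log form for numerical manageability.
import math

def hyperscore(spectrum, b_ions, y_ions, tol=0.02):
    """spectrum: list of (mz, intensity); *_ions: theoretical m/z lists."""
    def matched(theoretical):
        total, n = 0.0, 0
        for mz_t in theoretical:
            hits = [i for mz, i in spectrum if abs(mz - mz_t) <= tol]
            if hits:
                total += max(hits)   # take the strongest peak in tolerance
                n += 1
        return total, n

    b_int, n_b = matched(b_ions)
    y_int, n_y = matched(y_ions)
    dot = b_int + y_int
    if dot == 0:
        return 0.0
    return (math.log10(dot)
            + math.log10(math.factorial(n_b))
            + math.log10(math.factorial(n_y)))
```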
Proteomic Cinderella: Customized analysis of bulky MS/MS data in one night.
Kiseleva, Olga; Poverennaya, Ekaterina; Shargunov, Alexander; Lisitsa, Andrey
2018-02-01
Proteomic challenges, stirred up by the advent of high-throughput technologies, produce large amounts of MS data. The routine manual search no longer satisfies the "speed" of modern science. In our work, the necessity of single-thread analysis of bulky data emerged during interpretation of HepG2 proteome profiling results in a search for proteoforms. We compared the contribution of each of the eight search engines (X!Tandem, MS-GF+, MS Amanda, MyriMatch, Comet, Tide, Andromeda, and OMSSA) integrated in the open-source graphical user interface SearchGUI ( http://searchgui.googlecode.com ) to the total result of proteoform identification, and optimized a set of engines working simultaneously. We also compared the results of our search combination with Mascot results using the protein kit UPS2, containing 48 human proteins. We selected the combination of X!Tandem, MS-GF+ and OMSSA as the most time-efficient and productive search combination. We added a homemade java-script to automate the pipeline from file picking to report generation. These settings raised the efficiency of our customized pipeline to a level unobtainable by manual scouting: the analysis of 192 files searched against the human proteome (42,153 entries) downloaded from UniProt took 11 h.
Fermilab Science Education Office - Classroom Presentations
Provide your students with the opportunity to meet a Fermilab scientist or engineer. We put on engaging, interactive physics presentations. These presentations will expose students to Next Generation…
Activities of Western Research Application Center
NASA Technical Reports Server (NTRS)
1972-01-01
Operations of the regional dissemination center for NASA technology collection and information transfer are reported. Activities include customized searches for engineering and scientific applications in industry and technology transfers to businesses engaged in manufacturing high energy physics devices, subsurface instruments, batteries, medical instrumentation, and hydraulic equipment.
Virtual Boutique: a 3D modeling and content-based management approach to e-commerce
NASA Astrophysics Data System (ADS)
Paquet, Eric; El-Hakim, Sabry F.
2000-12-01
The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is computed from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo modes, and has been optimized to allow high-quality rendering.
The Digital School Library: A World-Wide Development and a Fascinating Challenge.
ERIC Educational Resources Information Center
Loertscher, David
2003-01-01
Explores the academic environment of a total information system for school libraries based on the idea of a digital intranet. Discusses safety; customization; the core library collection; curriculum-specific collections; access to short-term resources; Internet access; personalized features; search engines; equity issues; and staffing. (LRW)
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
A large number of biological databases are publicly available to scientists on the web, and many private databases are generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent years, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed and has additional features, such as ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. Advanced users can run SPARQL queries directly on the databases; using this feature, they can incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
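A hedged sketch of the kind of SPARQL query a facet interface like this might generate, using the SPARQLWrapper library. The endpoint URL, prefix, and predicate names are hypothetical, not BioCarian's actual schema.

```python
# Minimal sketch of a facet-style SPARQL aggregation over an RDF store,
# via SPARQLWrapper. Endpoint and predicates are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")  # placeholder endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/bio#>
    SELECT ?gene (COUNT(?site) AS ?n)   # facet: integration count per gene
    WHERE {
        ?site ex:nearGene ?gene ;
              ex:virus    "HPV" .       # facet value chosen by the user
    }
    GROUP BY ?gene
    ORDER BY DESC(?n)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], row["n"]["value"])
```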
Abbott, Kevin C; Oliver, David K; Boal, Thomas R; Gadiyak, Grigorii; Boocks, Carl; Yuan, Christina M; Welch, Paul G; Poropatich, Ronald K
2002-04-01
Studies of the use of the World Wide Web to obtain medical knowledge have largely focused on patients. In particular, neither the international use of academic nephrology World Wide Web sites (websites) as primary information sources nor the use of search engines (and search strategies) to obtain medical information have been described. Visits ("hits") to the Walter Reed Army Medical Center (WRAMC) Nephrology Service website from April 30, 2000, to March 14, 2001, were analyzed for the location of originating source using Webtrends, and search engines (Google, Lycos, etc.) were analyzed manually for search strategies used. From April 30, 2000 to March 14, 2001, the WRAMC Nephrology Service website received 1,007,103 hits and 12,175 visits. These visits were from 33 different countries, and the most frequent regions were Western Europe, Asia, Australia, the Middle East, Pacific Islands, and South America. The most frequent organization using the site was the military Internet system, followed by America Online and automated search programs of online search engines, most commonly Google. The online lecture series was the most frequently visited section of the website. Search strategies used in search engines were extremely technical. The use of "robots" by standard Internet search engines to locate websites, which may be blocked by mandatory registration, has allowed users worldwide to access the WRAMC Nephrology Service website to answer very technical questions. This suggests that it is being used as an alternative to other primary sources of medical information and that the use of mandatory registration may hinder users from finding valuable sites. With current Internet technology, even a single service can become a worldwide information resource without sacrificing its primary customers.
ERIC Educational Resources Information Center
Bushweller, Kevin
2000-01-01
In the ephemeral dot.com economy, numerous education portals (search engines) are emerging just as unsuccessful ones are terminating or trimming services. Ideally, portals offer an online school/home gateway and provide tailored content for parents, students, and teachers. However, quality, equity, reliability, and commercialism issues abound.…
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information, with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in its ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g., for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search other databases, including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
19 CFR 162.4 - Search for letters.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.4 Search for letters. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search. A...
19 CFR 162.12 - Service of search warrant.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.12 Service of search warrant. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Search Warrants. A...
The Custom Search allows users to search for and generate customized data downloads of pollutant loadings information. Users can select varying levels of detail for outputs: annual, monitoring period, and facility level.
19 CFR 162.13 - Search of rooms not described in warrant.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.13 Search of rooms not described in warrant. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Search Warrants...
19 CFR 162.3 - Boarding and search of vessels.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.3 Boarding and search of vessels. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search...
19 CFR 162.7 - Search of vehicles, persons, or beasts.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.7 Search of vehicles, persons, or beasts. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search...
19 CFR 162.5 - Search of arriving vehicles and aircraft.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.5 Search of arriving vehicles and aircraft. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search...
19 CFR 162.6 - Search of persons, baggage, and merchandise.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.6 Search of persons, baggage, and merchandise. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search...
Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John
2016-01-01
Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high-density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer, Heidelberg Engineering) were compared and correlated with near-infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch's membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using the custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between the custom software and Heidelberg Eye Explorer in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions The graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
A Method for Efficient Searching at Online Shopping
NASA Astrophysics Data System (ADS)
Sanjo, Tomomi; Nagata, Moiro
In recent years, online shopping has become popular. However, users cannot efficiently find items in online markets. This paper proposes an engine for finding items easily in an online market. The engine has the following facilities. First, it presents information in a fixed format. Second, the user can find items by selected keywords. Third, it presents only necessary information by using the user's history. Finally, it has a customization function for each user. Moreover, the system asks the user to download a page of recommended items. We show the effectiveness of our proposal with some experiments.
A user-friendly tool for medical-related patent retrieval.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules, such as chemical query, normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts. This result should be balanced against the limited evaluation sample. We can also assume that the tool can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.
Using the Web for Competitive Intelligence (CI) Gathering
NASA Technical Reports Server (NTRS)
Rocker, JoAnne; Roncaglia, George
2002-01-01
Businesses use the Internet to communicate company information and to engage their customers. As the use of the Web for business transactions and advertising grows, so too does the amount of useful information for practitioners of competitive intelligence (CI). CI is the legal and ethical practice of gathering information about competitors and the marketplace. Information sources such as company webpages, online newspapers and news organizations, electronic journal articles and reports, and Internet search engines allow CI practitioners to analyze company strengths and weaknesses for their customers. More company and marketplace information than ever is available on the Internet, and much of it is free. Companies should view the Web not only as a business tool but also as a source of competitive intelligence. In a highly competitive marketplace, can any organization afford to ignore information about the other players and customers in that same marketplace?
Discovery in a World of Mashups
NASA Astrophysics Data System (ADS)
King, T. A.; Ritschel, B.; Hourcle, J. A.; Moon, I. S.
2014-12-01
When the first digital information was stored electronically, discovery of what existed was through file names and the organization of the file system. With the advent of networks, digital information was shared on a wider scale, but discovery remained based on file and folder names. With a growing number of information sources, name-based discovery quickly became ineffective. The keyword-based search engine was one of the first types of mashup in the world of Web 1.0. Links embedded from one document to another established prescribed relationships between files, and the world of Web 2.0 was formed. Search engines like Google used the links to improve search results, and a worldwide mashup was formed. While a vast improvement, the need for semantic (meaning-rich) discovery was clear, especially for the discovery of scientific data. In response, every science discipline defined schemas to describe its type of data. Some core schemas were shared, but most schemas are custom tailored even though they share many common concepts. As with the networking of information sources, science increasingly relies on data from multiple disciplines. So there is a need to bring together multiple sources of semantically rich information. We explore how harvesting, conceptual mapping, facet-based search engines, search term promotion, and style sheets can be combined to create the next generation of mashups in the emerging world of Web 3.0. We use NASA's Planetary Data System and NASA's Heliophysics Data Environment to illustrate how to create a multi-discipline mashup.
Transnational tobacco industry promotion of the cigarette gifting custom in China.
Chu, Alexandria; Jiang, Nan; Glantz, Stanton A
2011-07-01
To understand how British American Tobacco (BAT) and Philip Morris (PM) researched the role and popularity of cigarette gifting in forming relationships among Chinese customs and how they exploited the practice to promote their brands State Express 555 and Marlboro. Searches and analysis of industry documents from the Legacy Tobacco Documents Library complemented by searches on LexisNexis Academic news, online search engines and information from the tobacco industry trade press. From 1980-1999, BAT and PM employed Chinese market research firms to gather consumer information about perceptions of foreign cigarettes and the companies discovered that cigarettes, especially prestigious ones, were gifted and smoked purposely for building relationships and social status in China. BAT and PM promoted their brands as gifts by enhancing cigarette cartons and promoting culturally themed packages, particularly during the gifting festivals of Chinese New Year and Mid-Autumn Festival to tie their brands in to festival values such as warmth, friendship and celebration. They used similar marketing in Chinese communities outside China. BAT and PM tied their brands to Chinese cigarette gifting customs by appealing to social and cultural values of respect and personal honour. Decoupling cigarettes from their social significance in China and removing their appeal would probably reduce cigarette gifting and promote a decline in smoking. Tobacco control efforts in countermarketing, large graphic warnings and plain packaging to make cigarette packages less attractive as gifts could contribute to denormalising cigarette gifting.
Customer Management Skills for Effective Air Force Civil Engineering Customer Service.
1986-09-01
OCR fragment; recoverable content: "...advertise competence: (1) craftsmen working closely with customer service; doing what is promised, when it is promised; if a return to the job site is required, tell the customer..." Customer Management Skills for Effective Air Force Civil Engineering Customer Service. Thesis, Danny S. Long, Captain, USAF, AFIT/GEM/DEM/86S-17. Air Force Institute of Technology, Wright-Patterson AFB, OH.
NASA Technology Transfer System
NASA Technical Reports Server (NTRS)
Tran, Peter B.; Okimura, Takeshi
2017-01-01
NTTS is the IT infrastructure for the Agency's Technology Transfer (T2) program, containing a portfolio of 60,000+ technologies and supporting all ten NASA field centers and HQ. It is the enterprise IT system for facilitating the Agency's technology transfer process, which includes reporting of new technologies (e.g., technology invention disclosures, NF1679), protecting intellectual property (e.g., patents), and commercializing technologies through various technology licenses, software releases, spinoffs, and success stories, using custom-built workflow, reporting, data consolidation, integration, and search engines.
The availability of prescription-only analgesics purchased from the internet in the UK.
Raine, Connie; Webb, David J; Maxwell, Simon R J
2009-02-01
Increasing numbers of people are accessing medicines from the internet. This online market is poorly regulated and represents a potential threat to the health of patients and members of the public. Prescription-only analgesics, including controlled opioids, are readily available to the UK public through internet pharmacies that are easily identified by popular search engines. The majority of websites do not require the customer to possess a valid prescription for the drug. Fewer than half provide an online health screen to assess suitability for supply. The majority have no registered geographical location. Analgesic medicines are usually purchased at prices significantly above British National Formulary prices and are often supplied in large quantities. These findings are of particular relevance to pain-management specialists who are trying to improve the rational use of analgesic drugs. The aim was to explore the availability to the UK population of prescription-only analgesics from the internet. Websites were identified by using several keywords in the most popular internet search engines. From 2000 websites, details of 96 were entered into a database. Forty-six (48%) websites sold prescription analgesics, including seven opioids, two non-opioids and 18 nonsteroidal anti-inflammatory drugs. Thirty-five (76%) of these did not require the customer to possess a valid prescription. Prescription-only analgesics, including controlled opioids, are readily available from internet websites, often without a valid prescription.
PubMed searches: overview and strategies for clinicians.
Lindsey, Wesley T; Olin, Bernie R
2013-04-01
PubMed is a biomedical and life sciences database maintained by a division of the National Library of Medicine known as the National Center for Biotechnology Information (NCBI). It is a large resource with more than 5600 journals indexed and greater than 22 million total citations. Searches conducted in PubMed provide references that are more specific for the intended topic compared with other popular search engines. Effective PubMed searches allow the clinician to remain current on the latest clinical trials, systematic reviews, and practice guidelines. PubMed continues to evolve by allowing users to create a customized experience through the My NCBI portal, new arrangements and options in search filters, and supporting scholarly projects through exportation of citations to reference managing software. Prepackaged search options available in the Clinical Queries feature also allow users to efficiently search for clinical literature. PubMed also provides information regarding the source journals themselves through the Journals in NCBI Databases link. This article provides an overview of the PubMed database's structure and features as well as strategies for conducting an effective search.
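For clinicians who outgrow the web interface, the same searches can be scripted against NCBI E-utilities. The sketch below uses Biopython's Entrez module; the e-mail address and query are placeholders.

```python
# Minimal sketch of a scripted PubMed search via NCBI E-utilities, using
# Biopython's Entrez module -- a programmatic counterpart to the web
# features described above. Email and query are placeholders.
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI asks for a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='"atrial fibrillation"[MeSH Terms] AND randomized controlled trial[pt]',
    retmax=20,
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matches; first PMIDs:", record["IdList"][:5])
```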
Mercury: Reusable software application for Metadata Management, Data Discovery and Access
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.
2009-12-01
Mercury is a federated metadata harvesting, data discovery and access tool based on both open-source packages and custom-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software's reusability across the projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software is packaged in such a way that all Mercury projects use the same harvester scripts, with each project driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API used by all Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
19 CFR 162.11 - Authority to procure warrants.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.11 Authority to procure warrants. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Search Warrants...
19 CFR 162.15 - Receipt for seized property.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. § 162.15 Receipt for seized property. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Search Warrants...
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications, string matching requires a combination of (sometimes all of) the following characteristics: high and/or predictable performance, support for large data sets, and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high-performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
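For reference, here is a compact single-threaded textbook version of the Aho-Corasick automaton the paper parallelizes: build a trie over the dictionary, add failure links breadth-first, then scan the text once. This sketch is for illustration only and bears no relation to the paper's optimized parallel implementations.

```python
# Textbook Aho-Corasick: trie + BFS failure links + one-pass scan,
# reporting (start_index, word) for every dictionary hit in the text.
from collections import deque

class AhoCorasick:
    def __init__(self, words):
        self.goto = [{}]   # per-state transition maps
        self.fail = [0]    # failure links
        self.out = [[]]    # words recognized at each state
        for w in words:
            s = 0
            for ch in w:
                if ch not in self.goto[s]:
                    self.goto[s][ch] = len(self.goto)
                    self.goto.append({}); self.fail.append(0); self.out.append([])
                s = self.goto[s][ch]
            self.out[s].append(w)
        q = deque(self.goto[0].values())   # depth-1 states keep fail = 0
        while q:
            s = q.popleft()
            for ch, t in self.goto[s].items():
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[t] = self.goto[f].get(ch, 0)
                self.out[t] += self.out[self.fail[t]]   # inherit suffix matches
                q.append(t)

    def scan(self, text):
        s = 0
        for i, ch in enumerate(text):
            while s and ch not in self.goto[s]:
                s = self.fail[s]
            s = self.goto[s].get(ch, 0)
            for w in self.out[s]:
                yield i - len(w) + 1, w

ac = AhoCorasick(["he", "she", "his", "hers"])
print(list(ac.scan("ushers")))   # [(1, 'she'), (2, 'he'), (2, 'hers')]
```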
Information-seeking behavior of basic science researchers: implications for library services.
Haines, Laura L; Light, Jeanene; O'Malley, Donna; Delwiche, Frances A
2010-01-01
This study examined the information-seeking behaviors of basic science researchers to inform the development of customized library services. A qualitative study using semi-structured interviews was conducted on a sample of basic science researchers employed at a university medical school. The basic science researchers used a variety of information resources ranging from popular Internet search engines to highly technical databases. They generally relied on basic keyword searching, using the simplest interface of a database or search engine. They were highly collegial, interacting primarily with coworkers in their laboratories and colleagues employed at other institutions. They made little use of traditional library services and instead performed many traditional library functions internally. Although the basic science researchers expressed a positive attitude toward the library, they did not view its resources or services as integral to their work. To maximize their use by researchers, library resources must be accessible via departmental websites. Use of library services may be increased by cultivating relationships with key departmental administrative personnel. Despite their self-sufficiency, subjects expressed a desire for centralized information about ongoing research on campus and shared resources, suggesting a role for the library in creating and managing an institutional repository.
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-customized search services, and a single-machine web crawler cannot handle such a difficult task. In this paper, through study of the original Scrapy framework, we improve it by combining Scrapy and Redis: a distributed crawler system for Web information based on the Scrapy framework is designed and implemented, and the Bloom Filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information crawled from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that a distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
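A minimal sketch of the Bloom-filter idea described, written as a Scrapy dupefilter. The bit-array size, hash count, and URL-only fingerprint are illustrative choices under stated assumptions, not the paper's code.

```python
# Minimal sketch of a Bloom-filter dupefilter for Scrapy, in the spirit
# of the modification described. Sizing (m bits, k hashes) and the
# URL-only fingerprint are illustrative, not the paper's implementation.
import hashlib
from scrapy.dupefilters import BaseDupeFilter

class BloomDupeFilter(BaseDupeFilter):
    def __init__(self, m_bits=1 << 24, k=7):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def _positions(self, url):
        # double hashing: h1 + i*h2 simulates k independent hash functions
        d = hashlib.sha256(url.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def request_seen(self, request):
        pos = self._positions(request.url)
        seen = all(self.bits[p >> 3] & (1 << (p & 7)) for p in pos)
        for p in pos:                      # mark the URL as seen either way
            self.bits[p >> 3] |= 1 << (p & 7)
        return seen

# settings.py (hypothetical project):
# DUPEFILTER_CLASS = "myproject.filters.BloomDupeFilter"
```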
19 CFR 162.1-162.2 - [Reserved
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 2, revised as of 2010-04-01. §§ 162.1-162.2 [Reserved]. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Inspection, Search, and Seizure; Inspection, Examination, and Search.
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Lobas, Anna A.; Levitsky, Lev I.; Moshkovskii, Sergei A.; Gorshkov, Mikhail V.
2018-02-01
In a proteogenomic approach based on tandem mass spectrometry analysis of proteolytic peptide mixtures, customized exome or RNA-seq databases are employed for identifying protein sequence variants. However, the problem of variant peptide identification without personalized genomic data is important for a variety of applications. Following the recent proposal by Chick et al. (Nat. Biotechnol. 33, 743-749, 2015) on the feasibility of such variant peptide searches, we evaluated two available approaches based on the previously suggested "open" search and the "brute-force" strategy. To improve the efficiency of these approaches, we propose an algorithm for excluding false variant identifications from the search results by analyzing modifications that mimic single amino acid substitutions. We also propose a de novo based scoring scheme for assessing identified point mutations: the search engine analyzes y-type fragment ions in MS/MS spectra to confirm the location of the mutation in the variant peptide sequence.
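The y-ion check described can be sketched as follows: generate the singly charged y-ion series for a peptide and list which ions span a candidate substitution site. Masses are standard monoisotopic residue masses; charge states above 1+, modifications, and mass tolerances are not modeled.

```python
# Hedged sketch of the y-ion localization idea: singly charged y-ion m/z
# values for a peptide, plus the indices of y-ions that contain a given
# residue position (e.g., a candidate amino acid substitution site).
MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
H2O, PROTON = 18.010565, 1.007276

def y_ions(peptide):
    """y1..y(n-1) m/z values (C-terminal fragments, charge 1+)."""
    mzs, total = [], H2O + PROTON
    for res in reversed(peptide[1:]):   # y(n) equals the precursor, skip it
        total += MONO[res]
        mzs.append(total)               # mzs[i-1] is y_i
    return mzs

def ions_covering_site(peptide, site):
    """Indices i of y_i ions that contain residue `site` (0-based)."""
    n = len(peptide)
    return [i for i in range(1, n) if n - i <= site]

pep = "ELVISLIVEK"
print([round(m, 3) for m in y_ions(pep)[:3]])
print("y-ions covering a substitution at index 5:", ions_covering_site(pep, 5))
```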
1986-09-01
OCR fragment; recoverable content: "...customers. The article states that in response to a White House Office of Consumer Affairs study and with the wide use of minicomputers, companies are..." Measurement of Civil Engineering Customer Satisfaction in Tactical Air Command: A Prototype Evaluation Program. Air Force Institute of Technology, Wright-Patterson AFB, OH.
A Customizable Dashboarding System for Watershed Model Interpretation
NASA Astrophysics Data System (ADS)
Easton, Z. M.; Collick, A.; Wagena, M. B.; Sommerlot, A.; Fuka, D.
2017-12-01
Stakeholders, including policymakers, agricultural water managers, and small farm managers, can benefit from the outputs of commonly run watershed models. However, the information that each stakeholder needs is different. While policymakers are often interested in the broader effects that small-farm management may have on a watershed during extreme events or over long periods, farmers are often interested in field-specific effects at daily or seasonal timescales. To provide stakeholders with the ability to analyze and interpret data from large-scale watershed models, we have developed a framework that supports custom exploration of the large datasets produced. For the volume of data produced by these models, SQL-based data queries are not efficient; thus, we employ a "Not Only SQL" (NoSQL) approach, which allows data to scale in both quantity and query volume. We demonstrate a stakeholder-customizable dashboarding system that allows stakeholders to create custom "dashboards" summarizing model output specific to their needs. Dashboarding provides the dynamic, purpose-based visual interface needed to display one-to-many database linkages, so that information can be presented for a single time period or dynamically monitored over time, and it allows a user to quickly define focus areas of interest for analysis. We utilize a single watershed model that is run four times daily with a combined set of climate projections; the outputs are then indexed and added to an ElasticSearch datastore. ElasticSearch is a NoSQL search engine built on top of Apache Lucene, a free and open-source information retrieval software library. Aligned with the ElasticSearch project is the open-source visualization and analysis system Kibana, which we utilize for custom stakeholder dashboarding. The dashboards create a visualization of the stakeholder-selected analysis and can be extended to recommend robust strategies to support decision-making.
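As a hedged illustration of the query layer, the sketch below issues the kind of aggregation request a Kibana dashboard sends to Elasticsearch, via the official Python client. The index name, field names, and endpoint are placeholders, not the project's actual schema.

```python
# Hedged sketch of a dashboard-style aggregation query against
# Elasticsearch. Index, fields, and endpoint are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # placeholder endpoint

resp = es.search(
    index="watershed-runs",                   # hypothetical index of model output
    query={"range": {"timestamp": {"gte": "now-30d"}}},
    aggs={
        "by_subbasin": {
            "terms": {"field": "subbasin_id", "size": 10},
            "aggs": {"mean_flow": {"avg": {"field": "streamflow_cms"}}},
        }
    },
    size=0,                                   # aggregations only, no raw hits
)
for b in resp["aggregations"]["by_subbasin"]["buckets"]:
    print(b["key"], round(b["mean_flow"]["value"], 2))
```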
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
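The abstract does not publish DTS internals, but the central idea of a user-specific utility function driving the optimization engine can be sketched as follows; the Slot type and the example utility are hypothetical.

```python
# Illustrative only: a "Just-For-You"-style ordering step, where a
# user-supplied utility function ranks candidate schedule placements.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Slot:
    start: float      # hours from epoch (hypothetical units)
    duration: float
    resource: str

def rank_slots(slots: List[Slot], utility: Callable[[Slot], float]) -> List[Slot]:
    # Order candidate placements by the user's utility, best first.
    return sorted(slots, key=utility, reverse=True)

# Example user-specific utility: prefer early starts, penalize long occupancy.
slots = [Slot(0, 4, "DSN-1"), Slot(2, 2, "DSN-2"), Slot(5, 1, "DSN-1")]
best = rank_slots(slots, lambda s: -s.start - 0.5 * s.duration)
print(best[0])
```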
Burton, R; Mauk, D
1993-03-01
By integrating customer satisfaction planning and industrial engineering techniques when examining internal costs and efficiencies, materiel managers are better able to determine which concepts will best meet their customers' needs. Defining your customers, applying industrial engineering techniques, completing work sampling studies, itemizing the recommendations and benefits of each alternative, performing feasibility and cost-analysis matrices, and monitoring productivity to track resource utilization will get you on the right path toward selecting concepts to use. This article reviews the above procedures as they applied to one hospital's decision-making process in determining whether to adopt a stockless inventory program. Through an analysis of customer demand, the hospital realized that stockless was the way to go, but not by outsourcing the function -- the hospital implemented an in-house stockless inventory program.
Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis
NASA Astrophysics Data System (ADS)
Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.
2015-09-01
Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and can identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry-standard Highcharts JavaScript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.
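As a rough stand-in for the kind of long-term QA trending the paper renders with Highcharts and AngularJS, the pandas sketch below flags departures from an instrument's rolling baseline; the file name and metric are hypothetical.

```python
# Illustrative trend monitoring over harvested QA metadata (pandas stands in
# for the paper's Highcharts/AngularJS stack; columns are hypothetical).
import pandas as pd

qa = (pd.read_csv("odi_qa_metadata.csv", parse_dates=["obs_date"])
        .set_index("obs_date")
        .sort_index())

seeing = qa["seeing_fwhm_arcsec"]
baseline = seeing.rolling("90D").median()   # long-term instrument behavior
spread = seeing.rolling("90D").std()
outliers = seeing[(seeing - baseline).abs() > 3 * spread]  # areas of concern
print(outliers.tail())
```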
Open courses: one view of the future of online education.
Alemi, Farrokh; Maddox, P J
2008-01-01
Open courses provide the entire course (lectures, assignments, syllabus, students' discussions, and students' projects) online without revealing students' personal information. We report on our experience in managing 8 open online courses at http://nhs.georgetown.edu/open. Open courses have several advantages over password-protected courses: (1) they are available through search engines and thus reduce the program's marketing cost, (2) continuous feedback from the web enables rapid improvements to the course, and (3) customer relationship tools, tied to open courses, radically reduce faculty time spent on one-on-one emails while increasing student/faculty interaction. We provide details of one course. In 15 weeks, 803 emails were received and 1181 sent by the faculty (all within 6% of a working week and 82% savings of faculty time). We show how open courses can be accessed through search engines, how students' questions are answered on the web, and how student projects on popular sites such as YouTube and Facebook improve course marketing. The paper reports that student satisfaction with three open online courses delivered over several semesters was high.
SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.
2004-01-01
SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of Space Shuttle Columbia's accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.
A Practical Guide for Managing Customer Service in Base Civil Engineering.
1988-04-01
Improving Customer Service in Base Civil Engineering. Step One: Evaluate Present Service Quality; Step Two: Develop and Clarify a... cross-sectional viewpoint. In chapter three, specific steps will be presented for managers to evaluate and improve the present level of service quality in... customer service in base civil engineering, or any other organization for that matter, is to evaluate the present level of service quality (1:170). Data...
Toward Mass Customization in the Age of Information: The Case for Open Engineering Systems
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Lautenschlager, Uwe; Mistree, Farrokh
1997-01-01
In the Industrial Era, manufacturers used "dedicated" engineering systems to mass produce their products. In today's increasingly competitive markets, the trend is toward mass customization, something that becomes increasingly feasible when modern information technologies are used to create open engineering systems. Our focus is on how designers can provide enhanced product flexibility and variety (if not fully customized products) through the development of open engineering systems. After presenting several industrial examples, we anchor our new systems philosophy with two real engineering applications. We believe that manufacturers who adopt open systems will achieve competitive advantage in the Information Age.
Science and Engineering Education : Who is the Customer?
2012-05-30
business relationships are at the heart of the negative consequences of misidentifying the student as customer [8]. Student evaluations of teachers are... Journal of Education Management, 8, 29-36. Scott, S.V. (1999) The academic as service provider: is the customer 'always right'? Journal of... Author: Michael Courtney
A Measurement of Civil Engineering Customer Satisfaction.
1987-09-01
to best represent civil engineering customers: military building managers, civilian building managers, and field grade officers. Building managers... not know how well they are meeting the expectations of their customers. In their book on service management, Albrecht and Zemke fault American... Austin provide the simplest definition of a customer -- one who pays the bills (2:45). In his book on service management, Richard Normann labels the...
A Quantitative Approach to Assessing System Evolvability
NASA Technical Reports Server (NTRS)
Christian, John A., III
2004-01-01
When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.
Reducing Information Overload in Large Seismic Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.
2000-08-02
Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g., single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important for bridging gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program web site. As part of the research into how to winnow the reference events in these large reconciled event sets, additional database query approaches have been developed to provide windows into these datasets. These custom-built content analysis tools help identify dataset characteristics that can potentially provide a basis for comparing similar reference events in these large reconciled event sets. Once these characteristics are identified, algorithms can be developed to create and add to the reduced set of events used by the Event Search Engine. These content analysis tools have already been useful in providing information on station coverage of the referenced events and basic statistical information on events in the research datasets. The tools can also provide researchers with a quick way to find interesting and useful events within the research datasets. The tools could also be used as a means to review reference event datasets as part of a dataset delivery verification process. There has also been an effort to explore the usefulness of commercially available web-based software to help with this problem. The advantages of using off-the-shelf software applications, such as Oracle's WebDB, to manipulate, customize, and manage research data are being investigated. These types of applications are being examined to provide access to large integrated data sets for regional seismic research in Asia. All of these software tools would provide the researcher with unprecedented power without having to learn the intricacies and complexities of relational database systems.
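A minimal stand-in for the dendrogram tool's clustering step, using SciPy's hierarchical clustering rather than MatSeis; the correlation matrix is synthetic and the clustering threshold is arbitrary.

```python
# Hierarchical clustering of events from a pairwise waveform correlation
# matrix (synthetic here), as the dendrogram tool does from real waveforms.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n = 8
corr = rng.uniform(0.2, 1.0, (n, n))
corr = (corr + corr.T) / 2          # symmetrize
np.fill_diagonal(corr, 1.0)

dist = squareform(1.0 - corr, checks=False)   # correlation -> distance
Z = linkage(dist, method="complete")          # or "single", "weighted", ...
clusters = fcluster(Z, t=0.4, criterion="distance")
print(clusters)   # event-cluster labels: candidates for the reference set
```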
Fault Diagnosis of Demountable Disk-Drum Aero-Engine Rotor Using Customized Multiwavelet Method.
Chen, Jinglong; Wang, Yu; He, Zhengjia; Wang, Xiaodong
2015-10-23
The demountable disk-drum aero-engine rotor is an important piece of equipment that greatly affects the safe operation of aircraft. However, assembly looseness or crack faults have led to several unscheduled breakdowns and serious accidents. Thus, condition monitoring and fault diagnosis techniques are required for identifying abnormal conditions. A customized ensemble multiwavelet method for aero-engine rotor condition identification, using measured vibration data, is developed in this paper. First, a customized multiwavelet basis function with strong adaptivity is constructed via a symmetric multiwavelet lifting scheme. Then the vibration signal is processed by the customized ensemble multiwavelet transform. Next, the normalized information entropy of the multiwavelet decomposition coefficients is computed to directly reflect and evaluate the condition. The proposed approach is first applied to fault detection on an experimental aero-engine rotor. Finally, the approach is used in an engineering application, where it successfully identified the crack fault of a demountable disk-drum aero-engine rotor. The results show that the proposed method possesses excellent performance in fault detection for aero-engine rotors. Moreover, the robustness of the multiwavelet method against noise is tested and verified by simulation and field experiments.
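The customized ensemble multiwavelet transform itself is not reproduced here, but the normalized-information-entropy indicator can be sketched with an ordinary discrete wavelet transform (PyWavelets) standing in for it; the wavelet, decomposition level, and test signals are illustrative assumptions.

```python
# Normalized Shannon entropy of the energy distribution over wavelet bands,
# used as a condition indicator (standard DWT stands in for the paper's
# customized ensemble multiwavelet transform).
import numpy as np
import pywt

def normalized_entropy(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()          # energy distribution over bands
    h = -np.sum(p * np.log(p + 1e-12))     # Shannon entropy
    return h / np.log(len(p))              # normalized to [0, 1]

t = np.linspace(0, 1, 4096)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.5 * (np.sin(2 * np.pi * 50 * t) > 0.99)  # impact-like spikes
print(normalized_entropy(healthy), normalized_entropy(faulty))
```

Wideband fault content spreads signal energy across more bands, so the faulty signal yields a higher normalized entropy than the healthy one.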
76 FR 62387 - Public User ID Badging
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... additional information regarding online access cards or user training should be directed to Douglas Salser... issues online access cards to customers who wish to use the electronic search systems at the Public Search Facility. Customers may obtain an online access card by completing the application at the Public...
Chang, Chia-Chi; Chen, Hui-Yun
2009-02-01
Mass customization is a strategy that has been adopted by companies to tailor their products in order to match customer needs more precisely. Therefore, to fully capture the value of mass customization, it is crucial to explore how customers react to mass customization. In previous studies, an implied premise has been that consumers are keen to embrace customized products, and this assumption has also been treated by firms as a prerequisite for successful mass customization strategies. However, an undesirable complexity may result from difficult configuration processes that may intimidate and confuse some customers. Hence, this study explores strategies that marketers can employ to facilitate the customization process. Specifically, this study investigates how to enhance customer satisfaction and purchase decision toward customized products by providing cues compatible with the product category. It is hypothesized that for search products, customers rely more on intrinsic cues when making configuration decisions. On the other hand, for experience products, customers perceive extrinsic cues to be more valuable in assisting them to make configuration decisions. The results suggest that consumers tend to respond more favorably toward customized search products when intrinsic cues are provided than when extrinsic or irrelevant ones are provided. In contrast, when customizing experience products, customers tend to depend more on extrinsic cues than on intrinsic or irrelevant ones.
Review: Polymeric-Based 3D Printing for Tissue Engineering.
Wu, Geng-Hsi; Hsu, Shan-Hui
Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.
A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine
NASA Astrophysics Data System (ADS)
Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu
In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. A selective meta-search engine needs a method for selecting the search engines appropriate to a user's query. Most existing methods use statistical data such as document frequency, and they may select inappropriate search engines when a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as a source description of that engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection that considers the relationships between terms, and it overcomes the problems caused by polysemous words. Further, our method does not require a centralized broker maintaining data, such as document frequency, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method compared to existing methods.
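A toy sketch of the idea: summarize each engine's corpus by term co-occurrence and route a query to the engine where its terms are most strongly associated. The corpora and scoring below are illustrative and far simpler than the paper's thesaurus construction.

```python
# Disambiguating a polysemous query term via per-engine co-occurrence.
from collections import Counter
from itertools import combinations

engines = {   # hypothetical per-engine document samples
    "finance": ["bank interest loan", "bank credit loan rate"],
    "geo": ["river bank erosion", "river bank sediment flow"],
}

def cooccurrence(docs):
    pairs = Counter()
    for doc in docs:
        terms = sorted(set(doc.split()))
        pairs.update(combinations(terms, 2))
    return pairs

summaries = {name: cooccurrence(docs) for name, docs in engines.items()}

def select_engine(term_a, term_b):
    key = tuple(sorted((term_a, term_b)))
    return max(summaries, key=lambda name: summaries[name][key])

print(select_engine("bank", "river"))   # -> "geo": context disambiguates "bank"
print(select_engine("bank", "loan"))    # -> "finance"
```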
2013-06-01
U.S. Army Corps of Engineers: Building Overhead Costs into Projects and Customers' Views on Information Provided. Why GAO Did This Study: The Corps spends billions of dollars annually on projects in its Civil Works...
Reputation Management and Content Control: An Analysis of Radiation Oncologists' Digital Identities.
Prabhu, Arpan V; Kim, Christopher; De Guzman, Eison; Zhao, Eric; Madill, Evan; Cohen, Jonathan; Hansberry, David R; Agarwal, Nitin; Heron, Dwight E; Beriwal, Sushil
2017-12-01
Google is the most popular search engine in the United States, and patients are increasingly relying on online webpages to seek information about individual physicians. This study aims to characterize what patients find when they search for radiation oncologists online. The Centers for Medicare and Medicaid Services (CMS) Physician Comparable Downloadable File was used to identify all Medicare-participating radiation oncologists in the United States and Puerto Rico. Each radiation oncologist was characterized by medical school education, year of graduation, city of practice, gender, and affiliation with an academic institution. Using a custom Google-based search engine, up to the top 10 search results for each physician were extracted and categorized as relating to: (1) physician, hospital, or health care system; (2) third-party; (3) social media; (4) academic journal articles; or (5) other. Among all health care providers in the United States within CMS, 4443 self-identified as being radiation oncologists and yielded 40,764 search results. Of those, 1161 (26.1%) and 3282 (73.9%) were classified as academic and nonacademic radiation oncologists, respectively. At least 1 search result was obtained for 4398 physicians (99.0%). Physician, hospital, and health care-controlled websites (16,006; 39.3%) and third-party websites (10,494; 25.7%) were the 2 most often observed domain types. Social media platforms accounted for 2729 (6.7%) hits, and peer-reviewed academic journal websites accounted for 1397 (3.4%) results. About 6.8% and 6.7% of the top 10 links were social media websites for academic and nonacademic radiation oncologists, respectively. Most radiation oncologists lack self-controlled online content when patients search within the first page of Google search results. With the strong presence of third-party websites and lack of social media, opportunities exist for radiation oncologists to increase their online presence to improve patient-provider communication and better the image of the overall field. We discuss strategies to improve online visibility. Copyright © 2017 Elsevier Inc. All rights reserved.
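The study's extraction step can be approximated with the public Google Programmable Search (Custom Search) JSON API; the key/cx values are placeholders, and the domain lists below are a simplified stand-in for the study's categorization rules.

```python
# Pull up to the top 10 results for a physician's name and bucket the domains.
import requests

API = "https://www.googleapis.com/customsearch/v1"

def top_results(name, key, cx, n=10):
    params = {"key": key, "cx": cx, "q": name, "num": n}
    items = requests.get(API, params=params).json().get("items", [])
    return [item["link"] for item in items]

SOCIAL = ("linkedin.com", "twitter.com", "facebook.com", "youtube.com")
THIRD_PARTY = ("healthgrades.com", "vitals.com", "ratemds.com")  # examples

def categorize(url):
    if any(s in url for s in SOCIAL):
        return "social media"
    if any(s in url for s in THIRD_PARTY):
        return "third party"
    return "physician/hospital/other"

for url in top_results("Jane Doe radiation oncologist", "KEY", "CX"):
    print(categorize(url), url)
```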
Influence of End Customer Exposure on Product Design within an Epistemic Game Environment
ERIC Educational Resources Information Center
Markovetz, Matthew R.; Clark, Renee M.; Swiecki, Zachari; Irgens, Golnaz Arastoopour; Chesler, Naomi C.; Shaffer, David W.; Bodnar, Cheryl A.
2017-01-01
Engineering product design requires both technical aptitude and an understanding of the nontechnical requirements in the marketplace, economic or otherwise. Engineering education has long focused on the technical side of product design, but there is increasing demand for market-aware engineers in industry. Market-awareness and customer-focus are…
ERIC Educational Resources Information Center
Bi, Youyi
2017-01-01
Human-centered design requires thorough understanding of people (e.g. customers, designers, engineers) in order to better satisfy the needs and expectations of all stakeholders in the design process. Designers are able to create better products by incorporating customers' subjective evaluations on products. Engineers can also build better tools…
Web Feet Guide to Search Engines: Finding It on the Net.
ERIC Educational Resources Information Center
Web Feet, 2001
2001-01-01
This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)
A Discussion of the Software Quality Assurance Role
NASA Technical Reports Server (NTRS)
Kandt, Ronald Kirk
2010-01-01
The basic idea underlying this paper is that the conventional understanding of the role of a Software Quality Assurance (SQA) engineer is unduly limited. This is because few have asked who the customers of a SQA engineer are. Once you do this, you can better define what tasks a SQA engineer should perform, as well as identify the knowledge and skills that such a person should have. The consequence of doing this is that a SQA engineer can provide greater value to his or her customers. It is the position of this paper that a SQA engineer providing significant value to his or her customers must not only assume the role of an auditor, but also that of a software and systems engineer. This is because software engineers and their managers particularly value contributions that directly impact products and their development. These ideas are summarized as lessons learned, based on my experience at Jet Propulsion Laboratory (JPL).
Improving Customer Satisfaction in an R and D Environment
NASA Technical Reports Server (NTRS)
Alexander, Anita; Liou, Y. H. Andrew
1998-01-01
Satisfying customer needs is critical to the sustained competitive advantage of service suppliers. It is therefore important to understand the types of customer needs which, if fulfilled or exceeded, add value and contribute to overall customer satisfaction. This study identifies the needs of various research and development (R&D) customers who contract for engineering and design support services. The Quality Function Deployment (QFD) process was used to organize and translate each customer need into performance measures that, if implemented, can improve customer satisfaction. This study also provides specific performance measures that will more accurately guide the efforts of the engineering supplier. These organizations can either implement the QFD methodology presented herein or extract a few performance measures that are specific to the quality dimensions in need of improvement. Listening to what customers talk about is a good first step.
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-04-01
How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added online services. Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was adopted more frequently than manual customization by users. Users who did customize did so for My E-Resources and Stay Current more often than for Quick Search and File Cabinet; most of those who customized did so only once. Users did not always take advantage of the services designed to aid their library research experiences. When personalization was available at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case.
Integrating GIS, Archeology, and the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sera White; Brenda Ringe Pace; Randy Lee
2004-08-01
At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT's value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.
Judicious use of custom development in an open source component architecture
NASA Astrophysics Data System (ADS)
Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.
2014-12-01
Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
Ilic, D; Bessell, T L; Silagy, C A; Green, S
2003-03-01
The Internet provides consumers with access to online health information; however, identifying relevant and valid information can be problematic. Our objectives were firstly to investigate the efficiency of search-engines, and then to assess the quality of online information pertaining to androgen deficiency in the ageing male (ADAM). Keyword searches were performed on nine search-engines (four general and five medical) to identify website information regarding ADAM. Search-engine efficiency was compared by percentage of relevant websites obtained via each search-engine. The quality of information published on each website was assessed using the DISCERN rating tool. Of 4927 websites searched, 47 (1.44%) and 10 (0.60%) relevant websites were identified by general and medical search-engines respectively. The overall quality of online information on ADAM was poor. The quality of websites retrieved using medical search-engines did not differ significantly from those retrieved by general search-engines. Despite the poor quality of online information relating to ADAM, it is evident that medical search-engines are no better than general search-engines in sourcing consumer information relevant to ADAM.
Engineering performance metrics
NASA Astrophysics Data System (ADS)
Delozier, R.; Snyder, N.
1993-03-01
Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics to serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics that would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort are explored in this paper; they may provide a starting point for other large engineering organizations seeking to institute a performance measurement system. To facilitate the effort, the chartered team consisted of customers and Engineering staff members, ensuring that the needs and views of the customers were considered in developing the performance measurements. The development of a system of metrics is no different from the development of any other type of system: it includes defining performance measurement requirements, conceptual design of the measurement process, detailed design of the performance measurement and reporting system, and system implementation and integration.
Next-Generation Search Engines for Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Hook, Leslie A; Palanisamy, Giri
In recent years, there have been significant advancements in the areas of scientific data management and retrieval techniques, particularly in terms of standards and protocols for archiving data and metadata. Scientific data is rich and spread across different places. In order to integrate these pieces together, a data archive and associated metadata should be generated. Data should be stored in a format that can be retrieved and, more importantly, that will continue to be accessible as technology changes, such as XML. While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but they can be slow, and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. One such system is Mercury, a metadata harvesting, data discovery, and access system built for researchers to search for, share, and obtain spatiotemporal data used across a range of climate and ecological sciences. Mercury is an open-source toolset; its backend is built on Java, and its search capability is supported by popular open-source search libraries such as Solr and Lucene. Mercury harvests structured metadata and key data from several data-providing servers around the world and builds a centralized index. The harvested records are indexed consistently through the Solr search API, so that Mercury can offer simple, fielded, spatial, and temporal searches across a span of projects ranging over land, atmosphere, and ocean ecology. Mercury also provides data-sharing capabilities using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In this paper we discuss best practices for archiving data and metadata, new searching techniques, efficient ways of data retrieval, and information display.
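A minimal sketch of the OAI-PMH harvesting loop that Mercury-style systems rely on, using only the protocol's standard ListRecords/resumptionToken flow; the endpoint URL is a placeholder, and handing records to the Solr indexer is left as a stub.

```python
# Harvest all records from an OAI-PMH provider, following resumption tokens.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
endpoint = "https://example.org/oai"   # hypothetical data provider

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
while True:
    root = ET.fromstring(requests.get(endpoint, params=params).content)
    for rec in root.iter(f"{OAI}record"):
        ident = rec.find(f"{OAI}header/{OAI}identifier")
        print(ident.text)              # index this record's metadata here
    token = root.find(f".//{OAI}resumptionToken")
    if token is None or not (token.text or "").strip():
        break                          # harvest complete
    params = {"verb": "ListRecords", "resumptionToken": token.text}
print("harvest finished; hand records to the Solr indexer")
```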
A Literature Synthesis of Health Promotion Research in Salons and Barbershops
Linnan, Laura A.; D’Angelo, Heather; Harrington, Cherise B.
2015-01-01
Context: Barbershops and beauty salons are located in all communities and frequented by diverse groups of people, making them key settings for addressing health disparities. No studies have reviewed the growing body of literature describing studies promoting health in these settings. This review summarized the literature related to promoting health within barbershops and beauty salons to inform future approaches that target diverse populations in similar settings. Evidence Acquisition: We identified and reviewed published research articles describing formative research, recruitment, and health-related interventions set in beauty salons and barbershops. PubMed and other secondary search engines were searched in 2010 and again in 2013 for English-language papers indexed from 1990 through August 2013. The search yielded 110 articles, 68 of which were fully reviewed, and 54 were eligible for inclusion. Evidence Synthesis: Included articles were categorized as formative research (n=27), recruitment (n=7), or intervention (n=20). Formative research studies showed that owners, barbers/stylists, and their customers were willing participants, establishing the feasibility of promoting health in these settings. Recruitment studies demonstrated that salon/shop owners will join research studies and can enroll customers. Among intervention studies, the level of stylist/barber involvement was categorized. Overall, 73.3% of intervention studies demonstrated statistically significant results, targeting mostly racial/ethnic minority groups and focusing on a variety of health topics. Conclusions: Barbershops and beauty salons are promising settings for reaching populations most at risk for health disparities. Although these results are encouraging, more rigorous research and evaluation of future salon- and barbershop-based interventions are needed. PMID:24768037
Park, So-Hyun; Ham, Sunny; Lee, Min-A
2012-10-01
Quality function deployment (QFD) is a product development technique that translates customer requirements into activities for the development of products and services. This study utilizes QFD to identify American customers' requirements for bulgogi, a popular Korean dish among international customers, and how to fulfill those requirements. A customer survey and an expert opinion survey were conducted with US customers. The top five customer requirements for bulgogi were identified as taste, freshness, flavor, tenderness, and juiciness; after the requirement weights were calculated, ease of purchase took the place of tenderness. Eighteen engineering characteristics were developed, with 'localization of the bulgogi menu' strongly related to the other characteristics. Calculating the relative importance of the engineering characteristics identified 'control of marinating time', 'localization of the bulgogi menu', 'improvement of the cooking and serving process', 'development of recipes by parts of beef', and 'use of various seasonings' as the highest contributors to the overall improvement of bulgogi. The relative importance of the engineering characteristics, correlations, and technical difficulties are ranked and integrated to develop the most effective strategy. The findings are discussed relative to industry implications. Copyright © 2012 Elsevier Ltd. All rights reserved.
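The core QFD arithmetic referenced here (requirement weights pushed through the relationship matrix to rank engineering characteristics) can be sketched in a few lines; the weights and matrix below are illustrative, not the study's data.

```python
# Standard QFD importance calculation: importance_j = sum_i w_i * R[i, j].
import numpy as np

requirements = ["taste", "freshness", "flavor", "ease of purchase", "juiciness"]
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # hypothetical weights

characteristics = ["marinating time", "menu localization", "serving process"]
# Relationship strengths (rows: requirements, cols: characteristics),
# on the usual 9/3/1/0 QFD scale (hypothetical values).
R = np.array([[9, 3, 1],
              [3, 0, 9],
              [9, 3, 0],
              [0, 9, 3],
              [1, 0, 9]])

importance = weights @ R
for name, score in zip(characteristics, importance / importance.sum()):
    print(f"{name}: {score:.2%}")
```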
A Planning Approach of Engineering Characteristics Based on QFD-TRIZ Integrated
NASA Astrophysics Data System (ADS)
Liu, Shang; Shi, Dongyan; Zhang, Ying
The traditional QFD planning method compromises between contradictory engineering characteristics to achieve higher customer satisfaction. However, this compromise trade-off cannot eliminate the contradictions existing among the engineering characteristics, which limits the overall customer satisfaction. QFD (Quality Function Deployment) integrated with TRIZ (the Russian acronym for the Theory of Inventive Problem Solving) has recently become an active research topic, because TRIZ can be used to resolve the contradictions between engineering characteristics that form the roof of the HOQ (House of Quality). However, the traditional QFD planning approach is not suitable for QFD integrated with TRIZ, because TRIZ requires emphasizing the contradictions between engineering characteristics at the problem definition stage instead of compromising among them. Therefore, a new planning approach based on QFD/TRIZ integration is proposed in this paper, which considers the correlation matrix of engineering characteristics and customer satisfaction on the basis of cost. The proposed approach suggests that TRIZ first be applied to resolve contradictions; second, the correlation matrix of engineering characteristics is amended; next, the IFR (ideal final result) is validated; and then planning is executed. An example is used to illustrate the proposed approach. The application indicates that higher customer satisfaction can be achieved and that the contradictions between the characteristic parameters are eliminated.
Adjacency and Proximity Searching in the Science Citation Index and Google
2005-01-01
major database search engines, including commercial S&T database search engines (e.g., Science Citation Index (SCI), Engineering Compendex (EC)... PubMed, OVID), Federal agency award database search engines (e.g., NSF, NIH, DOE, EPA, as accessed in Federal R&D Project Summaries), and Web search engines (e.g.... searching. Some database search engines allow strict constrained co-occurrence searching as a user option (e.g., OVID, EC), while others do not (e.g., SCI...
Achieving Sub-Second Search in the CMR
NASA Astrophysics Data System (ADS)
Gilman, J.; Baynes, K.; Pilone, D.; Mitchell, A. E.; Murphy, K. J.
2014-12-01
The Common Metadata Repository (CMR) is the next generation Earth Science metadata catalog for NASA's Earth Observing data. It joins together the holdings from the EOS Clearing House (ECHO) and the Global Change Master Directory (GCMD), creating a unified, authoritative source for EOSDIS metadata. The CMR allows ingest in many different formats while providing consistent search behavior and retrieval in any supported format. Performance is a critical component of the CMR, ensuring improved data discovery and client interactivity. The CMR delivers sub-second search performance for any of the common query conditions (including spatial) across hundreds of millions of metadata granules. It also allows the addition of new metadata concepts such as visualizations, parameter metadata, and documentation. The CMR's goals presented many challenges. This talk will describe the CMR architecture, design, and innovations that were made to achieve its goals, including:
* Architectural features like immutability and backpressure.
* Data management techniques such as caching and parallel loading that give big performance gains.
* Open source and COTS tools like the Elasticsearch search engine.
* Adoption of Clojure, a functional programming language for the Java Virtual Machine.
* Development of a custom spatial search plugin for Elasticsearch and why it was necessary.
* Introduction of a unified model for metadata that maps every supported metadata format to a consistent domain model.
Coping with Variability in Model-Based Systems Engineering: An Experience in Green Energy
NASA Astrophysics Data System (ADS)
Trujillo, Salvador; Garate, Jose Miguel; Lopez-Herrejon, Roberto Erick; Mendialdua, Xabier; Rosado, Albert; Egyed, Alexander; Krueger, Charles W.; de Sosa, Josune
Model-Based Systems Engineering (MBSE) is an emerging engineering discipline whose driving motivation is to provide support throughout the entire system life cycle. MBSE not only addresses the engineering of software systems but also their interplay with physical systems. Quite frequently, successful systems need to be customized to cater for the concrete and specific needs of customers, end-users, and other stakeholders. To effectively meet this demand, it is vital to have in place mechanisms to cope with the variability, the capacity to change, that such customization requires. In this paper we describe our experience in modeling variability using SysML, a leading MBSE language, for developing a product line of wind turbine systems used for the generation of electricity.
Start Your Engines: Surfing with Search Engines for Kids.
ERIC Educational Resources Information Center
Byerly, Greg; Brodie, Carolyn S.
1999-01-01
Suggests that to be an effective educator and user of the Web it is essential to know the basics about search engines. Presents tips for using search engines. Describes several search engines for children and young adults, as well as some general filtered search engines for children. (AEF)
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false [Reserved] 162.41 Section 162.41 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Treatment of Seized Merchandise § 162.41 [Reserved] ...
A New Chemistry Course for Non-Chemistry Majors.
ERIC Educational Resources Information Center
Ariel, Magda; And Others
1982-01-01
A two-semester basic chemistry course for nonchemistry engineering majors is described. First semester provides introductory chemistry for freshmen while second semester is "customer-oriented," based on a departmental choice of three out of six independent modules. For example, aeronautical engineering "customers" would select…
NASA Astrophysics Data System (ADS)
Radha, J.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
A group-arrival feedback retrial queue with k optional stages of service and an orbital search policy is studied. If an arriving group of customers finds the server free, one member of the group enters the first stage of service and the rest of the group join the orbit. After completing the i-th stage of service, the customer under service may choose the (i+1)-th stage with probability $\theta_i$, may rejoin the orbit as a feedback customer with probability $p_i$, or may leave the system with probability
$$q_i = \begin{cases} 1 - p_i - \theta_i, & i = 1, 2, \ldots, k-1 \\ 1 - p_i, & i = k. \end{cases}$$
The busy server may break down due to the arrival of negative customers, in which case the service channel fails for a short interval of time. At the completion of service or repair, the server searches for a customer in the orbit (if any) with probability $\alpha$ or remains idle with probability $1-\alpha$. Using the supplementary variable method, the steady-state probability generating function for the system size and several system performance measures are derived.
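A quick Monte Carlo check of the reconstructed routing probabilities: after stage i a customer proceeds with probability $\theta_i$, feeds back to the orbit with probability $p_i$, or departs with probability $q_i$. The parameter values are arbitrary examples, not the paper's.

```python
# Simulate one customer's route through the k optional service stages and
# check the empirical split against the q_i / p_i algebra above.
import random
from collections import Counter

k = 3
theta = [0.3, 0.2, 0.0]   # theta_k = 0: there is no stage beyond k
p = [0.1, 0.1, 0.2]       # feedback-to-orbit probabilities

def one_pass():
    i = 0
    while True:
        u = random.random()
        if i < k - 1 and u < theta[i]:
            i += 1                      # proceed to stage i+1
        elif u < theta[i] + p[i]:
            return "feedback to orbit"
        else:
            return "departs"            # probability q_i

counts = Counter(one_pass() for _ in range(100_000))
print(counts)
```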
19 CFR 162.70 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Applicability. 162.70 Section 162.70 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain Violations § 162.70 Applicability...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Definitions. 162.71 Section 162.71 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain Violations § 162.71 Definitions...
Adding a visualization feature to web search engines: it's time.
Wong, Pak Chung
2008-01-01
It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.
Introducing Products to DoD Using Specifications and Standards
2011-08-18
to utilize the Product Introduction Tool... Identify Categories/Subcategories: identify the category/subcategory that most closely covers your...
ERIC Educational Resources Information Center
Garman, Nancy
1999-01-01
Describes common options and features to consider in evaluating which meta search engine will best meet a searcher's needs. Discusses number and names of engines searched; other sources and specialty engines; search queries; other search options; and results options. (AEF)
DOT National Transportation Integrated Search
2000-03-01
The Customs Service faces a major challenge in effectively carrying out its drug interdiction and trade enforcement missions while facilitating the flow of cargo and persons into the United States. To carry out its mission, Customs inspectors are aut...
19 CFR 162.63 - Arrests and seizures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Arrests and seizures. 162.63 Section 162.63 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Controlled Substances, Narcotics, and Marihuana § 162...
19 CFR 162.63 - Arrests and seizures.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 2 2011-04-01 2011-04-01 false Arrests and seizures. 162.63 Section 162.63 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Controlled Substances, Narcotics, and Marihuana § 162...
19 CFR 162.61 - Importing and exporting controlled substances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Importing and exporting controlled substances. 162.61 Section 162.61 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Controlled Substances, Narcotics, and...
19 CFR 162.78 - Presentations responding to prepenalty notice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Presentations responding to prepenalty notice. 162.78 Section 162.78 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain...
19 CFR 162.79 - Determination as to violation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Determination as to violation. 162.79 Section 162.79 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain Violations...
19 CFR 162.74 - Prior disclosure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Prior disclosure. 162.74 Section 162.74 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain Violations § 162.74 Prior...
Helping Students Choose Tools To Search the Web.
ERIC Educational Resources Information Center
Cohen, Laura B.; Jacobson, Trudi E.
2000-01-01
Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…
Grokker, KartOO, Addict-o-Matic and More: Really Different Search Engines
ERIC Educational Resources Information Center
Descy, Don E.
2009-01-01
There are hundreds of unique search engines in the United States and thousands of unique search engines around the world. If people get into search engines designed just to search particular web sites, the number is in the hundreds of thousands. This article looks at: (1) clustering search engines, such as KartOO (www.kartoo.com) and Grokker…
15 CFR 760.3 - Exceptions to prohibitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., a U.S. contractor building an industrial facility in boycotting country Y is asked by B, a resident... agency of boycotting country Y to build a pipeline. Y requests A to suggest qualified engineering firms... conducts its operations, to identify qualified engineering firms to its customers so that its customers may...
15 CFR 760.3 - Exceptions to prohibitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., a U.S. contractor building an industrial facility in boycotting country Y is asked by B, a resident... agency of boycotting country Y to build a pipeline. Y requests A to suggest qualified engineering firms... conducts its operations, to identify qualified engineering firms to its customers so that its customers may...
Integrated resource scheduling in a distributed scheduling environment
NASA Technical Reports Server (NTRS)
Zoch, David; Hall, Gardiner
1988-01-01
The Space Station era presents a highly-complex multi-mission planning and scheduling environment exercised over a highly distributed system. In order to automate the scheduling process, customers require a mechanism for communicating their scheduling requirements to NASA. A request language that a remotely-located customer can use to specify his scheduling requirements to a NASA scheduler, thus automating the customer-scheduler interface, is described. This notation, Flexible Envelope-Request Notation (FERN), allows the user to completely specify his scheduling requirements such as resource usage, temporal constraints, and scheduling preferences and options. The FERN also contains mechanisms for representing schedule and resource availability information, which are used in the inter-scheduler inconsistency resolution process. Additionally, a scheduler is described that can accept these requests, process them, generate schedules, and return schedule and resource availability information to the requester. The Request-Oriented Scheduling Engine (ROSE) was designed to function either as an independent scheduler or as a scheduling element in a network of schedulers. When used in a network of schedulers, each ROSE communicates schedule and resource usage information to other schedulers via the FERN notation, enabling inconsistencies to be resolved between schedulers. Individual ROSE schedules are created by viewing the problem as a constraint satisfaction problem with a heuristically guided search strategy.
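A hypothetical miniature of a FERN-style request and the feasibility check a ROSE-like engine might run; the field names and logic are assumptions, since the abstract does not give the notation's actual syntax.

```python
# A toy scheduling request (resource usage, temporal constraint, preference)
# and a naive feasibility check against current resource availability.
request = {
    "activity": "downlink",
    "duration_min": 30,
    "window": ("2025-01-01T10:00", "2025-01-01T18:00"),  # temporal constraint
    "resources": {"antenna": 1, "power_kw": 2.5},         # resource usage
    "preference": "earliest",                             # scheduling option
}

available = {"antenna": 2, "power_kw": 3.0}               # current capacity

def feasible(req, avail):
    # A request fits if every resource demand is within availability.
    return all(avail.get(r, 0) >= need for r, need in req["resources"].items())

if feasible(request, available):
    print("schedule at window start per 'earliest' preference")
else:
    print("report conflict back to the requesting customer's scheduler")
```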
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time-consuming. The integration of such search tools enables the users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
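As a rough illustration of performance-based engine selection of the kind described above, the following sketch keeps a running score per (query class, engine) pair and folds user feedback back into the ranking. The engine names, query classes, and the update rule are illustrative assumptions, not MetaSEEk's actual method.

    from collections import defaultdict

    # scores[query_class][engine] accumulates observed relevance feedback.
    scores = defaultdict(lambda: defaultdict(float))

    def record_feedback(query_class, engine, relevant_clicks, shown):
        """Fold user feedback into the engine's running score."""
        if shown:
            scores[query_class][engine] += relevant_clicks / shown

    def select_engines(query_class, engines, k=2):
        """Pick the k engines with the best past performance for this class."""
        return sorted(engines, key=lambda e: scores[query_class][e], reverse=True)[:k]

    engines = ["VisualSEEk", "WebSEEk", "QBIC"]   # illustrative target engines
    record_feedback("color-texture", "WebSEEk", 3, 10)
    record_feedback("color-texture", "QBIC", 1, 10)
    print(select_engines("color-texture", engines))  # ['WebSEEk', 'QBIC']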
[Advanced online search techniques and dedicated search engines for physicians].
Nahum, Yoav
2008-02-01
In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.
19 CFR 162.80 - Liability for duties; liquidation of entries.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Liability for duties; liquidation of entries. 162.80 Section 162.80 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain...
19 CFR 162.49 - Forfeiture by court decree.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Forfeiture by court decree. 162.49 Section 162.49 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Treatment of Seized Merchandise § 162.49 Forfeiture by...
19 CFR 162.44 - Release on payment of appraised value.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Release on payment of appraised value. 162.44 Section 162.44 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Treatment of Seized Merchandise § 162...
19 CFR 162.50 - Forfeiture by court decree: Disposition.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Forfeiture by court decree: Disposition. 162.50 Section 162.50 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Treatment of Seized Merchandise § 162...
19 CFR 162.62 - Permissible controlled substances on vessels, aircraft, and individuals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Permissible controlled substances on vessels, aircraft, and individuals. 162.62 Section 162.62 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE...
19 CFR 162.47 - Claim for property subject to summary forfeiture.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Claim for property subject to summary forfeiture. 162.47 Section 162.47 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Treatment of Seized...
19 CFR 162.79a - Other notice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Other notice. 162.79a Section 162.79a Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special Procedures for Certain Violations § 162.79a Other notice...
Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.
2006-12-01
The goal for search engines is to return results that are both accurate and complete. The search engines should find only what you really want and find everything you really want. Search engines (even meta search engines) lack semantics. The basis for search is simply string matching between the user's query term and the resource database; the semantics associated with the search string are not captured. For example, if an atmospheric scientist is searching for "pressure" related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query to ensure that the search results are both accurate and complete. The domain ontologies guide the user to refine their search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator. It categorizes the search results from different online resources, such as education materials, publications, datasets and web search engines, that might be of interest to the user.
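The scoping step can be pictured as ontology-guided query expansion. The following sketch is a minimal illustration of the idea with a hand-made mini-ontology; it is not Noesis's ontology or code.

    # Hand-made mini-ontology for illustration only.
    ontology = {
        "pressure": {
            "synonyms": ["atmospheric pressure", "barometric pressure"],
            "specializations": ["sea-level pressure", "vapor pressure"],
            "generalizations": ["atmospheric state variable"],
            "related": ["geopotential height", "isobar"],
        }
    }

    def scope_query(term, include=("synonyms", "specializations")):
        """Expand a query term with the ontology neighbours the user selects."""
        entry = ontology.get(term, {})
        expansions = [t for rel in include for t in entry.get(rel, [])]
        # OR the expansions together so results stay complete but scoped.
        return " OR ".join(['"%s"' % t for t in [term] + expansions])

    print(scope_query("pressure"))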
Search Engines: Gateway to a New ``Panopticon''?
NASA Astrophysics Data System (ADS)
Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia
Nowadays, Internet users depend on various search engines to find requested information on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of the European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, will be presented as an alternative to the privacy-intrusive existing practices of search engines.
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom-designed computer to be used as a processing element in a multiprocessor-based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real-time simulations are needed for closed-loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data using high-speed data busses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom-designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W
2009-03-01
LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search engine independent score, based on FDR, which allows peptide identifications from different search engines to be combined, called the FDR Score. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
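As background for the FDR idea, the toy sketch below shows standard target-decoy FDR estimation: walk down a score-sorted list and take the decoy/target ratio at each cutoff. It illustrates the quantity the FDR Score is built on, not the authors' FDR Score algorithm itself.

    def fdr_curve(hits):
        """hits: (score, is_decoy) pairs; returns (score, estimated FDR) pairs."""
        hits = sorted(hits, key=lambda h: h[0], reverse=True)
        out, decoys, targets = [], 0, 0
        for score, is_decoy in hits:
            decoys += is_decoy
            targets += not is_decoy
            out.append((score, decoys / max(targets, 1)))
        return out

    hits = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (7.0, True)]
    for score, fdr in fdr_curve(hits):
        print("score %.1f  est. FDR %.2f" % (score, fdr))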
The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool
ERIC Educational Resources Information Center
Liaw, Shu-Sheng
2004-01-01
Search engines have been developed for helping learners to seek online information. Based on theory of planned behaviour approach, this research intends to investigate the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…
Crawling The Web for Libre: Selecting, Integrating, Extending and Releasing Open Source Software
NASA Astrophysics Data System (ADS)
Truslove, I.; Duerr, R. E.; Wilcox, H.; Savoie, M.; Lopez, L.; Brandt, M.
2012-12-01
Libre is a project developed by the National Snow and Ice Data Center (NSIDC). Libre is devoted to liberating science data from its traditional constraints of publication, location, and findability. Libre embraces and builds on the notion of making knowledge freely available, and both Creative Commons licensed content and Open Source Software are crucial building blocks for, as well as required deliverable outcomes of the project. One important aspect of the Libre project is to discover cryospheric data published on the internet without prior knowledge of the location or even existence of that data. Inspired by well-known search engines and their underlying web crawling technologies, Libre has explored tools and technologies required to build a search engine tailored to allow users to easily discover geospatial data related to the polar regions. After careful consideration, the Libre team decided to base its web crawling work on the Apache Nutch project (http://nutch.apache.org). Nutch is "an open source web-search software project" written in Java, with good documentation, a significant user base, and an active development community. Nutch was installed and configured to search for the types of data of interest, and the team created plugins to customize the default Nutch behavior to better find and categorize these data feeds. This presentation recounts the Libre team's experiences selecting, using, and extending Nutch, and working with the Nutch user and developer community. We will outline the technical and organizational challenges faced in order to release the project's software as Open Source, and detail the steps actually taken. We distill these experiences into a set of heuristics and recommendations for using, contributing to, and releasing Open Source Software.
Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.
ERIC Educational Resources Information Center
Eagan, Ann; Bender, Laura
Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…
Dynamics of a macroscopic model characterizing mutualism of search engines and web sites
NASA Astrophysics Data System (ADS)
Wang, Yuanshi; Wu, Hong
2006-05-01
We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist or grow explosively. Then by the conditions, we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines has little contribution to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
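The abstract does not reproduce the model's equations. Purely as an illustration of the obligate-mutualism structure it describes, where search engines S benefit from web sites W but cannot survive without them, a textbook-style system of this shape might be written as below; this is an assumed stand-in, not the authors' actual system.

    \[
    \frac{dS}{dt} = S\left(-d + \frac{aW}{1 + bW}\right), \qquad
    \frac{dW}{dt} = W\left(r - eW + cS\right)
    \]

The negative intrinsic rate \(-d\) makes the search engines decline in the absence of web sites, while the saturating term \(aW/(1+bW)\) caps the benefit they can draw; whether S persists or goes extinct then depends on the parameters and, in some regimes, on the initial values.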
19 CFR 162.92 - Notice of seizure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Notice of seizure. 162.92 Section 162.92 Customs... (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Civil Asset Forfeiture Reform Act § 162.92 Notice of seizure. (a) Generally. Customs will send written notice of seizure as provided in this section to all known interested...
19 CFR 162.73 - Penalties under section 592, Tariff Act of 1930, as amended.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Penalties under section 592, Tariff Act of 1930, as amended. 162.73 Section 162.73 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special...
Utilization of a radiology-centric search engine.
Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan
2010-04-01
Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.
An Exploratory Survey of Student Perspectives Regarding Search Engines
ERIC Educational Resources Information Center
Alshare, Khaled; Miller, Don; Wenger, James
2005-01-01
This study explored college students' perceptions regarding their use of search engines. The main objective was to determine how frequently students used various search engines, whether advanced search features were used, and how many search engines were used. Various factors that might influence student responses were examined. Results showed…
The Use of Web Search Engines in Information Science Research.
ERIC Educational Resources Information Center
Bar-Ilan, Judit
2004-01-01
Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…
Using Internet Search Engines to Obtain Medical Information: A Comparative Study
Wang, Liupu; Wang, Juexin; Wang, Michael; Li, Yong; Liang, Yanchun
2012-01-01
Background The Internet has become one of the most important means to obtain health and medical information. It is often the first step in checking for basic information about a disease and its treatment. The search results are often useful to general users. Various search engines such as Google, Yahoo!, Bing, and Ask.com can play an important role in obtaining medical information for both medical professionals and lay people. However, the usability and effectiveness of various search engines for medical information have not been comprehensively compared and evaluated. Objective To compare major Internet search engines in their usability of obtaining medical and health information. Methods We applied usability testing as a software engineering technique and a standard industry practice to compare the four major search engines (Google, Yahoo!, Bing, and Ask.com) in obtaining health and medical information. For this purpose, we searched the keyword breast cancer in Google, Yahoo!, Bing, and Ask.com and saved the results of the top 200 links from each search engine. We combined nonredundant links from the four search engines and gave them to volunteer users in an alphabetical order. The volunteer users evaluated the websites and scored each website from 0 to 10 (lowest to highest) based on the usefulness of the content relevant to breast cancer. A medical expert identified six well-known websites related to breast cancer in advance as standards. We also used five keywords associated with breast cancer defined in the latest release of Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) and analyzed their occurrence in the websites. Results Each search engine provided rich information related to breast cancer in the search results. All six standard websites were among the top 30 in search results of all four search engines. Google had the best search validity (in terms of whether a website could be opened), followed by Bing, Ask.com, and Yahoo!. The search results highly overlapped between the search engines, and the overlap between any two search engines was about half or more. On the other hand, each search engine emphasized various types of content differently. In terms of user satisfaction analysis, volunteer users scored Bing the highest for its usefulness, followed by Yahoo!, Google, and Ask.com. Conclusions Google, Yahoo!, Bing, and Ask.com are by and large effective search engines for helping lay users get health and medical information. Nevertheless, the current ranking methods have some pitfalls and there is room for improvement to help users get more accurate and useful information. We suggest that search engine users explore multiple search engines to search different types of health information and medical knowledge for their own needs and get a professional consultation if necessary. PMID:22672889
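The overlap analysis reported above can be reproduced in a few lines: compare the sets of top-N result links pairwise. The data below are invented placeholders for the saved top-200 link lists.

    from itertools import combinations

    # Invented stand-ins for the saved top-200 result URLs per engine.
    results = {
        "Google": {"a.org", "b.org", "c.org", "d.org"},
        "Yahoo!": {"a.org", "b.org", "e.org", "f.org"},
        "Bing":   {"a.org", "c.org", "e.org", "g.org"},
    }

    for e1, e2 in combinations(results, 2):
        shared = results[e1] & results[e2]
        frac = len(shared) / min(len(results[e1]), len(results[e2]))
        print("%s vs %s: %d shared (%.0f%% overlap)"
              % (e1, e2, len(shared), 100 * frac))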
Issues and solutions for storage, retrieval, and searching of MPEG-7 documents
NASA Astrophysics Data System (ADS)
Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.
2000-10-01
The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and a Description Definition Language. Descriptors and Description Schemes describe features, structure and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving and searching MPEG-7 documents need to be solved. This paper reports on initial findings on issues and solutions of storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge the MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
NASA Indexing Benchmarks: Evaluating Text Search Engines
NASA Technical Reports Server (NTRS)
Esler, Sandra L.; Nelson, Michael L.
1997-01-01
The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
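In the spirit of the benchmarking toolkit described above, the sketch below times an indexing pass and a batch of queries against a toy in-memory inverted index. The corpus and the index are stand-ins; the study's toolkit drove real, separately installed search engines.

    import time
    from collections import defaultdict

    docs = {i: "document %d about search engines and indexing" % i
            for i in range(10000)}

    t0 = time.perf_counter()
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    index_time = time.perf_counter() - t0

    queries = ["search", "indexing", "engines"] * 100
    t0 = time.perf_counter()
    for q in queries:
        _ = index.get(q, set())
    query_time = (time.perf_counter() - t0) / len(queries)

    print("indexed %d docs in %.3fs; mean query %.1f microseconds"
          % (len(docs), index_time, query_time * 1e6))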
Customer Satisfaction with Air Force Civil Engineering Support
1988-09-01
Regulation 85-1 states, "No other base organization directly affects the living environment of every person on base as does the BCE (Base Civil Engineering...accounting, engineering, and legal firms; personal services such as housekeeping, barbering, and recreational services; and most of the nonprofit areas of...a single package via Lear jet to keep a promise to a customer" (Lele, 1987: 45). Frito Lay maintains a 10,000 person sales force in what is a
ERIC Educational Resources Information Center
Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.
2008-01-01
Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…
Multi-Function Gas Fired Heat Pump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Heiba, Ahmad; Vineyard, Edward Allan
2015-11-01
The aim of this project was to design a residential fuel-fired heat pump and further improve efficiency in collaboration with an industry partner, Southwest Gas, the developer of the Nextaire commercial rooftop fuel-fired heat pump. Work started in late 2010. After an extensive search for suitable engines, one manufactured by Marathon was selected. Several prototypes were designed and built over the following four years. Design changes were focused on lowering the cost of components and the cost of manufacturing. The design evolved to a final one that yielded the lowest cost. The final design also incorporates noise and vibration reduction measures that were verified to be effective through a customer survey. ETL certification is currently (as of November 2015) underway. Southwest Gas is currently in talks with GTI to reach an agreement through which GTI will assess the commercial viability and potential of the heat pump. Southwest Gas is searching for investors to manufacture the heat pump and introduce it to the market.
Teen smoking cessation help via the Internet: a survey of search engines.
Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I
2003-07-01
The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate on-line smoking cessation help for teens is constrained by search engine choice and the amount of time teens spend looking through potential sites. Resource listings should be updated regularly. Smoking cessation Web sites need to be picked up on multiple search engine searches. Further evaluation of smoking cessation Web sites need to be conducted to identify the most effective help for teens.
[Development of domain specific search engines].
Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T
2000-01-01
As cyberspace explodes at a pace nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines. A lot of commercial search engines have already been put on the market. However, these search engines return results so cumbersome that domain experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response for chemical hazards. These engines have been on our Web site for testing.
A unified architecture for biomedical search engines based on semantic web technologies.
Jalali, Vahid; Matash Borujerdi, Mohammad Reza
2011-04-01
There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines and achieving interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for different approaches used by this search engine.
Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.
2010-01-01
Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy and tend to be too large for manual inspection; therefore probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in results sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search engine independent scoring framework based on FDR which allows peptide identifications from different search engines to be combined, called the FDRScore. We observe that peptide identifications made by three search engines are infrequently false positives, and identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification, called the combined FDRScore. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
ISO 9000 and/or Systems Engineering Capability Maturity Model?
NASA Technical Reports Server (NTRS)
Gholston, Sampson E.
2002-01-01
For businesses and organizations to remain competitive today, they must have processes and systems in place that will allow them first to identify customer needs and then to develop products/processes that will meet or exceed the customers' needs and expectations. Customer needs, once identified, are normally stated as requirements. Designers can then develop products/processes that will meet these requirements. Several functions, such as quality management and systems engineering management, are used to assist product development teams in the development process. Both functions exist in all organizations and both have a similar objective, which is to ensure that developed processes will meet customer requirements. Are efforts in these organizations being duplicated? Are both functions needed by organizations? What are the similarities and differences between the functions listed above? ISO 9000 is an international standard for goods and services. It sets broad requirements for the assurance of quality and for management's involvement. It requires organizations to document their processes and to follow these documented processes. ISO 9000 gives customers assurance that suppliers have control of the process for product development. Systems engineering can broadly be defined as a discipline that seeks to ensure that all requirements for a system are satisfied throughout the life of the system by preserving their interrelationship. The key activities of systems engineering include requirements analysis, functional analysis/allocation, design synthesis and verification, and system analysis and control. The systems engineering process, when followed properly, will lead to higher-quality products, lower-cost products, and shorter development cycles. The Systems Engineering Capability Maturity Model (SE-CMM) will allow companies to measure their systems engineering capability and continuously improve those capabilities. ISO 9000 and SE-CMM seem to have a similar objective, which is to document the organization's processes and certify to potential customers the capability of a supplier to control the processes that determine the quality of the product or services being produced. The remaining sections of this report examine the differences and similarities between ISO 9000 and SE-CMM and make recommendations for implementation.
ERIC Educational Resources Information Center
El Guemmat, Kamal; Ouahabi, Sara
2018-01-01
The objective of this article is to analyze the searching and indexing techniques of educational search engines' implementation while treating future challenges. Educational search engines could greatly help in the effectiveness of e-learning if used correctly. However, these engines have several gaps which influence the performance of e-learning…
Drexel at TREC 2014 Federated Web Search Track
2014-11-01
of its input RS results. Federated Web Search is the task of searching multiple search engines simultaneously and combining their...or distributed properly[5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most...result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are
NASA Technical Reports Server (NTRS)
Russell, Yvonne; Falsetti, Christine M.
1991-01-01
Customer requirements are presented through three viewgraphs. One graph presents the range of services, which include requirements management, network engineering, operations, and applications support. Another viewgraph presents the project planning process. The third viewgraph presents the programs and/or projects actively supported including life sciences, earth science and applications, solar system exploration, shuttle flight engineering, microgravity science, space physics, and astrophysics.
Challenges in engineering large customized bone constructs.
Forrestal, David P; Klein, Travis J; Woodruff, Maria A
2017-06-01
The ability to treat large tissue defects with customized, patient-specific scaffolds is one of the most exciting applications in the tissue engineering field. While an increasing number of modestly sized tissue engineering solutions are making the transition to clinical use, successfully scaling up to large scaffolds with customized geometry is proving to be a considerable challenge. Managing often conflicting requirements of cell placement, structural integrity, and a hydrodynamic environment supportive of cell culture throughout the entire thickness of the scaffold has driven the continued development of many techniques used in the production, culturing, and characterization of these scaffolds. This review explores a range of technologies and methods relevant to the design and manufacture of large, anatomically accurate tissue-engineered scaffolds with a focus on the interaction of manufactured scaffolds with the dynamic tissue culture fluid environment. Biotechnol. Bioeng. 2017;114:1129-1139. © 2016 Wiley Periodicals, Inc.
A Search Relevance Algorithm for Weather Effects Products
2006-12-29
accessed) are often search engines [4] [5]. This suggests that people are navigating the internet by searching and not through the traditional...geographic location. Unlike traditional search engines a Federated Search Engine does not scour all the data available and return matches. Instead...gold standard in search engines. However, its ranking system is based, largely, on a measure of interconnectedness. A page that is referenced more
19 CFR 162.73a - Penalties under section 593A, Tariff Act of 1930, as amended.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Penalties under section 593A, Tariff Act of 1930, as amended. 162.73a Section 162.73a Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Special...
Current Searching Methodology and Retrieval Issues: An Assessment
2008-03-01
searching that are used by search engines are discussed. They are: full text searching, i.e., the searching of unstructured data, and metadata searching...also found among search engines; however, it is the popularity of full text searching that has changed the road map to information access. The...other hand, information seekers' willingness, or lack of, to learn the multiple search engines' capabilities may diminish their search results
NASA Technical Reports Server (NTRS)
Olson, E. M.
1986-01-01
Presently, there are many difficulties associated with implementing application specific custom or semi-custom (standard cell based) integrated circuits (ICs) into JPL flight projects. One of the primary difficulties is developing prototype semi-custom integrated circuits for use and evaluation in engineering prototype flight hardware. The prototype semi-custom ICs must be extremely cost-effective and yet still representative of flight qualifiable versions of the design. A second difficulty is encountered in the transport of the design from engineering prototype quality to flight quality. Normally, flight quality integrated circuits have stringent quality standards, must be radiation resistant and should consume minimal power. It is often not necessary or cost effective, however, to impose such stringent quality standards on engineering models developed for systems analysis in controlled lab environments. This article presents work originally initiated for ground based applications that also addresses these two problems. Furthermore, this article suggests a method that has been shown successful in prototyping flight quality semi-custom ICs through the Metal Oxide Semiconductor Implementation Service (MOSIS) program run by the University of Southern California's Information Sciences Institute. The method has been used successfully to design and fabricate through the MOSIS three different semi-custom prototype CMOS p-well chips. The three designs make use of the work presented and were designed consistent with design techniques and structures that are flight qualifiable, allowing one hour transfer of the design from engineering model status to flight qualifiable foundry-ready status through methods outlined in this article.
Systems engineering for very large systems
NASA Technical Reports Server (NTRS)
Lewkowicz, Paul E.
1993-01-01
Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.
Foraging patterns in online searches.
Wang, Xiangwen; Pleimling, Michel
2017-03-01
Nowadays online searches are undeniably the most common form of information gathering, as witnessed by billions of clicks generated each day on search engines. In this work we describe online searches as foraging processes that take place on the semi-infinite line. Using a variety of quantities like probability distributions and complementary cumulative distribution functions of step length and waiting time as well as mean square displacements and entropies, we analyze three different click-through logs that contain the detailed information of millions of queries submitted to search engines. Notable differences between the logs reveal an increased efficiency of the search engines. In the language of foraging, the newer logs indicate that online searches overwhelmingly yield local searches (i.e., on one page of links provided by the search engines), whereas for the older logs the foraging processes are a combination of local searches and relocation phases that are power-law distributed. Our investigation of click logs of search engines therefore highlights the presence of intermittent search processes (where phases of local exploration are separated by power-law distributed relocation jumps) in online searches. It follows that good search engines enable users to find the information they are looking for through a local exploration of a single page of search results, whereas with poor search engines users are often forced to do a broader exploration of different pages.
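The step-length analysis described above reduces, in its simplest form, to computing an empirical complementary cumulative distribution function (CCDF) and checking for a straight line on log-log axes. The sketch below does this on synthetic Pareto-distributed jumps standing in for a real click log; the tail exponent is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(1)
    # Inverse-transform sampling of a Pareto tail: P(X > x) = x**(-1.5).
    steps = (1 - rng.random(100_000)) ** (-1 / 1.5)

    values = np.sort(steps)
    ccdf = 1.0 - np.arange(1, values.size + 1) / values.size
    for v, p in zip(values[::20000], ccdf[::20000]):
        print("P(step > %10.2f) = %.4f" % (v, p))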
ERIC Educational Resources Information Center
Choi, Gi Woong; Pursel, Barton K.; Stubbs, Chris
2017-01-01
Interest towards implementing educational gaming into courses within higher education continues to increase, but it requires extensive amounts of resources to create individual games for each course. This paper is a description of a university's effort to create a custom educational game engine to streamline the game development process within the…
The ATPG Attack for Reverse Engineering of Combinational Hybrid Custom-Programmable Circuits
2017-03-23
...programmable circuits. While the functionality of programmable cells is known only to trusted parties, effective techniques for activation and propagation...of the cells are introduced. The ATPG attack carefully studies the dependency of programmable cells to develop their (partial) truth tables. Results
NASA Astrophysics Data System (ADS)
Medini, Khaled
2018-01-01
The increase of individualised customer demands and tough competition in the manufacturing sector gave rise to more customer-centric operations management, such as product and service (mass) customisation. Mass customisation (MC), which inherits the 'economy of scale' from mass production (MP), aims to meet specific customer demands with near-MP efficiency. Such an overarching concept has multiple impacts on operations management and requires highly qualified, multi-skilled engineers who are well prepared for managing MC. Therefore, the concept should be properly addressed by engineering education curricula, which need to keep up with emerging business trends. This paper introduces a novel course about MC and variety in operations management that draws on several Experiential Learning (EL) practices, consistent with the principle of active learning. The paper aims to analyse to what extent EL can improve the efficiency of the teaching methods and the retention rate in the context of operations management. The proposed course is given to engineering students, whose perceptions are collected using semi-structured questionnaires and analysed quantitatively and qualitatively. The paper highlights the relevance (i) of teaching MC, and (ii) of active learning in engineering education, through the specific application in the domain of MC.
Cyberdrugs: a cross-sectional study of online pharmacies characteristics.
Orizio, Grazia; Schulz, Peter; Domenighini, Serena; Caimi, Luigi; Rosati, Cristina; Rubinelli, Sara; Gelatti, Umberto
2009-08-01
As e-commerce and online pharmacies (OPs) arose, the potential impact of the Internet on the world of health shifted from merely the spread of information to a real opportunity to acquire health services directly. The aim of the study was to investigate the offer of prescription drugs by OPs, analysing their characteristics using the content analysis method. The research, performed using the Google search engine, led to an analysis of 118 online pharmacies. Only 51 (43.2%) of them stated their precise location. Ninety-six (81.4%) online pharmacies did not require a medical prescription from the customer's physician. Online pharmacies raise complex issues in terms of the patient-doctor relationship, consumer empowerment, drug quality, regulation and public health implications.
The PO.DAAC Portal and its use of the Drupal Framework
NASA Astrophysics Data System (ADS)
Alarcon, C.; Huang, T.; Bingham, A.; Cosic, S.
2011-12-01
The Physical Oceanography Distributed Active Archive Center portal (http://podaac.jpl.nasa.gov) is the primary interface for discovering and accessing oceanographic datasets collected from the vantage point of space. In addition, it provides information about NASA's satellite missions and operational activities at the data center. Recently the portal underwent a major redesign and deployment utilizing the Drupal framework. The Drupal framework was chosen as the platform for the portal due to its flexibility, open source community, and modular infrastructure. The portal features efficient content addition and management, mailing lists, forums, role based access control, and a faceted dataset browse capability. The dataset browsing was built as a custom Drupal module and integrates with a SOLR search engine.
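For concreteness, a faceted dataset query against a SOLR backend looks roughly like the sketch below, using SOLR's standard select handler. The host, core name, and facet fields are hypothetical, not PO.DAAC's actual deployment or schema.

    import requests

    params = {
        "q": "sea surface temperature",
        "facet": "true",
        "facet.field": ["processing_level", "sensor"],  # hypothetical facet fields
        "rows": 10,
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/datasets/select", params=params)
    body = resp.json()
    print(body["response"]["numFound"], "datasets found")
    print(body["facet_counts"]["facet_fields"])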
Sato, Kuniya; Ooba, Masahiro; Takagi, Tomohiko; Furukawa, Zengo; Komiya, Seiichi; Yaegashi, Rihito
2013-12-01
In agile software development, requirements are gathered through direct discussion between customers and the development staff, and the customers evaluate the appropriateness of each requirement. If the customers divide a complicated requirement into individual requirements, the engineer in charge of software development can understand it easily; this is called division of requirements. However, customers do not know how much, or how, to divide the requirements. This paper proposes a method to divide a complicated requirement into individual requirements. It also describes the development of a requirement specification editor that can capture the individual requirements, so that the engineer in charge of software development can understand them easily.
Locality in Search Engine Queries and Its Implications for Caching
2001-05-01
in the question of whether caching might be effective for search engines as well. They study two real search engine traces by examining query...locality and its implications for caching. The two search engines studied are Vivisimo and Excite. Their trace analysis results show that queries have
1994-05-01
LOGISTICS MANAGEMENT INSTITUTE. An Approach for Meeting Customer Standards Under Executive Order 12862. Summary: Executive Order 12862, Setting...search Centers all operate and manage wind tunnels for both NASA and industry customers. Nonetheless, a separate wind-tunnel process should be...could include the manager of the process, selected members of the manager's staff, a key customer, and a survey expert. The manager and staff would
Variability of patient spine education by Internet search engine.
Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S
2014-03-01
Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine and search term, and changes daily. There are no tools for critical evaluation of spinal surgery websites. The aims were to highlight the variability between common search engines for the same search terms, to detect bias by the prevalence of specific kinds of websites for certain spinal disorders, and to demonstrate a simple scoring system of spinal disorder websites for patient use, to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational websites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital-run, industry-sponsored, legal, and peer-reviewed web pages, were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information which can serve as a useful tool for patient education. High-quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is clear variability in the way search engines present information to the patient. Published by Elsevier B.V.
A rank-based Prediction Algorithm of Learning User's Intention
NASA Astrophysics Data System (ADS)
Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing
Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines: first, users must predetermine the type of information they want and then switch to the appropriate search engine interface. Second, most search engines support multiple kinds of search functions, each with its own separate interface, so users who need different types of information must switch between different interfaces. In practice, most queries correspond to several types of information results; for example, the query "Palace" matches websites introducing the National Palace Museum, blogs, Wikipedia entries, pictures, and videos. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources, the query words, the search results, and the search history logs, to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective. It meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the search experience.
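A minimal picture of merging heterogeneous result types into one list is a weighted re-ranking, as sketched below. The type weights and result data are invented for the example; the paper's algorithm learns its ranking from queries, results, and search history rather than using fixed weights.

    # Invented type priors; the real algorithm would learn these signals.
    type_weight = {"encyclopedia": 0.9, "official": 0.8, "blog": 0.5, "video": 0.6}

    results = [
        ("National Palace Museum (official site)", "official", 0.95),
        ("Palace - Wikipedia", "encyclopedia", 0.90),
        ("My trip to the Palace (blog)", "blog", 0.80),
        ("Palace tour video", "video", 0.70),
    ]

    ranked = sorted(results,
                    key=lambda r: type_weight[r[1]] * r[2],  # engine score x type prior
                    reverse=True)
    for title, rtype, score in ranked:
        print("%.2f  [%s] %s" % (type_weight[rtype] * score, rtype, title))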
NASA Astrophysics Data System (ADS)
Hepp, Martin
E-commerce on the basis of current Web technology has created fierce competition with a strong focus on price. Despite a huge variety of offerings and diversity in the individual preferences of consumers, current Web search fosters a very early reduction of the search space to just a few commodity makes and models. As soon as this reduction has taken place, search is reduced to flat price comparison. This is unfortunate for manufacturers and vendors, because their individual value proposition for a particular customer may get lost in the course of communication over the Web, and it is unfortunate for customers, because they may not get the most utility for their money given their preference functions. A key limitation is that consumers cannot search using a consolidated view of all alternative offers across the Web. In this talk, I will (1) analyze the technical effects of products and services search on the Web that cause this mismatch between supply and demand, (2) evaluate how the GoodRelations vocabulary and the current Web of Data movement can improve the situation, (3) give a brief hands-on demonstration, and (4) sketch business models for the various market participants.
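For concreteness, here is a minimal sketch of what publishing an offer with the GoodRelations vocabulary can look like, built with the Python rdflib library; the shop namespace, product, and price are invented for illustration.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    GR = Namespace("http://purl.org/goodrelations/v1#")
    EX = Namespace("http://example.com/shop#")  # hypothetical shop namespace

    g = Graph()
    g.bind("gr", GR)
    offer, price = EX.offer1, EX.price1
    g.add((offer, RDF.type, GR.Offering))
    g.add((offer, GR.name, Literal("City bike, 3-speed")))
    g.add((offer, GR.hasPriceSpecification, price))
    g.add((price, RDF.type, GR.UnitPriceSpecification))
    g.add((price, GR.hasCurrency, Literal("EUR")))
    g.add((price, GR.hasCurrencyValue, Literal("298.00", datatype=XSD.float)))
    print(g.serialize(format="turtle"))

With offers exposed in machine-readable form like this, a crawler can build the consolidated cross-vendor view of alternatives that the talk argues plain text search cannot.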
Engineering the on-axis intensity of Bessel beam by a feedback tuning loop
NASA Astrophysics Data System (ADS)
Li, Runze; Yu, Xianghua; Yang, Yanlong; Peng, Tong; Yao, Baoli; Zhang, Chunmin; Ye, Tong
2018-02-01
The Bessel beam belongs to a typical class of non-diffractive optical fields that are characterized by their invariant focal profiles along the propagation direction. However, ideal Bessel beams only rigorously exist in theory; Bessel beams generated in the lab are quasi-Bessel beams with finite focal extensions and varying intensity profiles along the propagation axis. The ability to engineer the on-axis intensity profile to the desired shape is essential for many applications. Here we demonstrate an iterative optimization-based approach to engineering the on-axis intensity of Bessel beams. A genetic algorithm is used to demonstrate this approach. Starting with a traditional axicon phase mask, the design process feeds the computed on-axis beam profile into a feedback tuning loop of an iterative optimization process, which searches for an optimal radial phase distribution that can generate a generalized Bessel beam with the desired on-axis intensity profile. The experimental implementation involves a fine-tuning process that adjusts the originally targeted profile so that the optimization process can optimize the phase mask to yield an improved on-axis profile. Our proposed method has been demonstrated by engineering several zeroth-order Bessel beams with customized on-axis profiles. High accuracy and high energy throughput merit its use in many applications.
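A toy version of this feedback loop is sketched below, assuming scalar Fresnel diffraction, a short polynomial radial phase, and a simple elitist genetic algorithm; the wavelength, aperture, and GA parameters are illustrative, and this is not the authors' implementation.

    import numpy as np

    wl = 532e-9; k = 2 * np.pi / wl
    R = 2e-3                                    # assumed aperture radius
    r = np.linspace(1e-6, R, 800); dr = r[1] - r[0]
    z = np.linspace(0.05, 0.4, 60)              # on-axis range of interest
    target = np.ones_like(z)                    # desired flat on-axis profile

    def on_axis(coeffs):
        # radial phase as a short polynomial in r/R
        phi = sum(c * (r / R) ** (j + 1) for j, c in enumerate(coeffs))
        # Fresnel on-axis field: integral of exp(i(phi + k r^2 / 2z)) r dr
        kernel = np.exp(1j * (phi[None, :] + k * r[None, :] ** 2 / (2 * z[:, None])))
        field = (kernel * r[None, :]).sum(axis=1) * dr
        inten = np.abs(field / z) ** 2
        return inten / inten.max()

    def fitness(c):
        return -np.mean((on_axis(c) - target) ** 2)

    rng = np.random.default_rng(1)
    pop = rng.normal(0.0, 500.0, size=(30, 4))  # 30 candidates, 4 phase coefficients
    for _ in range(40):
        scores = np.array([fitness(c) for c in pop])
        elite = pop[np.argsort(scores)[-8:]]    # keep the best candidates
        children = elite[rng.integers(0, 8, 22)] + rng.normal(0, 50, (22, 4))
        pop = np.vstack([elite, children])      # elitism plus Gaussian mutation
    best = max(pop, key=fitness)
    print("best phase coefficients:", best)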
Evaluation of proteomic search engines for the analysis of histone modifications.
Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C; Garcia, Benjamin A
2014-10-03
Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched at a time (unmodified, individually modified, or multimodified), each search result is filtered at a false discovery rate below 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method to eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects of the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118.
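Schematically, the combination step can be as simple as the sketch below: filter each engine's identifications at 1% FDR, then keep peptide-spectrum matches supported by at least two engines (one reasonable agreement rule; the data structures are placeholders, not a real search engine API).

    from collections import Counter

    def filter_fdr(psms, threshold=0.01):
        """psms: list of (spectrum_id, peptide, q_value) from one search."""
        return {(s, p) for s, p, q in psms if q < threshold}

    def combine(per_engine_results, min_engines=2):
        """per_engine_results: {engine: filtered set of (spectrum, peptide)}."""
        votes = Counter(hit for hits in per_engine_results.values() for hit in hits)
        return {hit for hit, n in votes.items() if n >= min_engines}

    searches = {
        "pFind":  filter_fdr([("s1", "KacQTAR", 0.002), ("s2", "KSTGGKAPR", 0.05)]),
        "Mascot": filter_fdr([("s1", "KacQTAR", 0.008), ("s3", "TKQTAR", 0.001)]),
    }
    print(combine(searches))  # -> {('s1', 'KacQTAR')}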
ERIC Educational Resources Information Center
Nistor, Nicolae; Dehne, Anina; Drews, Frank Thomas
2010-01-01
In search of methods that improve the efficiency of teaching and training in organizations, several authors point out that mass customization (MC) is a principle that covers individual needs of knowledge and skills and, at the same time, limits the development costs of customized training to those of mass training. MC is proven and established in…
Customer satisfaction with patient care: "Where's the Beef?".
Vukmir, Rade B
2006-01-01
This article presents an analysis of the literature examining objective information concerning customer service as it applies to current medical practice, synthesized to generate a cogent approach to correlating customer service with quality. Articles were obtained by an English-language search of MEDLINE from January 1976 to July 2005. This computerized search was supplemented with literature from the author's personal collection of peer-reviewed articles on customer service in a medical setting. This information was presented in a qualitative fashion. There is a significant lack of objective data correlating customer service objectives, patient satisfaction, and quality of care. Patients present predominantly for the convenience of emergency department care. Specifics of satisfaction are directed to the timing and amount of "caring." Demographic correlates, including symptom presentation, practice style, location, and physician issues, directly impact satisfaction. It is most helpful to develop a productive plan for the "difficult patient," emphasizing communication and empathy. The current emergency medicine customer service dilemmas are a complex interaction of both patient and physician factors specifically targeting both efficiency and patient satisfaction. Awareness of these issues can help to maximize efficiency, minimize subsequent medicolegal risk, and improve patient care.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Procedure. 191.142 Section 191.142 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.142 Procedure...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 2 2011-04-01 2011-04-01 false Procedure. 191.142 Section 191.142 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.142 Procedure...
DockoMatic 2.0: high throughput inverse virtual screening and homology modeling.
Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T; McDougal, Owen M; Andersen, Timothy L
2013-08-26
DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly graphical user interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to (1) conduct high throughput inverse virtual screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the basic local alignment search tool (BLAST) and MODELER programs and guides the user through the necessary steps to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third party programs. DockoMatic is a free comprehensive molecular docking software program for all levels of scientists in both research and education.
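Under the hood, an inverse virtual screening run of this kind reduces to docking one ligand against many receptors; the sketch below drives the AutoDock Vina command line from Python, with placeholder file names and configuration (it is not DockoMatic's own job manager).

    import subprocess
    from pathlib import Path

    ligand = "ligand.pdbqt"
    receptors = sorted(Path("receptors").glob("*.pdbqt"))

    for rec in receptors:
        out = Path("results") / f"{rec.stem}_out.pdbqt"
        subprocess.run(
            ["vina", "--receptor", str(rec), "--ligand", ligand,
             "--config", "grid.conf",   # search box center/size per receptor
             "--out", str(out)],
            check=True,
        )
        # Vina writes predicted binding affinities into the output file;
        # a summary step would parse and rank them across receptors.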
Modelling and Simulation of Search Engine
NASA Astrophysics Data System (ADS)
Nasution, Mahyuddin K. M.
2017-01-01
The best tool currently available for accessing information is the search engine. Meanwhile, the information space has its own behaviour; treating it systematically with mathematics makes it easier to identify the characteristics associated with it. This paper reveals some characteristics of search engines based on a model of document collections and then estimates their impact on the feasibility of information. We derive characteristics of search engines from lemmas and theorems about singletons and doubletons, and then compute statistics that simulate the use of the search engines, in this case Google and Yahoo. The two search engines behave differently, although in theory both rest on the concept of a document collection.
New generation of the multimedia search engines
NASA Astrophysics Data System (ADS)
Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro
2016-09-01
Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's continuing growth brings more diversity with each passing day. Text-based searches are becoming limited, as most of the information on the Internet consists of different types of content known as multimedia content (images, audio files, video files). What needs to be improved in current search engines is the content and precision of the search, as well as an accurate display of the results the user expects. Any search can be made more precise by using more text parameters, but that does not improve the content or speed of the search itself. One solution is to improve search through the characterization of the content of multimedia files. In this article, an analysis of new generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life. This reflects the necessity of having multimedia search engines, as well as knowing the real tasks they must fulfil. This analysis shows that there are not many search engines that can perform content-based searches. The research area of new generation multimedia search engines is a multidisciplinary field in constant growth, generating tools that satisfy the different needs of new generation systems.
The Effectiveness of Web Search Engines to Index New Sites from Different Countries
ERIC Educational Resources Information Center
Pirkola, Ari
2009-01-01
Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…
Taming the Information Jungle with WWW Search Engines.
ERIC Educational Resources Information Center
Repman, Judi; And Others
1997-01-01
Because searching the Web with different engines often produces different results, the best strategy is to learn how each engine works. Discusses comparing search engines; qualities to consider (ease of use, relevance of hits, and speed); and six of the most popular search tools (Yahoo, Magellan, InfoSeek, Alta Vista, Lycos, and Excite). Lists…
MIRASS: medical informatics research activity support system using information mashup network.
Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu
2014-04-01
The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few ideas have been transformed into the education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research: the quality of papers obtained from scientific search engines (particularly Web of Science and Science Direct), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. The pipe-based search engine was found to be more efficient than the other engines.
Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)
ERIC Educational Resources Information Center
Bendig, Regina B.
2009-01-01
The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engine), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
Wu, G; Li, J
1999-01-01
Identifying and accessing reliable, relevant consumer health information rapidly on the Internet may challenge the health sciences librarian and layperson alike. In this study, seven search engines are compared using representative consumer health topics for their content relevancy, system features, and attributes. The paper discusses evaluation criteria; systematically compares relevant results; analyzes performance in terms of the strengths and weaknesses of the search engines; and illustrates effective search engine selection, search formulation, and strategies. PMID:10550031
Islamic Extremists Love the Internet
2009-04-03
down on the West. Terrorists' Use of Search Engines In order to find a particular blog, extremists use search engines such as Bloglines...BlogScope, and Technorati to search blog contents. Technorati, which is among the most popular blog search engines, provides current information on...of mid-January 2009 is tracking over 31.78 million blogs with 579.86 million posts. Other ways the terrorists use Web search engines are to
Combinatorial Fusion Analysis for Meta Search Information Retrieval
NASA Astrophysics Data System (ADS)
Hsu, D. Frank; Taksa, Isak
Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
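The flavor of such rank/score-based fusion can be sketched in a few lines: normalize each engine's scores, derive ranks, and combine both signals. This is a simplified stand-in for the combinatorial fusion analysis described above, with invented documents and scores.

    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (s - lo) / (hi - lo or 1) for d, s in scores.items()}

    def fuse(engine_scores):
        """engine_scores: list of {doc: raw_score} dicts, one per engine."""
        fused = {}
        for scores in engine_scores:
            norm = normalize(scores)
            ranks = {d: i for i, d in enumerate(
                sorted(norm, key=norm.get, reverse=True))}
            for d in norm:
                s, r = fused.get(d, (0.0, 0.0))
                fused[d] = (s + norm[d], r + 1.0 / (ranks[d] + 1))
        # sort by combined score; the reciprocal-rank component breaks ties
        return sorted(fused, key=lambda d: fused[d], reverse=True)

    a = {"d1": 9.0, "d2": 7.5, "d3": 1.0}
    b = {"d2": 0.9, "d3": 0.8, "d1": 0.1}
    print(fuse([a, b]))

Consistent with the chapter's observation, fusion of this kind tends to help most when the individual systems are both strong and diverse; fusing two near-identical rankings changes little.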
ROBOTICS IN HAZARDOUS ENVIRONMENTS - REAL DEPLOYMENTS BY THE SAVANNAH RIVER NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.; Tibrea, S.; Nance, T.
The Research & Development Engineering (R&DE) section in the Savannah River National Laboratory (SRNL) engineers, integrates, tests, and supports deployment of custom robotics, systems, and tools for use in radioactive, hazardous, or inaccessible environments. Mechanical and electrical engineers, computer control professionals, specialists, machinists, welders, electricians, and mechanics adapt and integrate commercially available technology with in-house designs, to meet the needs of Savannah River Site (SRS), Department of Energy (DOE), and other governmental agency customers. This paper discusses five R&DE robotic and remote system projects.
Guide to Regulated Facilities in ECHO | ECHO | US EPA
There are multiple ways ECHO can be used to search compliance data. By default, ECHO searches focus on larger, more regulated facilities. Each search page allows users to search a more comprehensive group of facilities by electing to search for minor or smaller facilities. Information is presented that explains the types and approximate numbers of facilities that are included in searches when the default and custom options are used.
Search Engines on the World Wide Web.
ERIC Educational Resources Information Center
Walster, Dian
1997-01-01
Discusses search engines and provides methods for determining what resources are searched, the quality of the information, and the algorithms used that will improve the use of search engines on the World Wide Web, online public access catalogs, and electronic encyclopedias. Lists strategies for conducting searches and for learning about the latest…
The Mercury System: Embedding Computation into Disk Drives
2004-08-20
enabling technologies to build extremely fast data search engines. We do this by moving the search closer to the data, and performing it in hardware...engine searches in parallel across a disk or disk surface. 2. System Parallelism: Searching is off-loaded to search engines and the main processor can
19 CFR 191.144 - Refund of duties.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Refund of duties. 191.144 Section 191.144 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.144 Refund of...
19 CFR 191.144 - Refund of duties.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 2 2011-04-01 2011-04-01 false Refund of duties. 191.144 Section 191.144 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.144 Refund of...
[Biomedical information on the internet using search engines. A one-year trial].
Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina
2004-01-01
The internet is a communication medium and content distributor that provides information in the general sense, but it can also be of great utility for the search and retrieval of biomedical information. Search engines are a key means of rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by keywords (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations returned by each search. We evaluated the formal quality of websites and their domain extensions. Moreover, we compared the output of each search at the start of this study and after one year, taking as a criterion of reliability the number of websites cited again. We found some interesting results, which are reported throughout the text. Our findings point out the extreme dynamicity of information on the Web, and for this reason we advise great caution when using search and meta-search engines to find and retrieve reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step in a search, to define it better and to find institutional websites. This paper supports a more conscious approach to the universe of biomedical information on the internet.
Getting to the top of Google: search engine optimization.
Maley, Catherine; Baum, Neil
2010-01-01
Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is not done by luck or knowing someone working for the search engines but by understanding the process of how search engines select Web sites for placement on top or on the first page. This article will review the process and provide methods and techniques to use to have your site rated at the top or very near the top.
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
Steady-State Cycle Deck Launcher Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
VanDrei, Donald E.
1997-01-01
One of the objectives of NASA's High Performance Computing and Communications Program's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to reduce the time and cost of generating aerothermal numerical representations of engines, called customer decks. These customer decks, which are delivered to airframe companies by various U.S. engine companies, numerically characterize an engine's performance as defined by the particular U.S. airframe manufacturer. Until recently, all numerical models were provided with a Fortran-compatible interface in compliance with the Society of Automotive Engineers (SAE) document AS681F, and data communication was performed via a standard, labeled common structure in compliance with AS681F. Recently, the SAE committee began to develop a new standard: AS681G. AS681G addresses multiple language requirements for customer decks along with alternative data communication techniques. Along with the SAE committee, the NPSS Steady-State Cycle Deck project team developed a standard Application Program Interface (API) supported by a graphical user interface. This work will result in Aerospace Recommended Practice 4868 (ARP4868). The Steady-State Cycle Deck work was validated against the Energy Efficient Engine customer deck, which is publicly available. The Energy Efficient Engine wrapper was used not only to validate ARP4868 but also to demonstrate how to wrap an existing customer deck. The graphical user interface for the Steady-State Cycle Deck facilitates the use of the new standard and makes it easier to design and analyze a customer deck. This software was developed following I. Jacobson's Object-Oriented Design methodology and is implemented in C++. The AS681G standard will establish a common generic interface for U.S. engine companies and airframe manufacturers. This will lead to more accurate cycle models, quicker model generation, and faster validation leading to specifications. The standard will facilitate cooperative work between industry and NASA. The NPSS Steady-State Cycle Deck team released a batch version of the Steady-State Cycle Deck in March 1996. Version 1.1 was released in June 1996. During fiscal 1997, NPSS accepted enhancements and modifications to the Steady-State Cycle Deck launcher. Consistent with NPSS' commercialization plan, these modifications will be done by a third party that can provide long-term software support.
Searching Lexis and Westlaw: Part III.
ERIC Educational Resources Information Center
Franklin, Carl
1986-01-01
This last installment in a three-part series covers several important areas in the searching of legal information: online (group) training and customer service, documentation (search manuals and other aids), account representatives, microcomputer software, and pricing. Advantages and drawbacks of both the LEXIS and WESTLAW databases are noted.…
19 CFR 162.79b - Recovery of actual loss of duties, taxes and fees or actual loss of revenue.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Recovery of actual loss of duties, taxes and fees or actual loss of revenue. 162.79b Section 162.79b Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE...
Combining results of multiple search engines in proteomics.
Shteynberg, David; Nesvizhskii, Alexey I; Moritz, Robert L; Deutsch, Eric W
2013-09-01
A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques.
Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.
2008-12-01
Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specification and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy to use Graphical User Interface which is driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include: Filtering and dynamic sorting of search results, bookmarkable search results, save, retrieve, and modify search criteria.
ERIC Educational Resources Information Center
Hock, Randolph
This book aims to facilitate more effective and efficient use of World Wide Web search engines by helping the reader: know the basic structure of the major search engines; become acquainted with those attributes (features, benefits, options, content, etc.) that search engines have in common and where they differ; know the main strengths and…
Research on Agriculture Domain Meta-Search Engine System
NASA Astrophysics Data System (ADS)
Xie, Nengfu; Wang, Wensheng
The rapid growth of agricultural web information means that a search engine cannot return satisfactory results for users' queries. In this paper, we propose an agriculture domain search engine system, called ADSE, that obtains results through an advanced interface to several search engines and aggregates them. We also discuss two key technologies: agricultural information determination and the engine.
Seqcrawler: biological data indexing and browsing platform.
Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien
2012-07-24
Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their own data, there is a lack of free and open source solutions for browsing one's own set of data with a flexible query system, able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians search their meta-data and also build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (the whole of GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant, high-availability architecture. It has also been successfully integrated with software that adds extra meta-data from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.
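Because all components share a data system queryable over a simple HTTP interface, client code can stay minimal. The sketch below assumes a hypothetical local deployment; the host, path, parameter names, and response shape are placeholders, not Seqcrawler's documented API.

    import requests

    resp = requests.get(
        "http://localhost:8080/search",      # assumed local deployment
        params={"q": "organism:9606 AND kinase", "start": 0, "rows": 20},
        timeout=10,
    )
    resp.raise_for_status()
    for hit in resp.json().get("hits", []):  # assumed response shape
        print(hit.get("id"), hit.get("description"))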
Using Internet search engines to estimate word frequency.
Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E
2002-05-01
The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
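The study's basic logic is easy to reproduce: obtain a hit count per word, log-transform it, and correlate it with a traditional corpus count. In the sketch below the hit counts and corpus counts are invented stand-ins, and no current public search API is implied.

    import math
    from scipy.stats import spearmanr

    def get_hit_count(word):
        # placeholder: in practice, query a search engine and read the
        # reported number of matching pages (illustrative numbers here)
        fake_index = {"house": 5_200_000, "zygote": 41_000, "the": 98_000_000}
        return fake_index[word]

    words = ["house", "zygote", "the"]
    corpus_counts = {"house": 590, "zygote": 1, "the": 70_000}  # illustrative

    web_log = [math.log10(get_hit_count(w)) for w in words]
    corpus_log = [math.log10(corpus_counts[w]) for w in words]
    rho, p = spearmanr(web_log, corpus_log)
    print(f"Spearman rho = {rho:.2f}")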
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
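A much-simplified sketch of the two ingredients the abstract names, putting raw engine scores on a common probability scale and estimating the false discovery rate, is shown below using rank-based calibration and the standard target-decoy idea. Unlike MSblender, it ignores the correlation between search scores, and all numbers are invented.

    import numpy as np

    def to_probability(raw_scores):
        """Rank-based calibration of one engine's raw scores to (0, 1)."""
        order = np.argsort(np.argsort(raw_scores))
        return (order + 1) / (len(raw_scores) + 1)

    def fdr_curve(scores, is_decoy):
        """FDR at each threshold: decoys / targets at or above that score."""
        idx = np.argsort(scores)[::-1]
        decoys = np.cumsum(is_decoy[idx])
        targets = np.cumsum(~is_decoy[idx])
        return idx, decoys / np.maximum(targets, 1)

    engine_a = np.array([50.1, 22.0, 47.3, 8.9])  # e.g. raw hyperscores
    engine_b = np.array([0.9, 0.3, 0.8, 0.1])     # e.g. posterior-like scores
    combined = (to_probability(engine_a) + to_probability(engine_b)) / 2
    idx, fdr = fdr_curve(combined, np.array([False, False, False, True]))
    print(list(zip(idx, fdr)))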
The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance
NASA Astrophysics Data System (ADS)
Zimmer, M.
Web search engines have emerged as a ubiquitous and vital tool for the successful navigation of the growing online informational sphere. The goal of the world's largest search engine, Google, is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. While intended to enhance intellectual mobility in the online sphere, this chapter reveals that the quest for the perfect search engine requires the widespread monitoring and aggregation of a users' online personal and intellectual activities, threatening the values the perfect search engines were designed to sustain. It argues that these search-based infrastructures of dataveillance contribute to a rapidly emerging "soft cage" of everyday digital surveillance, where they, like other dataveillance technologies before them, contribute to the curtailing of individual freedom, affect users' sense of self, and present issues of deep discrimination and social justice.
Searching the Internet for information on prostate cancer screening: an assessment of quality.
Ilic, Dragan; Risbridger, Gail; Green, Sally
2004-07-01
To identify how on-line information relating to prostate cancer screening (PCS) is best sourced, whether through general, medical, or meta-search engines, and to assess the quality of that information. Websites providing information about PCS were searched across 15 search engines representing three distinct types: general, medical, and meta-search engines. The quality of on-line information was assessed using the DISCERN quality assessment tool. Quality performance characteristics were analyzed by performing Mann-Whitney U tests. Search engine efficiency was measured by each search query as a percentage of the relevant websites included for analysis from the total returned and analyzed by performing Kruskal-Wallis analysis of variance. Of 6690 websites reviewed, 84 unique websites were identified as providing information relevant to PCS. General and meta-search engines were significantly more efficient at retrieving relevant information on PCS compared with medical search engines. The quality of information was variable, with most of a poor standard. Websites that provided referral links to other resources and a citation of evidence provided a significantly better quality of information. In contrast, websites offering a direct service were more likely to provide a significantly poorer quality of information. The current lack of a clear consensus on guidelines and recommendations in the published data is also reflected by the variable quality of information found on-line. Specialized medical search engines were no more likely to retrieve relevant, high-quality information than general or meta-search engines.
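The reported statistics map directly onto standard library calls; the sketch below runs a Mann-Whitney U test on hypothetical DISCERN scores and a Kruskal-Wallis test on hypothetical per-engine-type efficiency values (all numbers invented, not the study's data).

    from scipy.stats import mannwhitneyu, kruskal

    with_refs = [52, 61, 58, 49, 64]     # DISCERN scores, sites citing evidence
    without_refs = [38, 41, 35, 44, 40]  # DISCERN scores, sites without citations
    u, p = mannwhitneyu(with_refs, without_refs, alternative="two-sided")
    print(f"Mann-Whitney U={u}, p={p:.3f}")

    general = [12.0, 9.5, 14.2]          # % relevant results per engine type
    medical = [3.1, 4.4, 2.8]
    meta = [10.3, 11.7, 8.9]
    h, p = kruskal(general, medical, meta)
    print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")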
Re-engineering pre-employment check-up systems: a model for improving health services.
Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin
2011-01-01
The purpose of this paper is to develop a model for improving health services provided by the pre-employment medical fitness check-up system affiliated to Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers, and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved after re-engineering: 18.3 +/- 5.5 minutes, compared with 48.8 +/- 14.5 minutes before. Appointment delay was also significantly decreased from an average of 18 to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model shows that re-engineering program costs are exceeded by the increased revenue. Re-engineering in this study involved multiple structure and process elements. The literature review did not reveal similar re-engineering healthcare packages; therefore, each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.
Samadzadeh, Gholam Reza; Rigi, Tahereh; Ganjali, Ali Reza
2013-01-01
Surveying valuable and up-to-date information on the internet has become vital for researchers and scholars, because every day thousands, perhaps millions, of scientific works are published as digital resources, and researchers cannot ignore this great resource when searching for related documents, which may not be found in any library. Given the variety of documents presented on the internet, search engines are among the most effective tools for finding information. The aim of this study is to evaluate three criteria (recall, preciseness and importance) for four search engines, PubMed, Science Direct, Google Scholar and the federated search of the Iranian National Medical Digital Library, in addiction (prevention and treatment), in order to select the most effective search engine for literature research. This was a cross-sectional study in which four popular search engines in the medical sciences were evaluated. To select keywords, medical subject headings (MeSH) were used. We entered the given keywords in the search engines and evaluated the first 10 entries returned. Direct observation was used as the means of data collection, and the data were analyzed with descriptive statistics (number, percentage and mean) and inferential statistics (one-way analysis of variance (ANOVA) and post hoc Tukey tests) in SPSS 15 statistical software. A P value < 0.05 was considered statistically significant. The search engines performed differently with regard to the evaluated criteria. Since the P value was 0.004 < 0.05 for preciseness and 0.002 < 0.05 for importance, there were significant differences among the search engines. PubMed, Science Direct and Google Scholar were the best in recall, preciseness and importance, respectively. As literature research is one of the most important stages of research, researchers, especially Substance-Related Disorders scholars, should use the search engines with the best recall, preciseness and importance in their subject field to reach desirable results, rather than depending on just one search engine.
Search Engine Liability for Copyright Infringement
NASA Astrophysics Data System (ADS)
Fitzgerald, B.; O'Brien, D.; Fitzgerald, A.
The chapter provides a broad overview to the topic of search engine liability for copyright infringement. In doing so, the chapter examines some of the key copyright law principles and their application to search engines. The chapter also provides a discussion of some of the most important cases to be decided within the courts of the United States, Australia, China and Europe regarding the liability of search engines for copyright infringement. Finally, the chapter will conclude with some thoughts for reform, including how copyright law can be amended in order to accommodate and realise the great informative power which search engines have to offer society.
Gas Station Pricing Game: A Lesson in Engineering Economics and Business Strategies.
ERIC Educational Resources Information Center
Sin, Aaron; Center, Alfred M.
2002-01-01
Describes an educational game designed for engineering majors that demonstrates engineering economics and business strategies, specifically the concepts of customer perception of product value, convenience, and price differentiation. (YDS)
The Cold Dark Matter Search test stand warm electronics card
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hines, Bruce; /Colorado U., Denver; Hansen, Sten
A card which does the signal processing for four SQUID amplifiers and two charge sensitive channels is described. The card performs the same functions as is presently done with two custom 9U x 280mm Eurocard modules, a commercial multi-channel VME digitizer, a PCI to GPIB interface, a PCI to VME interface and a custom built linear power supply. By integrating these functions onto a single card and using the power over Ethernet standard, the infrastructure requirements for instrumenting a Cold Dark Matter Search (CDMS) detector test stand are significantly reduced.
Caro-Rojas, Rosa Angela; Eslava-Schmalbach, Javier H
2005-01-01
To compare the information obtained from the Medline database using Internet commercial search engines with that obtained from a compact disc (Medline-CD). An agreement study was carried out based on 101 clinical scenarios provided by specialists in internal medicine, pharmacy, gynaecology-obstetrics, surgery and paediatrics. 175 search strategies were employed using the connector AND plus text within quotation marks. The search was limited to 1991-1999. Internet search-engines were selected by common criteria. Identical search strategies were independently applied to and masked from Internet search engines, as well as the Medline-CD. 3,488 articles were obtained using 129 search strategies. Agreement with the Medline-CD was 54% for PubMed, 57% for Gateway, 54% for Medscape and 65% for BioMedNet. The highest agreement rate for a given speciality (paediatrics) was 78.1% for BioMedNet, having greater -/- than +/+ agreement. Even though free access to Medline has encouraged the boom and growth of evidence-based medicine, these results must be considered within the context of which search engine was selected for doing the searches. The Internet search engines studied showed a poor agreement with the Medline-CD, the rate of agreement differing according to speciality, thus significantly affecting searches and their reproducibility. Software designed for conducting Medline database searches, including the Medline-CD, must be standardised and validated.
Defining and Exposing Privacy Issues with Social Media
2012-06-11
... Twitter, and LinkedIn. In addition to social networking sites, search engines pose new issues to privacy. ... Social networking, search engines, and storing personal information online in general have been accepted worldwide due to the benefits they provide. Social networking provides even more communication in an information-demanding age, allowing users to interact across great distances. Search engines allow ...
2011-09-01
... search engines to find information. Most commercial search engines (Google, Yahoo, Bing, etc.) provide their indexing and search services at no cost. The DoD can achieve large gains at a small cost by making public documents available to search engines. This can be achieved through the ... were organized on the website dodreports.com. The results of this research revealed improvement gains of 8-20% for finding reports through commercial search engines during the first six months of ...
Can people find patient decision aids on the Internet?
Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette
2008-12-01
To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English language PtDAs were available from at least three different developers. The search engines used were Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). Searches using the Yahoo and Google search engines were more likely to find PtDAs. The combination of search terms condition, treatment, decision (e.g. breast cancer surgery decision) was most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved by standardized labelling, by providing patients with specific Web site addresses, or by access to an independent PtDA clearinghouse.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Civil Asset Forfeiture Reform Act § 162.91 Exemptions. The provisions of this subpart will apply to all seizures of property for civil forfeiture made by Customs and Border Protection or Immigration and Customs Enforcement officers except for those seizures of property...
Carter, Tony
2007-01-01
To build this process it is necessary to consult customers about their preferences and to build the familiarity and knowledge needed to form a relationship and conduct business in a customized fashion. The process takes every opportunity to build customer satisfaction with each customer contact. It is an important process to have, since customers today are more demanding, sophisticated, educated and comfortable speaking to the company as an equal (Belk, 2003). Customers have more customized expectations, so they want to be reached as individuals (Raymond and Tanner, 1994). Also, a disproportionate search for new business is costly: the cost of cultivating new customers exceeds that of maintaining existing ones (Cathcart, 1990). Customer retention also matters because many unhappy customers will never buy again from a company that dissatisfied them, and they will communicate their displeasure to other people. Some dissatisfied customers may not even convey their displeasure; without saying anything, they simply stop doing business with the company, which may leave it unaware for some time that any problem exists (Cathcart, 1990).
Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian
2004-01-01
Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as the healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p < 0.001). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311
On-line searching: costly or cost effective? A marketing perspective.
Dunn, R G; Boyle, H F
1984-05-01
The value of acquiring and using information is not well understood. Decisions to purchase information are made on the basis of the perceived need for the information, the anticipated benefit of using it, and the price. The current pricing of on-line information services, which emphasizes the connect hour as the unit of price, does not relate the price of a search to the value of a search, and the education programs of on-line vendors and database suppliers concentrate on the mechanics of information retrieval rather than on the application of information to the customer's problem. The on-line information industry needs to adopt a strong marketing orientation that focuses on the needs of customers rather than the needs of suppliers or vendors.
CASIS Fact Sheet: Hardware and Facilities
NASA Technical Reports Server (NTRS)
Solomon, Michael R.; Romero, Vergel
2016-01-01
Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and in ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).
Mining and integration of pathway diagrams from imaging data.
Kozhenkov, Sergey; Baitaluk, Michael
2012-03-01
Pathway diagrams from PubMed and the World Wide Web (WWW) contain valuable, highly curated information that is difficult to reach without tools specifically designed and customized for the biological semantics and high content density of the images. There is currently no search engine or tool that can analyze pathway images, extract their pathway components (molecules, genes, proteins, organelles, cells, organs, etc.) and indicate their relationships. Here, we describe a resource of pathway diagrams retrieved from article and web-page images through optical character recognition, in conjunction with data mining and data integration methods. The recognized pathways are integrated into the BiologicalNetworks research environment, linking them to the wealth of data available in the BiologicalNetworks knowledgebase, which integrates data from >100 public data sources and the biomedical literature. Multiple search and analytical tools are available that allow the recognized cellular pathways, molecular networks and cell/tissue/organ diagrams to be studied in the context of integrated knowledge, experimental data and the literature. BiologicalNetworks software and the pathway repository are freely available at www.biologicalnetworks.org. Supplementary data are available at Bioinformatics online.
An open-source, mobile-friendly search engine for public medical knowledge.
Samwald, Matthias; Hanbury, Allan
2014-01-01
The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.
An Analysis of the Field Service Function of Selected Electronics Firms
1992-01-01
[Table-of-contents excerpt: Customer Engineer Evaluation; Customer Satisfaction; Customer Complaints.] ... system to assure satisfaction of requirements for operation, maintenance, and repair of products; the establishment of a responsive, efficient, and cost ... research. This dissertation addresses some of the identified research needs and provides a contribution to the field service body of knowledge. By analyzing ...
Surveying the Numeric Databanks.
ERIC Educational Resources Information Center
O'Leary, Mick
1987-01-01
Describes six leading numeric databank services and compares them with bibliographic databases in terms of customers' needs, search software, pricing arrangements, and the role of the search specialist. A listing of the locations of the numeric databanks discussed is provided. (CLB)
A Collaboration in Support of LBA Science and Data Exchange: Beija-flor and EOS-WEBSTER
NASA Astrophysics Data System (ADS)
Schloss, A. L.; Gentry, M. J.; Keller, M.; Rhyne, T.; Moore, B.
2001-12-01
The University of New Hampshire (UNH) has developed a Web-based tool that makes data, information, products, and services concerning terrestrial ecological and hydrological processes available to the Earth Science community. Our WEB-based System for Terrestrial Ecosystem Research (EOS-WEBSTER) provides a GIS-oriented interface to select, subset, reformat and download three main types of data: selected NASA Earth Observing System (EOS) remotely sensed data products, results from a suite of ecosystem and hydrological models, and geographic reference data. The Large Scale Biosphere-Atmosphere Experiment in Amazonia Project (LBA) has implemented a search engine, Beija-flor, that provides a centralized access point to data sets acquired for and produced by LBA researchers. The metadata in the Beija-flor index describe the content of the data sets and contain links to data distributed around the world. The query system returns a list of data sets that meet the search criteria of the user. A common problem when a user of a system like Beija-flor wants data products located within another system is that users are required to re-specify information, such as spatial coordinates, in the other system. This poster describes methodology by which Beija-flor generates a unique URL containing the requested search parameters and passes the information to EOS-WEBSTER, thus making the interactive services and large diverse data holdings in EOS-WEBSTER directly available to Beija-flor users. This "Calling Card" is used by EOS-WEBSTER to generate on-demand custom products tailored to each Beija-flor request. Through a collaborative effort, we have demonstrated the ability to integrate project-specific search engines such as Beija-flor with the products and services of large data systems such as EOS-WEBSTER, to provide very specific information products with a minimal amount of additional programming. This methodology has the potential to greatly facilitate research data exchange by enhancing the interoperability of diverse data systems beyond the two described here.
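The "Calling Card" hand-off described above amounts to serializing one system's search parameters into a URL that the other system can parse. A minimal sketch of the idea follows; the endpoint and parameter names are hypothetical, not EOS-WEBSTER's actual interface.

    from urllib.parse import urlencode

    def make_calling_card(base_url, dataset, west, south, east, north, start, end):
        """Serialize a spatial/temporal search into a shareable URL."""
        params = {
            "dataset": dataset,
            "bbox": f"{west},{south},{east},{north}",  # spatial subset
            "start": start,                            # temporal range
            "end": end,
        }
        return f"{base_url}?{urlencode(params)}"

    # The receiving system parses these parameters to build a custom product.
    print(make_calling_card("https://example.org/webster/search",
                            "LBA_precipitation", -62.0, -12.0, -48.0, -2.0,
                            "1999-01-01", "2000-12-31"))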
Design and development of the Waukesha Custom Engine Control Air/Fuel Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, D.W.
1996-12-31
The Waukesha Custom Engine Control Air/Fuel Module (AFM) is designed to control the air-fuel ratio for all Waukesha carbureted, gaseous-fueled, industrial engines. The AFM is programmed with a personal computer to run in one of four control modes: catalyst, best power, best economy, or lean-burn. One system can control naturally aspirated, turbocharged, in-line or vee engines. The basic system consists of an oxygen sensing system, intake manifold pressure transducer, electronic control module, actuator and exhaust thermocouple. The system permits correct operation of Waukesha engines in spite of changes in fuel pressure or temperature, engine load or speed, and fuel composition. The system utilizes closed-loop control and is centered about oxygen sensing technology. An innovative approach to applying oxygen sensors to industrial engines provides very good performance, greatly prolongs sensor life, and maintains sensor accuracy. Design considerations and operating results are given for application of the system to stationary, industrial engines operating on fuel gases of greatly varying composition.
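The closed-loop principle described here, correcting a fuel actuator from an exhaust oxygen reading, can be sketched in a few lines. The gains, setpoint, and units below are invented for illustration; Waukesha's actual control algorithm is not published in this abstract.

    def afm_step(o2_reading, setpoint, actuator_pos, gain=0.05):
        """One control iteration: nudge the fuel actuator toward the O2 setpoint."""
        error = setpoint - o2_reading             # rich/lean error from exhaust O2
        actuator_pos += gain * error              # integral-style correction
        return max(0.0, min(1.0, actuator_pos))   # clamp to valid actuator travel

    pos = 0.5
    for o2 in [0.80, 0.90, 1.10, 1.05, 1.00]:     # simulated sensor readings
        pos = afm_step(o2, setpoint=1.00, actuator_pos=pos)
        print(f"O2={o2:.2f} -> actuator={pos:.3f}")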
The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet
ERIC Educational Resources Information Center
Hill, Paul; Rader, Heidi B.; Hino, Jeff
2012-01-01
For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…
Publications - Search Help | Alaska Division of Geological & Geophysical Surveys
General Hints: The search engine will retrieve those publications that match your search criteria. If a publication's title is known, enter those words in the title input box; the search engine will look for all of them. Publication Year: The search engine will retrieve all publication years by default; select one publication year ...
EXADS - EXPERT SYSTEM FOR AUTOMATED DESIGN SYNTHESIS
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1994-01-01
The expert system called EXADS was developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. Because of the general purpose nature of ADS, it is difficult for a nonexpert to select the best choice of strategy, optimizer, and one-dimensional search options from the one hundred or so combinations that are available. EXADS aids engineers in determining the best combination based on their knowledge of the problem and the expert knowledge previously stored by experts who developed ADS. EXADS is a customized application of the AESOP artificial intelligence program (the general version of AESOP is available separately from COSMIC. The ADS program is also available from COSMIC.) The expert system consists of two main components. The knowledge base contains about 200 rules and is divided into three categories: constrained, unconstrained, and constrained treated as unconstrained. The EXADS inference engine is rule-based and makes decisions about a particular situation using hypotheses (potential solutions), rules, and answers to questions drawn from the rule base. EXADS is backward-chaining, that is, it works from hypothesis to facts. The rule base was compiled from sources such as literature searches, ADS documentation, and engineer surveys. EXADS will accept answers such as yes, no, maybe, likely, and don't know, or a certainty factor ranging from 0 to 10. When any hypothesis reaches a confidence level of 90% or more, it is deemed as the best choice and displayed to the user. If no hypothesis is confirmed, the user can examine explanations of why the hypotheses failed to reach the 90% level. The IBM PC version of EXADS is written in IQ-LISP for execution under DOS 2.0 or higher with a central memory requirement of approximately 512K of 8 bit bytes. This program was developed in 1986.
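The backward-chaining scheme described above, working from a hypothesis back to facts and confirming it at a 90% confidence level, can be sketched as follows. The rules, weights, and hypothesis names are invented placeholders, not EXADS's actual rule base.

    RULES = {
        # hypothesis: list of (sub-goal, weight) pairs -- invented examples
        "use_constrained_strategy": [("problem_is_constrained", 0.5),
                                     ("gradients_available", 0.4)],
        "problem_is_constrained": [("has_inequality_constraints", 1.0)],
    }
    FACTS = {"has_inequality_constraints": 1.0, "gradients_available": 0.9}

    def support(goal):
        """Backward-chain: a goal's support comes from facts or from sub-goals."""
        if goal in FACTS:
            return FACTS[goal]
        return sum(w * support(sub) for sub, w in RULES.get(goal, []))

    c = support("use_constrained_strategy")
    print(f"confidence {c:.2f}:", "confirmed" if c >= 0.9 else "keep questioning")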
Searching for Information Online: Using Big Data to Identify the Concerns of Potential Army Recruits
2016-01-01
... software. For instance, Internet search engines such as Google or Yahoo! often gather anonymized data regarding the topics that people search for, as well as the date and ... suggesting that these and other information needs may be further reflected in usage of online search engines. Google makes aggregated and anonymized ...
Index Relativity and Patron Search Strategy.
ERIC Educational Resources Information Center
Allison, DeeAnn; Childers Scott
2002-01-01
Describes a study at the University of Nebraska-Lincoln that compared searches in two different keyword indexes with similar content where search results were dependent on search strategy quality, search engine execution, and content. Results showed search engine execution had an impact on the number of matches and that users ignored search help…
40 CFR 1045.105 - What exhaust emission standards must my sterndrive/inboard engines meet?
Code of Federal Regulations, 2011 CFR
2011-07-01
... the fuel type on which the engines in the engine family are designed to operate. You must meet the... data, such as data from research engines or similar engine models that are already in production. Your... for the engine or its components, and any relevant customer design specifications. Your demonstration...
40 CFR 1045.105 - What exhaust emission standards must my sterndrive/inboard engines meet?
Code of Federal Regulations, 2012 CFR
2012-07-01
... the fuel type on which the engines in the engine family are designed to operate. You must meet the... data, such as data from research engines or similar engine models that are already in production. Your... for the engine or its components, and any relevant customer design specifications. Your demonstration...
40 CFR 1045.105 - What exhaust emission standards must my sterndrive/inboard engines meet?
Code of Federal Regulations, 2014 CFR
2014-07-01
... the fuel type on which the engines in the engine family are designed to operate. You must meet the... data, such as data from research engines or similar engine models that are already in production. Your... for the engine or its components, and any relevant customer design specifications. Your demonstration...
40 CFR 1045.105 - What exhaust emission standards must my sterndrive/inboard engines meet?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the fuel type on which the engines in the engine family are designed to operate. You must meet the... data, such as data from research engines or similar engine models that are already in production. Your... for the engine or its components, and any relevant customer design specifications. Your demonstration...
40 CFR 1045.105 - What exhaust emission standards must my sterndrive/inboard engines meet?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the fuel type on which the engines in the engine family are designed to operate. You must meet the... data, such as data from research engines or similar engine models that are already in production. Your... for the engine or its components, and any relevant customer design specifications. Your demonstration...
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
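The visibility measurement itself reduces to checking, for each query, whether and where a catalog's domain appears in an engine's result list. A toy sketch follows; the result lists and domains are invented, whereas the study issued over half a million real queries.

    RESULTS = {  # query term -> ordered result domains (invented)
        "asthma":   ["cliniweb.example", "news.example", "cismef.example"],
        "diabetes": ["news.example", "cismef.example", "cliniweb.example"],
    }
    CATALOGS = ["cliniweb.example", "cismef.example"]

    for catalog in CATALOGS:
        ranks = [hits.index(catalog) + 1
                 for hits in RESULTS.values() if catalog in hits]
        mean_rank = sum(ranks) / len(ranks) if ranks else float("inf")
        print(f"{catalog}: in {len(ranks)}/{len(RESULTS)} result pages, "
              f"mean rank {mean_rank:.1f}")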
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript that can even execute in the user's browser, and two popular open-source relational database management systems.
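The search procedure, enumerating ever-larger word combinations from a code's text until the code itself ranks first, can be sketched as below. The toy overlap scorer stands in for a real full-text engine, and the three codes are an invented sample.

    from itertools import combinations

    CODES = {  # a tiny sample of ICD-10 code texts
        "J45": "asthma",
        "J44": "chronic obstructive pulmonary disease",
        "I10": "essential primary hypertension",
    }

    def score(query_words, text):
        """Token-overlap stand-in for a full-text relevance score."""
        return len(set(query_words) & set(text.split()))

    def minimum_words(target):
        """Smallest combination of the code's own words that ranks it first."""
        words = CODES[target].split()
        for k in range(1, len(words) + 1):
            for combo in combinations(words, k):
                best = max(CODES, key=lambda c: score(combo, CODES[c]))
                if best == target and score(combo, CODES[target]) > 0:
                    return combo
        return None

    print(minimum_words("I10"))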
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Astrophysics Data System (ADS)
Berrick, S. W.; Lynnes, C.
2007-12-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
2018-03-09
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world. Textpresso Central URL: http://www.textpresso.org/tpc.
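The combined keyword-and-category search that TPC offers can be illustrated with a toy matcher: a category expands into its member terms, and a document must contain the keyword plus at least one term from each requested category. The category contents below are invented.

    CATEGORIES = {  # invented category -> member terms
        "gene": {"lin-12", "daf-16", "unc-86"},
        "regulation": {"activates", "represses", "regulates"},
    }

    def matches(doc, keyword, category_names):
        tokens = set(doc.lower().split())
        if keyword.lower() not in tokens:
            return False
        # every requested category must contribute at least one term
        return all(tokens & CATEGORIES[c] for c in category_names)

    docs = ["daf-16 activates stress response genes in C. elegans",
            "stress response pathways are reviewed here"]
    print([d for d in docs if matches(d, "stress", ["gene", "regulation"])])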
Internet Search Engines - Fluctuations in Document Accessibility.
ERIC Educational Resources Information Center
Mettrop, Wouter; Nieuwenhuysen, Paul
2001-01-01
Reports an empirical investigation of the consistency of retrieval through Internet search engines. Evaluates 13 engines: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of…
DockoMatic 2.0: High Throughput Inverse Virtual Screening and Homology Modeling
Bullock, Casey; Cornia, Nic; Jacob, Reed; Remm, Andrew; Peavey, Thomas; Weekes, Ken; Mallory, Chris; Oxford, Julia T.; McDougal, Owen M.; Andersen, Timothy L.
2013-01-01
DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly Graphical User Interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to: (1) conduct high throughput Inverse Virtual Screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently setup, start, and manage IVS experiments through the DockoMatic GUI by specifying a receptor(s), ligand(s), grid parameter file(s), and docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories, and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by Dockomatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The wizard TIM provides an interface that accesses the basic local alignment search tool (BLAST) and MODELLER programs, and guides the user through the necessary steps to easily and efficiently create 3D homology models for biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third party programs. DockoMatic is a free comprehensive molecular docking software program for all levels of scientists in both research and education. PMID:23808933
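Inverse virtual screening of the kind DockoMatic automates, one ligand docked against a panel of receptors, can be driven from a script. The sketch below shells out to AutoDock Vina in a loop; file names are placeholders, and option names should be checked against your Vina build.

    import subprocess
    from pathlib import Path

    ligand = "ligand.pdbqt"
    for receptor in Path("receptors").glob("*.pdbqt"):
        out = f"out_{receptor.stem}.pdbqt"
        subprocess.run(["vina", "--receptor", str(receptor),
                        "--ligand", ligand,
                        "--config", "grid_params.txt",  # grid box settings
                        "--out", out],
                       check=True)
        print(f"docked {ligand} against {receptor.name} -> {out}")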
19 CFR 12.73 - Motor vehicle and engine compliance with Federal antipollution emission requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 1 2012-04-01 2012-04-01 false Motor vehicle and engine compliance with Federal... Vehicles, Motor Vehicle Engines and Nonroad Engines Under the Clean Air Act, As Amended § 12.73 Motor vehicle and engine compliance with Federal antipollution emission requirements. (a) Applicability of EPA...
19 CFR 12.73 - Motor vehicle and engine compliance with Federal antipollution emission requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 1 2013-04-01 2013-04-01 false Motor vehicle and engine compliance with Federal... Vehicles, Motor Vehicle Engines and Nonroad Engines Under the Clean Air Act, As Amended § 12.73 Motor vehicle and engine compliance with Federal antipollution emission requirements. (a) Applicability of EPA...
19 CFR 12.73 - Motor vehicle and engine compliance with Federal antipollution emission requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 1 2014-04-01 2014-04-01 false Motor vehicle and engine compliance with Federal... Vehicles, Motor Vehicle Engines and Nonroad Engines Under the Clean Air Act, As Amended § 12.73 Motor vehicle and engine compliance with Federal antipollution emission requirements. (a) Applicability of EPA...
Do Librarians Really Do That? Or Providing Custom, Fee-Based Services.
ERIC Educational Resources Information Center
Whitmore, Susan; Heekin, Janet
This paper describes some of the fee-based, custom services provided by National Institutes of Health (NIH) Library to NIH staff, including knowledge management, clinical liaisons, specialized database searching, bibliographic database development, Web resource guide development, and journal management. The first section discusses selecting the…
Person-Environment Congruence as a Predictor of Customer Service Performance.
ERIC Educational Resources Information Center
Fritzsche, Barbara A.; Powell, Amy B.; Hoffman, Russell
1999-01-01
Customer service representatives (n=90) completed the Position Classification Inventory (PCI), Self-Directed Search, and a cognitive ability test. PCI was similar to the Dictionary of Holland Occupational Codes in predicting performance. Cognitive ability was not significantly correlated with performance. Person/environment fit was supported as a…
19 CFR 162.21 - Responsibility and authority for seizures.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Responsibility and authority for seizures. 162.21...; DEPARTMENT OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Seizures § 162.21 Responsibility and authority for seizures. (a) Seizures by Customs officers. Property may be seized, if available, by any...
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system resources (HMSR) are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot meet the need for useful, accurate, reliable and good-quality HMSR related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principles of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. Our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
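The paper's semantic-based PageRank score is not reproduced in the abstract; for reference, such variants typically extend the classic PageRank recurrence, where d is the damping factor (commonly 0.85), N the number of pages, M(p_i) the set of pages linking to p_i, and L(p_j) the out-degree of p_j:

    PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}

Semantic variants generally replace the uniform link weight 1/L(p_j) with a weight derived from semantic similarity between pages.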
Application of Kansei engineering and data mining in the Thai ceramic manufacturing
NASA Astrophysics Data System (ADS)
Kittidecha, Chaiwat; Yamada, Koichi
2018-01-01
Ceramics are among the most competitive products in Thailand. Many Thai ceramic companies are attempting to understand customer needs and perceptions in order to make well-liked products; knowing customer needs is the target of designers, who must develop products that satisfy customers. This research applies Kansei Engineering (KE) and Data Mining (DM) to the customer-driven product design process. KE can translate customer emotions into product attributes by determining the relationships between customer feelings (Kansei words) and design attributes. The J48 decision tree and class association rules, implemented through the Waikato Environment for Knowledge Analysis (WEKA) software, are used to generate a predictive model and to find the appropriate rules. In this experiment, emotion scores were rated by 37 participants for training data and 16 participants for test data. Six Kansei words were selected, namely attractive, ease of drinking, ease of handling, quality, modern and durable, and 10 mugs were selected as product samples. The results indicate that the proposed models and rules can interpret the product design elements affecting customer emotions. Finally, this study provides useful understanding of the application of DM in KE and can be applied to a variety of design cases.
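WEKA's J48 is an implementation of the C4.5 decision-tree learner. As a rough analogue (scikit-learn uses CART rather than C4.5), the sketch below learns rules from invented mug attributes and ratings; the feature names, codings, and labels are placeholders, not the paper's data.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Design attributes of mugs: [handle_size, body_shape, gloss], coded 0/1/2.
    X = [[0, 1, 0], [1, 1, 2], [2, 0, 1], [0, 0, 2], [1, 2, 1], [2, 2, 0]]
    y = ["plain", "attractive", "attractive", "plain", "attractive", "plain"]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["handle_size", "body_shape", "gloss"]))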
ECOTOX knowledgebase: Search features and customized reports
The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, publicly available knowledgebase developed and maintained by ORD/NHEERL. It is used for environmental toxicity data on aquatic life, terrestrial plants and wildlife. ECOTOX has the capability to refine and filter search...
An integrative fuzzy Kansei engineering and Kano model for logistics services
NASA Astrophysics Data System (ADS)
Hartono, M.; Chuan, T. K.; Prayogo, D. N.; Santoso, A.
2017-11-01
Nowadays, customer emotional needs (known as Kansei) in products, and especially in services, have become a major concern. Logistics is one of the emerging services. To obtain a global competitive advantage, logistics services should understand and satisfy their customers' affective impressions (Kansei). How to capture, model and analyze customer emotions has been well structured by Kansei Engineering, equipped with the Kano model to strengthen its methodology. However, this methodology does not capture the dynamics of customer perception. More specifically, there is a criticism of perceived scores on user preferences, in both perceived service quality and Kansei response, as to whether they represent an exact numerical value. This paper therefore discusses an approach of fuzzy Kansei in logistics service experiences. A case study in IT-based logistics services involving 100 subjects has been conducted, and its findings, including the service gaps accompanied by prioritized improvement initiatives, are discussed.
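One common way to realize "fuzzy" Kansei ratings is to map each Likert point to a triangular fuzzy number and defuzzify by a centroid; the mapping below is a conventional textbook choice, not necessarily the membership functions used in this paper.

    # Likert point -> triangular fuzzy number (low, mid, high), a common convention.
    TFN = {1: (0, 0, 25), 2: (0, 25, 50), 3: (25, 50, 75),
           4: (50, 75, 100), 5: (75, 100, 100)}

    def centroid(tfn):
        low, mid, high = tfn
        return (low + mid + high) / 3.0   # simple centroid defuzzification

    ratings = [4, 5, 3, 4]                # one Kansei word, several respondents
    crisp = sum(centroid(TFN[r]) for r in ratings) / len(ratings)
    print(f"defuzzified score: {crisp:.1f}")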
Propulsion Ground Testing with High Test Peroxide: Lessons Learned
NASA Technical Reports Server (NTRS)
Bruce, Robert; Taylor, Gary; Taliancich, Paula
2002-01-01
Propulsion Ground Testing with High Test Peroxide (85 to 98% concentration) began at the NASA John C. Stennis Space Center in calendar year 1998, when the E3 Test Facility was modified to accommodate hydrogen peroxide (H2O2) in order to support the research and development testing of the USAF Upper Stage Flight Experiment rocket engine. Since that time, efforts have continued to provide actual and planned test services to various customers, both U.S. Government and Commercial, in the ground test of many test articles, ranging from gas generators, to catalyst beds, to turbomachinery, to main injectors, to combustion chambers, to integrated rocket engines, to integrated stages. Along this path, and over the past 4 years, there has been both the rediscovery of previously learned lessons, through literature search, archive review, and personal interviews, as well as the learning of many new lessons as new areas are explored and new endeavors are tried. This paper will summarize those lessons learned in an effort to broaden the knowledge base as High Test Peroxide is considered more widely for use in rocket propulsion applications.
Vukmir, Rade B
2006-01-01
This paper presents an analysis of the literature examining objective information on customer service as it applies to current medical practice, with the aim of synthesizing this information into a cogent approach that correlates customer service with quality. Articles were obtained by an English language search of MEDLINE from January 1976 to July 2005. This computerized search was supplemented with literature from the author's personal collection of peer-reviewed articles on customer service in a medical setting. The information is presented in a qualitative fashion. There is a significant lack of objective data correlating customer service objectives, patient satisfaction and quality of care. Patients present predominantly for the convenience of emergency department care. Specifics of satisfaction are directed to the timing and amount of "caring". Demographic correlates, including symptom presentation, practice style, location and physician issues, directly impact satisfaction. It is most helpful to develop a productive plan for the "difficult patient", emphasizing communication and empathy. Profiling of the customer satisfaction experience is best accomplished by examining the specifics of satisfaction, the nature of the ED patient, the demographic profile, symptom presentation and physician interventions emphasizing communication, especially with the difficult patient. The current emergency medicine customer service dilemmas are a complex interaction of patient and physician factors targeting both efficiency and patient satisfaction. Awareness of these issues particular to the emergency patient can help to maximize efficiency, minimize subsequent medicolegal risk and improve patient care if a tailored management plan is formulated.
The Evolution of Web Searching.
ERIC Educational Resources Information Center
Green, David
2000-01-01
Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…
2016-03-01
The PI studied all the mathematical literature he could find related to the Google search engine, the Google matrix, and PageRank, as well as the Yahoo search engine and a classic SearchKing HITS algorithm. The co-PI immersed herself in the sociology literature for the relevant ...
The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval
ERIC Educational Resources Information Center
Schymik, Gregory
2012-01-01
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…
The effective use of search engines on the Internet.
Younger, P
This article explains how nurses can get the most out of researching information on the internet using the search engine Google. It also explores some of the other types of search engines that are available. Internet users are shown how to find text, images and reports and search within sites. Copyright issues are also discussed.
Practical Tips and Strategies for Finding Information on the Internet.
ERIC Educational Resources Information Center
Armstrong, Rhonda; Flanagan, Lynn
This paper presents the most important concepts and techniques to use in successfully searching the major World Wide Web search engines and directories, explains the basics of how search engines work, and describes what is included in their indexes. Following an introduction that gives an overview of Web directories and search engines, the first…
... You do not need to use AND because the search engine automatically finds resources containing all of your search terms. ... Use ... as a wildcard when you want the search engine to fill in the blank for you; you ...
PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.
Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor
2007-09-01
PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.
Dermatological image search engines on the Internet: do they work?
Cutrone, M; Grimalt, R
2007-02-01
Atlases on CD-ROM were the first substitute for paediatric dermatology atlases printed on paper, permitting faster searches and practical comparison of differential diagnoses. The third step in the evolution of clinical atlases was the onset of the online atlas. Many doctors now use Internet image search engines to obtain clinical images directly. The aim of this study was to test the reliability of image search engines compared with online atlases. We tested seven Internet image search engines with three paediatric dermatology diseases. In general, the service offered by the search engines is good, and it continues to be free of charge. The match between what we searched for and what we found was generally excellent and contained no advertisements. Most Internet search engines provided similar results, but some were more user friendly than others. It is not necessary to repeat the same search with Picsearch, Lycos and MSN, as the responses are the same; they may share software. Image search engines are a useful, free and precise method of obtaining paediatric dermatology images for teaching purposes. The matter of copyright is still to be resolved: what are the legal uses of these 'free' images, and how do we define 'teaching purposes'? New watermark methods and encrypted electronic signatures might solve these problems and answer these questions.
78 FR 35108 - Special Conditions: Eurocopter France, EC175B; Use of 30-Minute Power Rating
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
..., generally intended to be used for hovering at increased power for search and rescue missions. The applicable....gov , including any personal information the commenter provides. Using the search function of the... carrying 16 passengers and 2 crew members. Its initial customer base will be offshore oil and Search and...
PepArML: A Meta-Search Peptide Identification Platform
Edwards, Nathan J.
2014-01-01
The PepArML meta-search peptide identification platform provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; Tandem with native, k-score, and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores — reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies 2–3 times more spectra than individual search engines at 10% FDR. PMID:25663956
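At its core, the combiner keeps the best identification per spectrum across engines. The sketch below uses a single unified score for brevity, whereas PepArML actually ranks candidates with a model-free, machine-learning combiner over many features.

    # (spectrum_id, engine, peptide, unified score) -- invented examples
    results = [
        ("s1", "Mascot",    "PEPTIDEK",   0.91),
        ("s1", "X!Tandem",  "PEPTLDEK",   0.84),
        ("s2", "OMSSA",     "LVNEVTEFAK", 0.77),
        ("s2", "MyriMatch", "LVNEVTEFAK", 0.88),
    ]

    best = {}
    for spectrum, engine, peptide, score in results:
        if spectrum not in best or score > best[spectrum][2]:
            best[spectrum] = (engine, peptide, score)

    for spectrum, (engine, peptide, score) in sorted(best.items()):
        print(f"{spectrum}: {peptide} ({engine}, score={score})")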
PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.
Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin
2015-07-02
Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.
Code of Federal Regulations, 2014 CFR
2014-04-01
... engines, ground flight simulators, parts, components, and subassemblies. 10.183 Section 10.183 Customs... Duty-free entry of civil aircraft, aircraft engines, ground flight simulators, parts, components, and... aircraft, aircraft engines, and ground flight simulators, including their parts, components, and...
Code of Federal Regulations, 2013 CFR
2013-04-01
... engines, ground flight simulators, parts, components, and subassemblies. 10.183 Section 10.183 Customs... Duty-free entry of civil aircraft, aircraft engines, ground flight simulators, parts, components, and... aircraft, aircraft engines, and ground flight simulators, including their parts, components, and...
Code of Federal Regulations, 2012 CFR
2012-04-01
... engines, ground flight simulators, parts, components, and subassemblies. 10.183 Section 10.183 Customs... Duty-free entry of civil aircraft, aircraft engines, ground flight simulators, parts, components, and... aircraft, aircraft engines, and ground flight simulators, including their parts, components, and...
Children's Search Engines from an Information Search Process Perspective.
ERIC Educational Resources Information Center
Broch, Elana
2000-01-01
Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…
The Honeymoon Is Over: Leading the Way to Lasting Search Habits.
ERIC Educational Resources Information Center
Pierson, Melissa
1997-01-01
To become efficient Internet searchers, students and teachers need to learn online search skills. Discusses hierarchical subject directories (Yahoo) and search engines (Excite, Lycos, Alta Vista, HotBot); lists top search engines and their universal resource locators (URL). Provides examples of search strings; outlines search tips, and a…
Combining Search Engines for Comparative Proteomics
Tabb, David
2012-01-01
Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
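The "Vote Counting" strategy can be sketched directly: each engine casts one vote per protein it judges differential, and proteins are ranked by votes received. The judgments below are invented placeholders.

    from collections import Counter

    judgments = {  # engine -> proteins called differential (invented)
        "Sequest":   {"P12345", "Q67890"},
        "MyriMatch": {"P12345", "A11111"},
        "X!Tandem":  {"P12345", "Q67890"},
        "TagRecon":  {"Q67890"},
    }

    votes = Counter(p for called in judgments.values() for p in called)
    for protein, n in votes.most_common():
        print(f"{protein}: {n}/{len(judgments)} engines agree")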
Optimal Server Scheduling to Maintain Constant Customer Waiting Times
1988-12-01
Optimal Server Scheduling to Maintain Constant Customer Waiting Times. Thesis, Thomas J. Frey, Captain, USAF, AFIT/GOR/ENS/88D-7. Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University ...
An ontology-based search engine for protein-protein interactions
2010-01-01
Background Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. Results We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Conclusion Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology. PMID:20122195
Towards Identifying and Reducing the Bias of Disease Information Extracted from Search Engine Data
Huang, Da-Cang; Wang, Jin-Feng; Huang, Ji-Xia; Sui, Daniel Z.; Zhang, Hong-Yan; Hu, Mao-Gui; Xu, Cheng-Dong
2016-01-01
The estimation of disease prevalence in online search engine data (e.g., Google Flu Trends (GFT)) has received a considerable amount of scholarly and public attention in recent years. While the utility of search engine data for disease surveillance has been demonstrated, the scientific community still seeks ways to identify and reduce biases that are embedded in search engine data. The primary goal of this study is to explore new ways of improving the accuracy of disease prevalence estimations by combining traditional disease data with search engine data. A novel method, Biased Sentinel Hospital-based Area Disease Estimation (B-SHADE), is introduced to reduce search engine data bias from a geographical perspective. To monitor search trends on Hand, Foot and Mouth Disease (HFMD) in Guangdong Province, China, we tested our approach by selecting 11 keywords from the Baidu index platform, a Chinese big-data analytics platform similar to GFT. The correlation between the number of real cases and the composite index was 0.8. After decomposing the composite index at the city level, we found that only 10 cities presented a correlation of close to 0.8 or higher. These cities were found to be more stable with respect to search volume, and they were selected as sample cities in order to estimate the search volume of the entire province. After the estimation, the correlation improved from 0.8 to 0.864. After fitting the revised search volume with historical cases, the mean absolute error was 11.19% lower than it was when the original search volume and historical cases were combined. To our knowledge, this is the first study to reduce search engine data bias levels through the use of rigorous spatial sampling strategies. PMID:27271698
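A rough sketch of the city-screening step, with invented numbers and a plain mean in place of B-SHADE's weighted, bias-corrected estimator:

```python
# Keep only cities whose local search index tracks reported cases closely
# (here r >= 0.8), then use those "sentinel" cities to estimate the
# province-wide search volume. All data are illustrative.
import numpy as np

cases = {"CityA": [30, 45, 60, 52, 41], "CityB": [22, 35, 50, 44, 30],
         "CityC": [15, 14, 40, 12, 33]}
search = {"CityA": [310, 450, 580, 520, 400], "CityB": [200, 340, 470, 430, 290],
          "CityC": [150, 300, 180, 260, 120]}

sentinels = [c for c in cases
             if np.corrcoef(cases[c], search[c])[0, 1] >= 0.8]
print(sentinels)                                  # CityC is dropped as unstable
estimate = np.mean([search[c] for c in sentinels], axis=0)
print(estimate)                                   # sentinel-based weekly estimate
```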
Engineering Your Job Search: A Job-Finding Resource for Engineering Professionals.
ERIC Educational Resources Information Center
1995
This guide, which is intended for engineering professionals, explains how to use up-to-date job search techniques to design and conduct an effective job hunt. The first 11 chapters discuss the following steps in searching for a job: handling a job loss; managing time and financial resources while conducting a full-time job search; using objective…
New Architectures for Presenting Search Results Based on Web Search Engines Users Experience
ERIC Educational Resources Information Center
Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.
2011-01-01
Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…
Query Transformations for Result Merging
2014-11-01
Keywords: …tors, term dependence, query expansion. INTRODUCTION: Federated search deals with the problem of aggregating results from multiple search engines. The individual search engines (i) are typically focused on a particular domain or a particular corpus, (ii) employ diverse retrieval models, and (iii) … (i) determining which search engines are appropriate for addressing the information need (resource selection), and (ii) merging the results returned by…
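For context, reciprocal rank fusion (RRF) is one widely used baseline for the merging step; this generic sketch is not necessarily the transformation studied in the report above:

```python
# Reciprocal rank fusion: each engine contributes 1/(k + rank) per document;
# documents are re-ranked by the summed score. k=60 is the customary constant.
def rrf(rankings, k=60):
    """rankings: list of ranked document-id lists, one per engine."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

print(rrf([["d1", "d2", "d3"], ["d2", "d3", "d1"], ["d2", "d1", "d4"]]))
# ['d2', 'd1', 'd3', 'd4']
```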
Quality Function Deployment for Large Systems
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1992-01-01
Quality Function Deployment (QFD) is typically applied to small subsystems. This paper describes efforts to extend QFD to large scale systems. It links QFD to the system engineering process, the concurrent engineering process, the robust design process, and the costing process. The effect is to generate a tightly linked project management process of high dimensionality which flushes out issues early to provide a high quality, low cost, and, hence, competitive product. A pre-QFD matrix linking customers to customer desires is described.
Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...
2013-01-01
This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with a graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of the constitutive behavior of engineering materials, and it has improved the data mining and forecasting capabilities of neural networks. This software has been used to train and simulate finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reduplicates FEM results several orders of magnitude faster than the slow original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that the output node size for the material parameters and the input normalization method for the strain data are critical training conditions for the inverse network. The successful use of ANN modeling and the simulator GUI has been validated with engineering neutron diffraction experimental data by determining constitutive laws of real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.
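The forward-surrogate idea can be illustrated generically: train a small network on simulator input/output pairs and then use it as a fast stand-in. This sketch uses scikit-learn and a made-up quadratic in place of a real FEM; it is not the paper's custom simulator:

```python
# Train a small MLP on (parameter -> response) pairs from a slow "simulator",
# then query the trained network instead of re-running the simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))        # material parameters
y = X[:, 0] ** 2 + 0.5 * X[:, 1]             # pretend FEM response

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)
print(surrogate.predict([[0.3, -0.2]]))      # close to 0.3**2 + 0.5*(-0.2) = -0.01
```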
Probabilistic consensus scoring improves tandem mass spectrometry peptide identification.
Nahnsen, Sven; Bertsch, Andreas; Rahnenführer, Jörg; Nordheim, Alfred; Kohlbacher, Oliver
2011-08-05
Database search is a standard technique for identifying peptides from their tandem mass spectra. To increase the number of correctly identified peptides, we suggest a probabilistic framework that allows the combination of scores from different search engines into a joint consensus score. Central to the approach is a novel method to estimate scores for peptides not found by an individual search engine. This approach allows the estimation of p-values for each candidate peptide and their combination across all search engines. The consensus approach works better than any single search engine across all different instrument types considered in this study. Improvements vary strongly from platform to platform and from search engine to search engine. Compared to the industry standard MASCOT, our approach can identify up to 60% more peptides. The software for consensus predictions is implemented in C++ as part of OpenMS, a software framework for mass spectrometry. The source code is available in the current development version of OpenMS and can easily be used as a command line application or via a graphical pipeline designer TOPPAS.
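Fisher's method is one standard parametric way to combine per-engine p-values of the kind described. The sketch below shows only that combination step, not the paper's machinery for estimating scores for peptides an engine missed; the engine names and values are invented:

```python
# Combine per-engine p-values for one candidate peptide into a consensus
# p-value via Fisher's method.
from scipy.stats import combine_pvalues

engine_pvalues = {"Mascot": 0.03, "X!Tandem": 0.08, "OMSSA": 0.01}
stat, consensus_p = combine_pvalues(list(engine_pvalues.values()),
                                    method="fisher")
print(round(consensus_p, 4))   # joint evidence across engines
```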
Repository-Based Software Engineering (RBSE) program
NASA Technical Reports Server (NTRS)
1992-01-01
Support of a software engineering program was provided in the following areas: client/customer liaison; research representation/outreach; and program support management. Additionally, a list of deliverables is presented.
Where to search top-K biomedical ontologies?
Oliveira, Daniela; Butt, Anila Sahar; Haller, Armin; Rebholz-Schuhmann, Dietrich; Sahay, Ratnesh
2018-03-20
Searching for precise terms and terminological definitions in the biomedical data space is problematic, as researchers find overlapping, closely related and even equivalent concepts in a single or multiple ontologies. Search engines that retrieve ontological resources often suggest an extensive list of search results for a given input term, which leads to the tedious task of selecting the best-fit ontological resource (class or property) for the input term and reduces user confidence in the retrieval engines. A systematic evaluation of these search engines is necessary to understand their strengths and weaknesses in different search requirements. We have implemented seven comparable Information Retrieval ranking algorithms to search through ontologies and compared them against four search engines for ontologies. Free-text queries were performed, the outcomes were judged by experts, and the ranking algorithms and search engines were evaluated against the expert-based ground truth (GT). In addition, we propose a probabilistic GT that is developed automatically to provide deeper insights and confidence to the expert-based GT as well as evaluating a broader range of search queries. The main outcome of this work is the identification of key search factors for biomedical ontologies together with search requirements and a set of recommendations that will help biomedical experts and ontology engineers to select the best-suited retrieval mechanism in their search scenarios. We expect that this evaluation will allow researchers and practitioners to apply the current search techniques more reliably and that it will help them to select the right solution for their daily work. The source code (of seven ranking algorithms), ground truths and experimental results are available at https://github.com/danielapoliveira/bioont-search-benchmark.
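A minimal harness in the spirit of such an evaluation: precision at k of one ranking against an expert ground truth. The identifiers are placeholders, and the study's actual metrics may differ:

```python
# Score a retrieval engine's top-k results against an expert ground truth.
def precision_at_k(ranked, relevant, k=10):
    top = ranked[:k]
    return sum(1 for r in top if r in relevant) / k

ground_truth = {"GO:0008150", "NCIT:C2991", "DOID:4"}       # expert-judged relevant
engine_ranking = ["DOID:4", "GO:0008150", "HP:0000118", "NCIT:C2991"]
print(precision_at_k(engine_ranking, ground_truth, k=4))    # 0.75
```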
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation highlights the efficiency and simplicity of the framework.
Finding My Needle in the Haystack: Effective Personalized Re-ranking of Search Results in Prospector
NASA Astrophysics Data System (ADS)
König, Florian; van Velsen, Lex; Paramythis, Alexandros
This paper provides an overview of Prospector, a personalized Internet meta-search engine, which utilizes a combination of ontological information, ratings-based models of user interests, and complementary theme-oriented group models to recommend (through re-ranking) search results obtained from an underlying search engine. Re-ranking brings “closer to the top” those items that are of particular interest to a user or have high relevance to a given theme. A user-based, real-world evaluation has shown that the system is effective in promoting results of interest, but lags behind Google in user acceptance, possibly due to the absence of features popularized by said search engine. Overall, users would consider employing a personalized search engine to perform searches with terms that require disambiguation and / or contextualization.
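A bare-bones sketch of interest-based re-ranking, assuming a weighted-term user model; Prospector's actual combination of ontologies, ratings-based models, and group models is considerably richer than this:

```python
# Re-rank an underlying engine's results by mixing the engine's rank prior
# with the similarity between each result's terms and the user's interests.
def rerank(results, interests, alpha=0.7):
    """results: list of (doc_id, terms) in engine order; interests: {term: weight}."""
    def score(item):
        pos, (doc, terms) = item
        base = 1.0 / (pos + 1)                                  # engine's rank prior
        personal = sum(interests.get(t, 0.0) for t in terms) / max(len(terms), 1)
        return alpha * personal + (1 - alpha) * base
    ranked = sorted(enumerate(results), key=score, reverse=True)
    return [doc for _, (doc, _) in ranked]

user = {"jaguar-animal": 0.9, "car": 0.1}
hits = [("d1", ["car", "jaguar-car"]), ("d2", ["jaguar-animal", "habitat"])]
print(rerank(hits, user))   # ['d2', 'd1'] -- the animal page moves up
```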
Practical and Efficient Searching in Proteomics: A Cross Engine Comparison
Paulo, Joao A.
2014-01-01
Background Analysis of large datasets produced by mass spectrometry-based proteomics relies on database search algorithms to sequence peptides and identify proteins. Several such scoring methods are available, each based on different statistical foundations and thereby not producing identical results. Here, the aim is to compare peptide and protein identifications using multiple search engines and examine the additional proteins gained by increasing the number of technical replicate analyses. Methods A HeLa whole cell lysate was analyzed on an Orbitrap mass spectrometer for 10 technical replicates. The data were combined and searched using Mascot, SEQUEST, and Andromeda. Comparisons were made of peptide and protein identifications among the search engines. In addition, searches using each engine were performed with an incrementing number of technical replicates. Results The number and identity of peptides and proteins differed across search engines. For all three search engines, the differences in protein identifications were greater than the differences in peptide identifications, indicating that the major source of the disparity may be at the protein inference grouping level. The data also revealed that analysis of 2 technical replicates can increase protein identifications by up to 10-15%, while a third replicate results in an additional 4-5%. Conclusions The data emphasize two practical methods of increasing the robustness of mass spectrometry data analysis. The data show that 1) using multiple search engines can expand the number of identified proteins (union) and validate protein identifications (intersection), and 2) analysis of 2 or 3 technical replicates can substantially expand protein identifications. Moreover, information can be extracted from a dataset by performing database searching with different engines and performing technical repeats, which requires no additional sample preparation and effectively utilizes research time and effort. PMID:25346847
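The two practical moves from the conclusion, expressed directly as set operations over per-engine protein identifications (the accessions are invented):

```python
# Union of per-engine identifications expands coverage; the intersection
# gives a cross-validated high-confidence core.
mascot    = {"P04637", "P38398", "Q8N726"}
sequest   = {"P04637", "P38398", "P01308"}
andromeda = {"P04637", "Q8N726", "P01308"}

print(mascot | sequest | andromeda)   # union: all candidate proteins
print(mascot & sequest & andromeda)   # intersection: agreed by all engines
```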
Shenker, Bennett S
2014-02-01
To validate a scoring system that evaluates the ability of Internet search engines to correctly predict diagnoses when symptoms are used as search terms. We developed a five-point scoring system to evaluate the diagnostic accuracy of Internet search engines. We identified twenty diagnoses common to a primary care setting to validate the scoring system. One investigator entered the symptoms for each diagnosis into three Internet search engines (Google, Bing, and Ask) and saved the first five webpages from each search. Other investigators reviewed the webpages and assigned a diagnostic accuracy score. They rescored a random sample of webpages two weeks later. To validate the five-point scoring system, we calculated convergent validity and test-retest reliability using Kendall's W and Spearman's rho, respectively. We used the Kruskal-Wallis test to look for differences in accuracy scores for the three Internet search engines. A total of 600 webpages were reviewed. Kendall's W for the raters was 0.71 (p<0.0001). Spearman's rho for test-retest reliability was 0.72 (p<0.0001). There was no difference in scores based on Internet search engine. We found a significant difference in scores based on the webpage's order on the Internet search engine results page (p=0.007). Pairwise comparisons revealed higher scores in the first webpages vs. the fourth (corrected p=0.009) and fifth (corrected p=0.017). However, this significance was lost when creating composite scores. The five-point scoring system to assess the diagnostic accuracy of Internet search engines is a valid and reliable instrument. The scoring system may be used in future Internet research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Godin, Katelyn; Stapleton, Jackie; Kirkpatrick, Sharon I; Hanning, Rhona M; Leatherdale, Scott T
2015-10-22
Grey literature is an important source of information for large-scale review syntheses. However, many characteristics of grey literature make it difficult to search systematically. Further, there is no 'gold standard' for rigorous systematic grey literature search methods and few resources on how to conduct this type of search. This paper describes systematic review search methods that were developed and applied to complete a case study systematic review of grey literature that examined guidelines for school-based breakfast programs in Canada. A grey literature search plan was developed to incorporate four different search strategies: (1) grey literature databases, (2) customized Google search engines, (3) targeted websites, and (4) consultation with content experts. These complementary strategies were used to minimize the risk of omitting relevant sources. Since abstracts are often unavailable in grey literature documents, items' abstracts, executive summaries, or tables of contents (whichever was available) were screened; screening of publications' full text followed. Data were extracted on the publishing organization, year published, developer, intended audience, goals/objectives of the document, sources of evidence/resources cited, meals mentioned in the guidelines, and recommendations for program delivery. The search strategies for identifying and screening publications for inclusion in the case study review were found to be manageable, comprehensive, and intuitive when applied in practice. The four search strategies of the grey literature search plan yielded 302 potentially relevant items for screening. Following the screening process, 15 publications that met all eligibility criteria remained and were included in the case study systematic review. The high-level findings of the case study systematic review are briefly described. This article demonstrated a feasible and seemingly robust method for applying systematic search strategies to identify web-based resources in the grey literature. The search strategy we developed and tested is amenable to adaptation to identify other types of grey literature from other disciplines and to answer a wide range of research questions. This method should be further adapted and tested in future research syntheses.
Input-independent, Scalable and Fast String Matching on the Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Chavarría-Miranda, Daniel; Maschhoff, Kristyn J
2009-05-25
String searching is at the core of many security and network applications like search engines, intrusion detection systems, virus scanners and spam filters. The growing size of on-line content and the increasing wire speeds push the need for fast, and often real-time, string searching solutions. For these conditions, many software implementations (if not all) targeting conventional cache-based microprocessors do not perform well. They either exhibit overall low performance or exhibit highly variable performance depending on the types of inputs. For this reason, real-time state-of-the-art solutions rely on the use of either custom hardware or Field-Programmable Gate Arrays (FPGAs) at the expense of overall system flexibility and programmability. This paper presents a software-based implementation of the Aho-Corasick string searching algorithm on the Cray XMT multithreaded shared memory machine. Our solution relies on the particular features of the XMT architecture and on several algorithmic strategies: it is fast, scalable and its performance is virtually content-independent. On a 128-processor Cray XMT, it reaches a scanning speed of ≈ 28 Gbps with a performance variability below 10%. In the 10 Gbps performance range, variability is below 2.5%. By comparison, an Intel dual-socket, 8-core system running at 2.66 GHz achieves a peak performance which varies from 500 Mbps to 10 Gbps depending on the type of input and dictionary size.
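For reference, here is a compact single-threaded Aho-Corasick automaton in plain Python; the paper's contribution is the scalable multithreaded mapping onto the XMT, not the algorithm itself:

```python
# Aho-Corasick: build a trie over all patterns, add failure links via BFS,
# then scan the text once, reporting every pattern occurrence.
from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # trie construction
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())            # depth-1 states fail to root
    while queue:                               # BFS to set failure links
        r = queue.popleft()
        for ch, u in goto[r].items():
            queue.append(u)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[u] = goto[f].get(ch, 0)
            out[u] |= out[fail[u]]             # inherit matches via failure link
    return goto, fail, out

def search(text, goto, fail, out):
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

g, f, o = build(["he", "she", "his", "hers"])
print(search("ushers", g, f, o))   # 'she'@1, 'he'@2, 'hers'@2 (tie order may vary)
```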
Proton therapy for prostate cancer online: patient education or marketing?
Sadowski, Daniel J; Ellimoottil, Chandy S; Tejwani, Ajay; Gorbonos, Alex
2013-12-01
Proton therapy (PT) for prostate cancer is an expensive treatment with limited evidence of benefit over conventional radiotherapy. We sought to study whether online information on PT for prostate cancer was balanced and whether the website source influenced the content presented. We applied a systematic search process to identify 270 weblinks associated with PT for prostate cancer, categorized the websites by source, and filtered the results to 50 websites using predetermined criteria. We then used a customized version of the DISCERN instrument, a validated tool for assessing the quality of consumer health information, to evaluate the remaining websites for balance of content and description of risks, benefits and uncertainty. Depending on the search engine and key word used, proton center websites (PCWs) made up 10%-47% of the first 30 encountered links. In comparison, websites from academic and nonacademic medical centers without ownership stake in proton centers appeared much less frequently as a search result (0%-3%). PCWs scored lower on DISCERN questions compared to other sources for being balanced/unbiased (p < 0.001), mentioning areas of uncertainty (p < 0.001), and describing risks of PT (p < 0.001). PCWs scored higher for describing the benefits of treatment (p = 0.003). Patients should be aware that online information regarding PT for prostate cancer may represent marketing by proton centers rather than comprehensive and unbiased patient education. An awareness of these results will also better prepare clinicians to address the potential biases of patients with prostate cancer who search the Internet for health information.
Human Flesh Search Engine and Online Privacy.
Zhang, Yang; Gao, Hong
2016-04-01
Human flesh search engine can be a double-edged sword, bringing convenience on the one hand and leading to infringement of personal privacy on the other hand. This paper discusses the ethical problems brought about by the human flesh search engine, as well as possible solutions.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
Toward building a comprehensive data mart
NASA Astrophysics Data System (ADS)
Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.
2004-04-01
To uncover new relationships or patterns one must first build a corpus of data or what some call a data mart. How can we make sure we have collected all the pertinent data and have maximized coverage? There are hundreds of search engines that are available for use on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach in developing a methodology to compare a number of search engines. Before we present this methodology, we first provide our motivation towards the need for increased coverage. We next investigate how we can obtain ground truth and what the ground truth can provide us in the way of some insight into the Internet and search engine capabilities. We then conclude our discussion by developing a methodology in which we compare a number of the search engines and how we can increase overall coverage and thus a more comprehensive data mart.
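One simple way to start such a comparison: treat each engine's returned URLs for a query as a set, measure pairwise overlap, and quantify the coverage gained by pooling engines. The data are invented:

```python
# Pairwise Jaccard overlap between engines, plus the coverage gain from
# pooling all engines relative to the single best engine.
from itertools import combinations

results = {
    "engineA": {"u1", "u2", "u3", "u4"},
    "engineB": {"u3", "u4", "u5"},
    "engineC": {"u6"},
}
for a, b in combinations(results, 2):
    inter = results[a] & results[b]
    union = results[a] | results[b]
    print(a, b, "Jaccard:", round(len(inter) / len(union), 2))

pooled = set().union(*results.values())
best = max(results.values(), key=len)
print("coverage gain from pooling:", len(pooled) - len(best), "extra URLs")
```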
Total Quality: An Understanding and Application For Community, Junior, and Technical Colleges.
ERIC Educational Resources Information Center
Burgdorf, Augustus
1992-01-01
Total Quality (TQ), is a customer-oriented philosophy of management that utilizes total employee involvement in the relentless, daily search for improvement of product and service quality, through the use of statistical methods, employee teams, and performance management. In the TQ framework, "internal" customers are individuals within the…
Deployment of Recommender Systems: Operational and Strategic Issues
ERIC Educational Resources Information Center
Ghoshal, Abhijeet
2011-01-01
E-commerce firms are increasingly adopting recommendation systems to effectively target customers with products and services. The first essay examines the impact that improving a recommender system has on firms that deploy such systems. A market with customers heterogeneous in their search costs is considered. We find that in a monopoly, a firm…
Application of computer graphics in the design of custom orthopedic implants.
Bechtold, J E
1986-10-01
Implementation of newly developed computer modelling techniques and computer graphics displays and software have greatly aided the orthopedic design engineer and physician in creating a custom implant with good anatomic conformity in a short turnaround time. Further advances in computerized design and manufacturing will continue to simplify the development of custom prostheses and enlarge their niche in the joint replacement market.
Software Engineering Improvement Plan
NASA Technical Reports Server (NTRS)
2006-01-01
In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.
The pond is wider than you think! Problems encountered when searching family practice literature.
Rosser, W. W.; Starkey, C.; Shaughnessy, R.
2000-01-01
OBJECTIVE: To explain differences in the results of literature searches in British general practice and North American family practice or family medicine. DESIGN: Comparative literature search. SETTING: The Department of Family and Community Medicine at the University of Toronto in Ontario. METHOD: Literature searches on MEDLINE demonstrated that certain search strategies ignored certain key words, depending on the search engine and the search terms chosen. Literature searches using the key words "general practice," "family practice," and "family medicine" combined with the topics "depression" and then "otitis media" were conducted in MEDLINE using four different Web-based search engines: Ovid, HealthGate, PubMed, and Internet Grateful Med. MAIN OUTCOME MEASURES: The number of MEDLINE references retrieved for both topics when searched with each of the three key words, "general practice," "family practice," and "family medicine" using each of the four search engines. RESULTS: For each topic, each search yielded very different articles. Some search engines did a better job of matching the term "general practice" to the terms "family medicine" and "family practice," and thus improved retrieval. The problem of language use extends to the variable use of terminology and differences in spelling between British and American English. CONCLUSION: We need to heighten awareness of literature search problems and the potential for duplication of research effort when some of the literature is ignored, and to suggest ways to overcome the deficiencies of the various search engines. PMID:10660792
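A present-day approximation of this kind of comparison, sketched with Biopython's Entrez interface rather than the four Web front ends used in the study; the exact query form is an assumption:

```python
# Count PubMed hits for each key word combined with one topic, to expose
# how term choice changes retrieval.
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI requires a contact address; placeholder
for kw in ("general practice", "family practice", "family medicine"):
    handle = Entrez.esearch(db="pubmed", term=f'"{kw}" AND depression')
    count = Entrez.read(handle)["Count"]
    handle.close()
    print(kw, count)
```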
Quality analysis of patient information about knee arthroscopy on the World Wide Web.
Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan
2007-05-01
This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to be providing reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining more relevant information rather than using a single search engine. Our study shows the difficulties encountered by patients in obtaining information regarding knee arthroscopy and highlights the duty of knee surgeons in helping patients to identify the relevant and authentic information in the most efficient manner from the World Wide Web. This study highlights the importance of the role of orthopaedic surgeons in helping their patients to identify the best possible information on the World Wide Web.
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but when the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database, and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server as a simulation of the Web. As a spatial search engine, our system provides search capability across GML documents, an important step toward improving the efficiency of search engines.
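A minimal illustration of the crawler's extraction step: since GML is XML, tags and text can be pulled with a standard parser. The snippet and its simplified namespace handling are invented:

```python
# Parse a GML document and collect tag -> text pairs for indexing.
import xml.etree.ElementTree as ET

gml = """<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <City><name>Tehran</name>
      <gml:Point><gml:pos>35.69 51.39</gml:pos></gml:Point>
    </City>
  </gml:featureMember>
</gml:FeatureCollection>"""

index = {}
for elem in ET.fromstring(gml).iter():
    tag = elem.tag.split("}")[-1]              # strip the namespace prefix
    if elem.text and elem.text.strip():
        index.setdefault(tag, []).append(elem.text.strip())
print(index)   # {'name': ['Tehran'], 'pos': ['35.69 51.39']}
```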
Smart internet search engine through 6W
NASA Astrophysics Data System (ADS)
Goehler, Stephen; Cader, Masud; Szu, Harold
2006-04-01
Current Internet search engine technology is limited in its ability to display the relevant information users need. Yahoo!, Google, and Microsoft use lookup tables or indexes, which limits users' ability to find their desired information. While these companies have improved their results over the years by enhancing their existing technology and algorithms with specialized heuristics such as PageRank, there is a need for a next-generation smart search engine that can effectively interpret the relevance of user searches and provide the actual information requested. This paper explores whether a smarter Internet search engine can effectively fulfill a user's needs through the use of 6W representations.
Development and tuning of an original search engine for patent libraries in medicinal chemistry.
Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick
2014-01-01
The large increase in the size of patent collections has led to the need of efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks clearly increases the effectiveness of the system. We conclude that different search tasks demand different information retrieval engine settings in order to yield optimal end-user retrieval.
Web Search Studies: Multidisciplinary Perspectives on Web Search Engines
NASA Astrophysics Data System (ADS)
Zimmer, Michael
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Accessibility, nature and quality of health information on the Internet: a survey on osteoarthritis.
Maloney, S; Ilic, D; Green, S
2005-03-01
This study aims to determine the quality and validity of information available on the Internet about osteoarthritis and to investigate the best way of sourcing this information. Keywords relevant to osteoarthritis were searched across 15 search engines representing medical, general and meta-search engines. Search engine efficiency was defined as the percentage of unique and relevant websites from all websites returned by each search engine. The quality of relevant information was appraised using the DISCERN tool and the concordance of the information offered by the website with the available evidence about osteoarthritis determined. A total of 3443 websites were retrieved, of which 344 were identified as unique and providing information relevant to osteoarthritis. The overall quality of website information was poor. There was no significant difference between types of search engine in sourcing relevant information; however, the information retrieved from medical search engines was of a higher quality. Fewer than a third of the websites identified as offering relevant information cited evidence to support their recommendations. Although the overall quality of website information about osteoarthritis was poor, medical search engines may provide consumers with the opportunity to source high-quality health information on the Internet. In the era of evidence-based medicine, one of the main obstacles to the Internet reaching its potential as a medical resource is the failure of websites to incorporate and attribute evidence-based information.
Determination of geographic variance in stroke prevalence using Internet search engine analytics.
Walcott, Brian P; Nahed, Brian V; Kahle, Kristopher T; Redjal, Navid; Coumans, Jean-Valery
2011-06-01
Previous methods to determine stroke prevalence, such as nationwide surveys, are labor-intensive endeavors. Recent advances in search engine query analytics have led to a new metric for disease surveillance to evaluate symptomatic phenomena, such as influenza. The authors hypothesized that the use of search engine query data can determine the prevalence of stroke. The Google Insights for Search database was accessed to analyze anonymized search engine query data. The authors' search strategy utilized common search queries used when attempting either to identify the signs and symptoms of a stroke or to perform stroke education. The search logic was as follows: (stroke signs + stroke symptoms + mini stroke - heat) from January 1, 2005, to December 31, 2010. The relative number of searches performed (the interest level) for this search logic was established for all 50 states and the District of Columbia. A Pearson product-moment correlation coefficient was calculated from the state-specific stroke prevalence data previously reported. Web search engine interest level was available for all 50 states and the District of Columbia over the period January 1, 2005, to December 31, 2010. The interest level was highest in Alabama and Tennessee (100 and 96, respectively) and lowest in California and Virginia (58 and 53, respectively). The Pearson correlation coefficient (r) was calculated to be 0.47 (p = 0.0005, 2-tailed). Search engine query data analysis allows for the determination of relative stroke prevalence. Further investigation will reveal the reliability of this metric for temporal pattern analysis and prevalence determination in this and other symptomatic diseases.
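The core statistic here is a plain Pearson correlation between per-state search interest and per-state prevalence; a sketch with five made-up states standing in for the 51 regions analyzed:

```python
# Pearson's r between relative search interest and stroke prevalence.
from scipy.stats import pearsonr

interest   = [100, 96, 75, 58, 53]       # relative query volume per state
prevalence = [3.8, 3.6, 2.9, 2.3, 2.4]   # % of adults reporting stroke (invented)
r, p = pearsonr(interest, prevalence)
print(round(r, 2), round(p, 4))          # strong positive association in this toy data
```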
Modeling the customer in electronic commerce.
Helander, M G; Khalid, H M
2000-12-01
This paper reviews interface design of web pages for e-commerce. Different tasks in e-commerce are contrasted. A systems model is used to illustrate the information flow between three subsystems in e-commerce: store environment, customer, and web technology. A customer makes several decisions: to enter the store, to navigate, to purchase, to pay, and to keep the merchandise. This artificial environment must be designed so that it can support customer decision-making. To retain customers it must be pleasing and fun, and create a task with natural flow. Customers have different needs, competence and motivation, which affect decision-making. It may therefore be important to customize the design of the e-store environment. Future ergonomics research will have to investigate perceptual aspects, such as presentation of merchandise, and cognitive issues, such as product search and navigation, as well as decision making while considering various economic parameters. Five theories on e-commerce research are presented.
An Improved Forensic Science Information Search.
Teitelbaum, J
2015-01-01
Although thousands of search engines and databases are available online, finding answers to specific forensic science questions can be a challenge even to experienced Internet users. Because there is no central repository for forensic science information, and because of the sheer number of disciplines under the forensic science umbrella, forensic scientists are often unable to locate material that is relevant to their needs. The author contends that using six publicly accessible search engines and databases can produce high-quality search results. The six resources are Google, PubMed, Google Scholar, Google Books, WorldCat, and the National Criminal Justice Reference Service. Carefully selected keywords and keyword combinations, designating a keyword phrase so that the search engine will search on the phrase and not individual keywords, and prompting search engines to retrieve PDF files are among the techniques discussed. Copyright © 2015 Central Police University.
Interactive Information Organization: Techniques and Evaluation
2001-05-01
information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a query and return documents ordered by their likelihood of being relevant to the user's request. The majority of today's Web search engines follow this scenario. The ordering of documents in the…
Putting Google Scholar to the Test: A Preliminary Study
ERIC Educational Resources Information Center
Robinson, Mary L.; Wusteman, Judith
2007-01-01
Purpose: To describe a small-scale quantitative evaluation of the scholarly information search engine, Google Scholar. Design/methodology/approach: Google Scholar's ability to retrieve scholarly information was compared to that of three popular search engines: Ask.com, Google and Yahoo! Test queries were presented to all four search engines and…
Social media networking: YouTube and search engine optimization.
Jackson, Rem; Schneider, Andrew; Baum, Neil
2011-01-01
This is the third part of a three-part article on social media networking. This installment will focus on YouTube and search engine optimization. This article will explore the application of YouTube to the medical practice and how YouTube can help a practice retain its existing patients and attract new patients to the practice. The article will also describe the importance of search engine optimization and how to make your content appear on the first page of the search engines such as Google, Yahoo, and YouTube.
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
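The common paradigm the review describes reduces to scoring each candidate peptide by how many of its theoretical fragment masses match observed peaks. A deliberately stripped-down sketch with invented masses; real engines add charge states, isotopes, and statistical calibration:

```python
# Count theoretical fragment m/z values that match observed peaks within
# a tolerance -- the kernel of a database search engine's scoring step.
def match_score(observed_mz, theoretical_mz, tol=0.02):
    return sum(
        any(abs(obs - theo) <= tol for obs in observed_mz)
        for theo in theoretical_mz
    )

spectrum  = [175.119, 262.151, 333.188, 447.231]    # observed peaks
candidate = [175.119, 262.150, 333.190, 500.000]    # one peptide's b/y ions
print(match_score(spectrum, candidate))             # 3 of 4 fragments matched
```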
Searching for American Indian Resources on the Internet.
ERIC Educational Resources Information Center
Pollack, Ira; Derby, Amy
This paper provides basic information on searching the Internet and lists World Wide Web sites containing resources for American Indian education. Comprehensive and topical Web directories, search engines, and meta-search engines are briefly described. Search strategies are discussed, and seven Web sites are listed that provide more advanced…
'Sciencenet'--towards a global search and share engine for all scientific knowledge.
Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-06-15
Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.
PyQuant: A Versatile Framework for Analysis of Quantitative Mass Spectrometry Data
Mitchell, Christopher J.; Kim, Min-Sik; Na, Chan Hyun; Pandey, Akhilesh
2016-01-01
Quantitative mass spectrometry data necessitates an analytical pipeline that captures the accuracy and comprehensiveness of the experiments. Currently, data analysis is often coupled to specific software packages, which restricts the analysis to a given workflow and precludes a more thorough characterization of the data by other complementary tools. To address this, we have developed PyQuant, a cross-platform mass spectrometry data quantification application that is compatible with existing frameworks and can be used as a stand-alone quantification tool. PyQuant supports most types of quantitative mass spectrometry data including SILAC, NeuCode, 15N, 13C, or 18O and chemical methods such as iTRAQ or TMT and provides the option of adding custom labeling strategies. In addition, PyQuant can perform specialized analyses such as quantifying isotopically labeled samples where the label has been metabolized into other amino acids and targeted quantification of selected ions independent of spectral assignment. PyQuant is capable of quantifying search results from popular proteomic frameworks such as MaxQuant, Proteome Discoverer, and the Trans-Proteomic Pipeline in addition to several standalone search engines. We have found that PyQuant routinely quantifies a greater proportion of spectral assignments, with increases ranging from 25–45% in this study. Finally, PyQuant is capable of complementing spectral assignments between replicates to quantify ions missed because of lack of MS/MS fragmentation or that were omitted because of issues such as spectra quality or false discovery rates. This results in an increase of biologically useful data available for interpretation. In summary, PyQuant is a flexible mass spectrometry data quantification platform that is capable of interfacing with a variety of existing formats and is highly customizable, which permits easy configuration for custom analysis. PMID:27231314
Job Prospects in HVAC Engineering.
ERIC Educational Resources Information Center
Basta, Nicholas
1985-01-01
Although degrees in heating, ventilation, and air conditioning (HVAC) engineering are not offered, there is a serious need for specialists and consultants in this area, since most practitioners have been trained as mechanical engineers. Opportunities exist for individuals possessing a customer-oriented attitude, with knowledge in computerized controls, innovative…
Electrical service reliability: the customer perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsa, M.E.; Hub, K.A.; Krohm, G.C.
1978-09-01
Electric-utility-system reliability criteria have traditionally been established as a matter of utility policy or through long-term engineering practice, generally with no supportive customer cost/benefit analysis as justification. This report presents results of an initial study of the customer perspective toward electric-utility-system reliability, based on critical review of over 20 previous and ongoing efforts to quantify the customer's value of reliable electric service. A possible structure of customer classifications is suggested as a reasonable level of disaggregation for further investigation of customer value, and these groups are characterized in terms of their electricity use patterns. The values that customers assign to reliability are discussed in terms of internal and external cost components. A list of options for effecting changes in customer service reliability is set forth, and some of the many policy issues that could alter customer-service reliability are identified.
A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Steele, Gynelle C.
2016-01-01
This report demonstrates the full-text-based search engine that works on any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and identify the most relevant or similar. The search results presented here were found using an Android (Google Co.) mobile device; however, it is also compatible with other mobile phones.
EMERSE: The Electronic Medical Record Search Engine
Hanauer, David A.
2006-01-01
EMERSE (The Electronic Medical Record Search Engine) is an intuitive, powerful search engine for free-text documents in the electronic medical record. It offers multiple options for creating complex search queries yet has an interface that is easy enough to be used by those with minimal computer experience. EMERSE is ideal for retrospective chart reviews and data abstraction and may have potential for clinical care as well. PMID:17238560
Köhler, M J; Springer, S; Kaatz, M
2014-09-01
The volume of search engine queries about disease-relevant terms reflects public interest and correlates with disease prevalence, as proven by the example of flu (influenza). Other influences include media attention and holidays. The present work investigates whether the seasonality of prevalence or symptom severity of dermatoses correlates with search engine query data. The relative weekly volume of dermatologically relevant search terms was assessed with the online tool Google Trends for the years 2009-2013. For each term, the degree of seasonality was calculated via frequency analysis and a geometric approach. Many dermatoses show a marked seasonality, reflected by search engine query volumes. Unexpected seasonal variations of these queries suggest a previously unknown variability of the respective disease prevalence. Furthermore, using the example of allergic rhinitis, a close correlation of search engine query data with the actual pollen count can be demonstrated. In many cases, search engine query data are appropriate for estimating seasonal variability in the prevalence of common dermatoses. This finding may be useful for real-time analysis and the formation of hypotheses concerning pathogenetic or symptom-aggravating mechanisms and may thus contribute to improved diagnostics and prevention of skin diseases.
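To make the frequency-analysis idea concrete, here is a minimal sketch that scores the seasonality of a weekly query-volume series as the share of spectral power at the annual frequency (an illustrative stand-in; the paper's exact frequency and geometric measures are not reproduced):

```python
# Toy frequency-analysis estimate of seasonality in a weekly query-volume
# series: the share of non-DC spectral power sitting at the annual frequency.
import numpy as np

def seasonality_score(weekly_volume: np.ndarray, period: int = 52) -> float:
    spectrum = np.abs(np.fft.rfft(weekly_volume - weekly_volume.mean())) ** 2
    annual_bin = len(weekly_volume) // period  # one cycle per `period` samples
    return spectrum[annual_bin] / spectrum[1:].sum()

weeks = np.arange(5 * 52)                          # five years of weekly data
signal = 50 + 20 * np.sin(2 * np.pi * weeks / 52)  # pure annual cycle
noise = np.random.default_rng(0).normal(0, 3, weeks.size)
print(f"seasonality ~ {seasonality_score(signal + noise):.2f}")  # close to 1
```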
ERIC Educational Resources Information Center
Neff, Bonita Dostal
In 1990 Daniel Bellack advised in an article that business people should "give up the search for the fabled 'global customer.' There's no such thing. Instead, be sure you understand each local market--its custom, culture, and, of course, language." Understanding each local market is a big challenge for the communication professional in…
19 CFR 162.93 - Failure to issue notice of seizure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Failure to issue notice of seizure. 162.93 Section... OF THE TREASURY (CONTINUED) INSPECTION, SEARCH, AND SEIZURE Civil Asset Forfeiture Reform Act § 162.93 Failure to issue notice of seizure. If Customs does not send notice of a seizure of property in...
Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan
2012-01-01
This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA) database. Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI} 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and yielded more unique hits than searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks, such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references, and returned fewer irrelevant results.
FindZebra: a search engine for rare diseases.
Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole
2013-06-01
The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The Business Case for Systems Engineering Study: Detailed Response Data
2012-11-01
Software Engineering Institute report CMU/SEI-2012-SR-011, Carnegie Mellon University. Representative survey responses include: "providing an SRD upfront with crisp requirements"; "Customer/acquirer consistently talks about SE but never practices SE."
How To Do Field Searching in Web Search Engines: A Field Trip.
ERIC Educational Resources Information Center
Hock, Ran
1998-01-01
Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…
A Transcription Activator-Like Effector (TALE) Toolbox for Genome Engineering
Sanjana, Neville E.; Cong, Le; Zhou, Yang; Cunniff, Margaret M.; Feng, Guoping; Zhang, Feng
2013-01-01
Transcription activator-like effectors (TALEs) are a class of naturally occurring DNA binding proteins found in the plant pathogen Xanthomonas sp. The DNA binding domain of each TALE consists of tandem 34-amino acid repeat modules that can be rearranged according to a simple cipher to target new DNA sequences. Customized TALEs can be used for a wide variety of genome engineering applications, including transcriptional modulation and genome editing. Here we describe a toolbox for rapid construction of custom TALE transcription factors (TALE-TFs) and nucleases (TALENs) using a hierarchical ligation procedure. This toolbox facilitates affordable and rapid construction of custom TALE-TFs and TALENs within one week and can be easily scaled up to construct TALEs for multiple targets in parallel. We also provide details for testing the activity in mammalian cells of custom TALE-TFs and TALENs using, respectively, qRT-PCR and Surveyor nuclease. The TALE toolbox described here will enable a broad range of biological applications. PMID:22222791
Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.
ERIC Educational Resources Information Center
Pathak, Praveen; Gordon, Michael
1999-01-01
Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…
EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services
ERIC Educational Resources Information Center
Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing
2011-01-01
The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…
MetaSpider: Meta-Searching and Categorization on the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel
2001-01-01
Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…
26 CFR 1.460-2 - Long-term manufacturing contracts.
Code of Federal Regulations, 2011 CFR
2011-04-01
... specific customer, a taxpayer must consider the extent to which research, development, design, engineering... substantial amount of research, design, and engineering to produce, C determines that the equipment is a... produce, will be delivered to B in 2003. C determines that the research, design, engineering, retooling...
19 CFR 191.143 - Drawback entry.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.143 Drawback entry. (a) Filing of entry. Drawback entries covering these foreign-built jet aircraft engines shall be filed on Customs Form 7551, modified to show that the entry covers jet aircraft engines processed under...
19 CFR 191.143 - Drawback entry.
Code of Federal Regulations, 2011 CFR
2011-04-01
... (CONTINUED) DRAWBACK Foreign-Built Jet Aircraft Engines Processed in the United States § 191.143 Drawback entry. (a) Filing of entry. Drawback entries covering these foreign-built jet aircraft engines shall be filed on Customs Form 7551, modified to show that the entry covers jet aircraft engines processed under...
26 CFR 1.263A-1 - Uniform capitalization of costs.
Code of Federal Regulations, 2014 CFR
2014-04-01
... or facilities. (P) Engineering and design costs. Engineering and design costs include pre-production costs, such as costs attributable to research, experimental, engineering, and design activities (to the... customer demand. (9) Research and experimental expenditures. See section 263A(c)(2) for an exception for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Monica
Monica Phillips discusses her role as an engineer at Savannah River National Laboratory. Her mission is to provide support to various customers on-site through engineered equipment and solutions, along with solving complex problems to help them meet their needs.
Short-term Internet search using makes people rely on search engines when facing unknown issues.
Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng
2017-01-01
Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on them. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.
Short-term Internet search using makes people rely on search engines when facing unknown issues
Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen
2017-01-01
Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tests whether short-term Internet search training can make people more dependent on them. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines. PMID:28441408
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take as input MS/MS spectra and a protein sequence database are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of the programs have excellent user documentation. The aim here is therefore to outline the algorithmic strategy behind different search engines rather than providing software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have been put into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if the software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
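As a concrete illustration of the matching step common to database-dependent search engines, a toy sketch that scores candidate peptides by counting observed peaks matching their theoretical b/y ions (the mass table is abbreviated to a few residues and the peak list is made up; real engines add probabilistic scoring, modifications, and FDR control):

```python
# Toy database-dependent search: score each candidate peptide by counting
# how many observed MS/MS peaks fall within a tolerance of its theoretical
# singly charged b- and y-fragment ions.
MONO = {"G": 57.02146, "A": 71.03711, "P": 97.05276, "V": 99.06841,
        "L": 113.08406, "K": 128.09496, "E": 129.04259, "R": 156.10111}
PROTON, WATER = 1.00728, 18.01056

def fragment_ions(peptide: str):
    """Singly charged b- and y-ion m/z values for an unmodified peptide."""
    masses = [MONO[aa] for aa in peptide]
    for i in range(1, len(peptide)):
        yield sum(masses[:i]) + PROTON            # b_i
        yield sum(masses[i:]) + WATER + PROTON    # y_(n-i)

def score(peptide: str, peaks, tol: float = 0.02) -> int:
    ions = list(fragment_ions(peptide))
    return sum(any(abs(p - ion) <= tol for ion in ions) for p in peaks)

spectrum = [175.119, 227.103, 329.193, 500.000]   # toy peak list (m/z)
for candidate in ("PEPGR", "GAVLK"):
    print(candidate, score(candidate, spectrum))  # PEPGR 3, GAVLK 0
```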
NASA Astrophysics Data System (ADS)
Castro-Otero, C.
2017-04-01
Very often small turbine manufacturers are asked to produce sizeable turbines that are too large for them in terms of physical dimensions, power, or design capacity. In these cases clever alternative solutions must be found to meet customers' needs. For instance, in the old days twin-runner Francis turbines were an option instead of one large machine; or if a too-large Pelton turbine cannot be manufactured or designed, a good option is to install a medium-size Francis and a small Pelton. Likewise, a similar approach must be taken should the manufacturer be asked for a too-large Kaplan. Facing this situation, a good option is to install three or more small Kaplan turbines. This particular case was studied in depth and, after all the considerations had been made, the following question arose: is this a way out for the manufacturer, or is it really the best option for the customer? The choice made as a way out for the manufacturer became the best option for the customer and a success for both parties. This paper aims to encourage developers and engineering firms to search for more options than the traditional one to find the best option in plant design.
'Googling' Terrorists: Are Northern Irish Terrorists Visible on Internet Search Engines?
NASA Astrophysics Data System (ADS)
Reilly, P.
In this chapter, the analysis suggests that Northern Irish terrorists are not visible on Web search engines when net users employ conventional Internet search techniques. Editors of mass media organisations have traditionally had the ability to decide whether a terrorist atrocity is 'newsworthy,' controlling the 'oxygen' supply that sustains all forms of terrorism. This process, also known as 'gatekeeping,' is often influenced by the norms of social responsibility, or alternatively, by the interests of the advertisers and corporate sponsors that sustain mass media organisations. The analysis presented in this chapter suggests that Internet search engines can also be characterised as 'gatekeepers,' albeit without the ability to shape the content of Websites before it reaches net users. Instead, Internet search engines give priority retrieval to certain Websites within their directory, pointing net users towards these Websites rather than others on the Internet. Net users are more likely to click on links to the more 'visible' Websites on Internet search engine directories, these sites invariably being the highest 'ranked' in response to a particular search query. A number of factors, including the design of the Website and the number of links to external sites, determine the 'visibility' of a Website on Internet search engines. The study suggests that Northern Irish terrorists and their sympathisers are unlikely to achieve a greater degree of 'visibility' online than they enjoy in the conventional mass media through the perpetration of atrocities. Although these groups may have a greater degree of freedom on the Internet to publicise their ideologies, they are still likely to be speaking to the converted or members of the press. Although it is easier to locate Northern Irish terrorist organisations on Internet search engines by linking in via ideology, ideological description searches, such as 'Irish Republican' and 'Ulster Loyalist,' are more likely to generate links pointing towards the sites of research institutes and independent media organisations than sites sympathetic to Northern Irish terrorist organisations. The chapter argues that Northern Irish terrorists are only visible on search engines if net users select the correct search terms.
Development of Health Information Search Engine Based on Metadata and Ontology
Song, Tae-Min; Jin, Dal-Lae
2014-01-01
Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907
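A minimal sketch of the underlying idea, assuming illustrative field names rather than the study's actual schema: Dublin Core-style records extended with a target-audience element that a search can filter on:

```python
# Sketch of the metadata idea described above: Dublin Core-style records
# extended with a target-audience element, so a search can filter results
# by who the content is written for. Field names and records are invented.
records = [
    {"title": "Managing type 2 diabetes", "subject": "diabetes mellitus",
     "audience": "patient"},
    {"title": "SGLT2 inhibitors: prescribing notes", "subject": "diabetes mellitus",
     "audience": "clinician"},
]

def search(records, subject_term: str, audience: str):
    return [r for r in records
            if subject_term in r["subject"] and r["audience"] == audience]

for hit in search(records, "diabetes", "patient"):
    print(hit["title"])   # only the patient-facing record is returned
```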
Development of health information search engine based on metadata and ontology.
Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae
2014-04-01
The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.
Till, Benedikt; Niederkrotenthaler, Thomas
2014-08-01
The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P < .001, Austria: P < .05) and fewer protective characteristics (United States: P < .001, Austria: P < .001) compared to the term suicide. Help-related search terms (e.g., suicidal thoughts) yielded more websites with protective characteristics (United States: P = .07, Austria: P < .01). Websites retrieved with U.S. search engines generally had more protective characteristics (P < .001) than searches with Austrian search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.
Scaffolds for Bone Tissue Engineering: State of the art and new perspectives.
Roseti, Livia; Parisi, Valentina; Petretta, Mauro; Cavallo, Carola; Desando, Giovanna; Bartolotti, Isabella; Grigolo, Brunella
2017-09-01
This review is intended to give a state-of-the-art description of scaffold-based strategies utilized in Bone Tissue Engineering. Numerous scaffolds have been tested in the orthopedic field with the aim of improving cell viability, attachment, proliferation and homing, osteogenic differentiation, vascularization, host integration and load bearing. The main traits that characterize a scaffold suitable for bone regeneration, concerning its biological requirements, structural features, composition, and types of fabrication, are described in detail. Attention is then focused on conventional and Rapid Prototyping scaffold manufacturing techniques. Conventional manufacturing approaches are subtractive methods where parts of the material are removed from an initial block to achieve the desired shape. Rapid Prototyping techniques, introduced to overcome the limitations of standard techniques, are additive fabrication processes that manufacture the final three-dimensional object via deposition of overlying layers. An important improvement is the possibility to create custom-made products by means of computer-assisted technologies, starting from the patient's medical images. In conclusion, it is highlighted that, despite its encouraging results, the clinical application of Bone Tissue Engineering has not yet taken place on a large scale, due to the need for more in-depth studies, its high manufacturing costs and the difficulty of obtaining regulatory approval. PUBMED search terms utilized to write this review were: "Bone Tissue Engineering", "regenerative medicine", "bioactive scaffolds", "biomimetic scaffolds", "3D printing", "3D bioprinting", "vascularization" and "dentistry". Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, D. N.
2015-06-22
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration whose purpose is to develop the software infrastructure needed to facilitate and empower the study of climate change on a global scale. ESGF’s architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. The cornerstones of its interoperability are the peer-to-peer messaging, which is continuously exchanged among all nodes in the federation; a shared architecture for search and discovery; and a security infrastructure based on industry standards. ESGF integrates popular application engines available from the open-source community with custom components (for data publishing, searching, user interface, security, and messaging) that were developed collaboratively by the team. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP)—output used by the Intergovernmental Panel on Climate Change assessment reports. ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community.
How the Kano model contributes to Kansei engineering in services.
Hartono, Markus; Chuan, Tan Kay
2011-11-01
Recent studies show that products and services hold great appeal if they are attractively designed to elicit emotional feelings from customers. Kansei engineering (KE) has good potential to provide a competitive advantage to those able to read and translate customer affect and emotion into actual products and services. This study introduces an integrative framework of the Kano model and KE, applied to services. The Kano model was inserted into KE to exhibit the relationship between service attribute performance and customer emotional response. Essentially, the Kano model categorises service attribute quality into three major groups (must-be [M], one-dimensional [O] and attractive [A]). The findings of a case study that involved 100 tourists who stayed in luxury 4- and 5-star hotels are presented. As a practical matter, this research provides insight into which service attributes deserve more attention with regard to their significant impact on customer emotional needs. STATEMENT OF RELEVANCE: Apart from cognitive evaluation, emotions and hedonism play a big role in service encounters. Through a focus on the delighting qualities of service attributes, this research enables service providers and managers to establish the extent to which they prioritise their improvement efforts and to always satisfy customer emotions beyond expectation.
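For reference, a minimal sketch of the standard Kano evaluation table, which maps a respondent's paired answers to a functional and a dysfunctional question onto the categories named above (the layout follows the commonly published table, not necessarily the study's exact instrument):

```python
# Standard Kano evaluation table: a respondent answers a functional question
# ("How do you feel if the service HAS attribute X?") and a dysfunctional
# one ("...if it LACKS X?"); the answer pair maps to a category.
# A=attractive, O=one-dimensional, M=must-be, I=indifferent,
# R=reverse, Q=questionable.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
TABLE = [  # rows: functional answer, cols: dysfunctional answer
    ["Q", "A", "A", "A", "O"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "R", "R", "R", "Q"],
]

def kano_category(functional: str, dysfunctional: str) -> str:
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# e.g. a guest who would like a welcome drink but could live without it:
print(kano_category("like", "live-with"))  # -> "A" (attractive)
```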
Enhanced Product Generation at NASA Data Centers Through Grid Technology
NASA Technical Reports Server (NTRS)
Barkstrom, Bruce R.; Hinke, Thomas H.; Gavali, Shradha; Seufzer, William J.
2003-01-01
This paper describes how grid technology can support the ability of NASA data centers to provide customized data products. A combination of grid technology and commodity processors are proposed to provide the bandwidth necessary to perform customized processing of data, with customized data subsetting providing the initial example. This customized subsetting engine can be used to support a new type of subsetting, called phenomena-based subsetting, where data is subsetted based on its association with some phenomena, such as mesoscale convective systems or hurricanes. This concept is expanded to allow the phenomena to be detected in one type of data, with the subsetting requirements transmitted to the subsetting engine to subset a different type of data. The subsetting requirements are generated by a data mining system and transmitted to the subsetter in the form of an XML feature index that describes the spatial and temporal extent of the phenomena. For this work, a grid-based mining system called the Grid Miner is used to identify the phenomena and generate the feature index. This paper discusses the value of grid technology in facilitating the development of a high performance customized product processing and the coupling of a grid mining system to support phenomena-based subsetting.
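A minimal sketch of phenomena-based subsetting under assumed data structures (the actual system exchanges an XML feature index; plain records are used here for brevity):

```python
# Sketch of phenomena-based subsetting: a mining step emits a feature index
# giving the spatial and temporal extent of a detected phenomenon (e.g. a
# hurricane), and the subsetting engine keeps only granules of a different
# dataset that overlap that extent. All values are toy examples.
feature_index = {"phenomenon": "hurricane",
                 "lat": (20.0, 30.0), "lon": (-85.0, -70.0),
                 "time": (100, 130)}          # hours, toy units

granules = [
    {"id": "g1", "lat": 25.0, "lon": -80.0, "time": 110},
    {"id": "g2", "lat": 45.0, "lon": -80.0, "time": 110},
    {"id": "g3", "lat": 22.0, "lon": -75.0, "time": 300},
]

def overlaps(g, idx):
    return (idx["lat"][0] <= g["lat"] <= idx["lat"][1]
            and idx["lon"][0] <= g["lon"] <= idx["lon"][1]
            and idx["time"][0] <= g["time"] <= idx["time"][1])

subset = [g["id"] for g in granules if overlaps(g, feature_index)]
print(subset)   # ['g1'] -- only the granule inside the phenomenon's extent
```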
Taking It to the Top: A Lesson in Search Engine Optimization
ERIC Educational Resources Information Center
Frydenberg, Mark; Miko, John S.
2011-01-01
Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…
Cataloging as a Customer Service: Applying Knowledge to Technology Tools.
ERIC Educational Resources Information Center
Konovalov, Yuri
1999-01-01
Discusses the increase in significance and importance of cataloging and authority control in the online environment of libraries to help improve both precision and recall of searches. Highlights include the inadequacies of keyword searching; corporate library experiences; and library management with foreign library branches and foreign language…
19 CFR 103.10 - Fees for services.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... each hour or fraction thereof. If a computer search is required because of the nature of the records... based on computer time and supplies necessary to comply with the request. (4) Searches requiring travel...
Seyfried, Lisa; Hanauer, David A; Nease, Donald; Albeiruti, Rashad; Kavanagh, Janet; Kales, Helen C
2009-12-01
Electronic medical records (EMRs) have become part of daily practice for many physicians. Attempts have been made to apply electronic search engine technology to speed EMR review. This was a prospective, observational study to compare the speed and clinical accuracy of a medical record search engine vs. manual review of the EMR. Three raters reviewed 49 cases in the EMR to screen for eligibility in a depression study using the electronic medical record search engine (EMERSE). One week later raters received a scrambled set of the same patients including 9 distractor cases, and used manual EMR review to determine eligibility. For both methods, accuracy was assessed for the original 49 cases by comparison with a gold standard rater. Use of EMERSE resulted in considerable time savings; chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The percent agreement of raters with the gold standard (e.g. concurrent validity) using either EMERSE or manual review was not significantly different. Using a search engine optimized for finding clinical information in the free-text sections of the EMR can provide significant time savings while preserving clinical accuracy. The major power of this search engine is not from a more advanced and sophisticated search algorithm, but rather from a user interface designed explicitly to help users search the entire medical record in a way that protects health information.
Seyfried, Lisa; Hanauer, David; Nease, Donald; Albeiruti, Rashad; Kavanagh, Janet; Kales, Helen C.
2009-01-01
Purpose Electronic medical records (EMR) have become part of daily practice for many physicians. Attempts have been made to apply electronic search engine technology to speed EMR review. This was a prospective, observational study to compare the speed and accuracy of electronic search engine vs. manual review of the EMR. Methods Three raters reviewed 49 cases in the EMR to screen for eligibility in a depression study using the electronic search engine (EMERSE). One week later raters received a scrambled set of the same patients including 9 distractor cases, and used manual EMR review to determine eligibility. For both methods, accuracy was assessed for the original 49 cases by comparison with a gold standard rater. Results Use of EMERSE resulted in considerable time savings; chart reviews using EMERSE were significantly faster than traditional manual review (p=0.03). The percent agreement of raters with the gold standard (e.g. concurrent validity) using either EMERSE or manual review was not significantly different. Conclusions Using a search engine optimized for finding clinical information in the free-text sections of the EMR can provide significant time savings while preserving reliability. The major power of this search engine is not from a more advanced and sophisticated search algorithm, but rather from a user interface designed explicitly to help users search the entire medical record in a way that protects health information. PMID:19560962
Investigation into some characteristics of the mass-customized production paradigm
NASA Astrophysics Data System (ADS)
Tapper, Jerome; Sundar, Pratap S.; Kamarthi, Sagar V.
2000-10-01
In recent times, while markets are reaching their saturation limits and customers are becoming more demanding, a paradigm shift has been taking place from mass production to mass-customized production (MCP). The concept of mass customization (MC) focuses on satisfying a customer's unique needs with the help of new technologies such as the Internet, digital product realization, and re-configurable production facilities. In MC the needs of an individual customer are translated into a design, accordingly produced, and delivered to the customer. In this research three hypotheses related to MCP are investigated using data and information collected from ten companies engaged in MCP. These three hypotheses are: (1) mass-customized production systems can be classified into make-to-stock MCP, assemble-to-order MCP, make-to-order MCP, engineer-to-order MCP, and develop-to-order MCP; (2) in mass-customized production systems the process of customization eliminates customer sacrifice; and (3) mass-customized production systems can deliver products at mass-production cost. The preliminary study indicates that while the first hypothesis is valid, MCP companies rarely fulfill what is stated in the other two hypotheses.
Complex dynamics of our economic life on different scales: insights from search engine query data.
Preis, Tobias; Reith, Daniel; Stanley, H Eugene
2010-12-28
Search engine query data deliver insight into the behaviour of individuals, who constitute the smallest possible scale of our economic life. Individuals submit several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both the collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as complex systems of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with the weekly search volume of the corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series, with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns.
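A minimal sketch of the core measurement on synthetic data (the study's actual inputs were Google weekly search volumes for company names and S&P 500 transaction volumes):

```python
# Sketch of the core measurement: correlation between a company's weekly
# search volume and its weekly stock transaction volume. Data are synthetic
# and constructed to co-move, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
weeks = 300
search_volume = rng.lognormal(mean=3.0, sigma=0.3, size=weeks)
# Transaction volume co-moves with search volume plus independent noise.
transaction_volume = 5.0 * search_volume + rng.normal(0, 5, weeks)

r = np.corrcoef(search_volume, transaction_volume)[0, 1]
print(f"weekly correlation r = {r:.2f}")   # strongly positive by construction
```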
Research on the optimization strategy of web search engine based on data mining
NASA Astrophysics Data System (ADS)
Chen, Ronghua
2018-04-01
With the wide application of search engines, website information has become an important way for people to obtain information, and that information is growing in an explosive manner. It is very difficult for people to find the information they need on websites, and current search engines cannot fully meet this need, so there is an urgent demand for personalized website information services; data mining technology offers a way to meet this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a website search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. This has important practical value.
The History of the Internet Search Engine: Navigational Media and the Traffic Commodity
NASA Astrophysics Data System (ADS)
van Couvering, E.
This chapter traces the economic development of the search engine industry over time, beginning with the earliest Web search engines and ending with the domination of the market by Google, Yahoo! and MSN. Specifically, it focuses on the ways in which search engines are similar to and different from traditional media institutions, and how the relations between traditional and Internet media have changed over time. In addition to its historical overview, a core contribution of this chapter is the analysis of the industry using a media value chain based on audiences rather than on content, and the development of traffic as the core unit of exchange. It shows that traditional media companies failed when they attempted to create vertically integrated portals in the late 1990s, based on the idea of controlling Internet content, while search engines succeeded in creating huge "virtually integrated" networks based on control of Internet traffic rather than Internet content.
The value of price transparency in residential solar photovoltaic markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Shaughnessy, Eric; Margolis, Robert
Installed prices for residential solar photovoltaic (PV) systems have declined significantly in recent years. However, price dispersion and limited customer access to PV quotes prevent some prospective customers from obtaining low price offers. This study shows that improved customer access to prices - also known as price transparency - is a potential policy lever for further PV price reductions. We use customer search and strategic pricing theory to show that PV installation companies face incentives to offer lower prices in markets with more price transparency. We test this theoretical framework using a unique residential PV quote dataset. Our results show that installers offer lower prices to customers who are expected to receive more quotes. Our study provides a rationale for policies to improve price transparency in residential PV markets.
The value of price transparency in residential solar photovoltaic markets
O'Shaughnessy, Eric; Margolis, Robert
2018-04-05
Installed prices for residential solar photovoltaic (PV) systems have declined significantly in recent years. However, price dispersion and limited customer access to PV quotes prevent some prospective customers from obtaining low price offers. This study shows that improved customer access to prices - also known as price transparency - is a potential policy lever for further PV price reductions. We use customer search and strategic pricing theory to show that PV installation companies face incentives to offer lower prices in markets with more price transparency. We test this theoretical framework using a unique residential PV quote dataset. Our results show that installers offer lower prices to customers who are expected to receive more quotes. Our study provides a rationale for policies to improve price transparency in residential PV markets.
Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D
2010-03-01
Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analyses demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inversely proportional and linear relationship with the number of participating search engines.
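A toy sketch of two of the ideas described, with made-up engines, scores, and cutoffs: decoy-based error estimation per engine, followed by a minimum-consensus vote across engines:

```python
# Sketch: (1) estimate an error rate for each engine from decoy hits
# (FDR ~ decoys/targets above a score cutoff), and (2) accept a peptide
# only when a minimum number of engines agree.
from collections import Counter

def fdr_filter(hits, cutoff):
    """hits: list of (peptide, score, is_decoy). Return the target peptides
    above the cutoff together with the decoy-estimated FDR at that cutoff."""
    kept = [(p, s, d) for p, s, d in hits if s >= cutoff]
    decoys = sum(1 for _, _, d in kept if d)
    targets = len(kept) - decoys
    fdr = decoys / max(targets, 1)
    return {p for p, _, d in kept if not d}, fdr

engine_hits = {  # each engine has its own score scale
    "engineA": [("PEPTIDER", 42, False), ("QQQQK", 35, False), ("XDECOYX", 12, True)],
    "engineB": [("PEPTIDER", 0.93, False), ("AAAAK", 0.88, False)],
    "engineC": [("PEPTIDER", 17, False), ("QQQQK", 15, False)],
}
cutoffs = {"engineA": 20, "engineB": 0.9, "engineC": 10}

votes = Counter()
for engine, hits in engine_hits.items():
    accepted, _ = fdr_filter(hits, cutoffs[engine])
    votes.update(accepted)

consensus = [pep for pep, n in votes.items() if n >= 2]  # >=2 engines agree
print(consensus)   # ['PEPTIDER', 'QQQQK']
```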
Information Discovery and Retrieval Tools
2004-12-01
This session will focus on the various Internet search engines and directories, and on how to improve the user experience through the use of such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Information Discovery and Retrieval Tools
2003-04-01
This session will focus on the various Internet search engines and directories, and on how to improve the user experience through the use of such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Incorporating the Internet into Traditional Library Instruction.
ERIC Educational Resources Information Center
Fonseca, Tony; King, Monica
2000-01-01
Presents a template for teaching traditional library research and one for incorporating the Web. Highlights include the differences between directories and search engines; devising search strategies; creating search terms; how to choose search engines; evaluating online resources; helpful Web sites; and how to read URLs to evaluate a Web site's…
Learning Mechanisms in Multidisciplinary Teamwork with Real Customers and Open-Ended Problems
ERIC Educational Resources Information Center
Heikkinen, Juho; Isomöttönen, Ville
2015-01-01
Recently, there has been a trend towards adding a multidisciplinary or multicultural element to traditional monodisciplinary project courses in computing and engineering. In this article, we examine the implications of multidisciplinarity for students' learning experiences during a one-semester project course for real customers. We use a…
Applying the Theory of Constraints to a Base Civil Engineering Operations Branch
1991-09-01
Figures include: Typical Work Order Processing; Typical Job Order Processing; Typical Simplified In-Service Work Plan. Figure 1 traces work order processing from the customer's request through the Service Planning Unit, Production Control Center, Material Control, Scheduling, and the CE Shops.
Experimental characterization of a small custom-built double-acting gamma-type Stirling engine
NASA Astrophysics Data System (ADS)
Intsiful, Peter; Mensah, Francis; Thorpe, Arthur
This paper investigates the characterization of a small custom-built double-acting gamma-type Stirling engine. The Stirling-cycle engine is a reciprocating energy conversion machine with working spaces operating under conditions of oscillating pressure and flow. These conditions may be due to compressibility as well as pressure and temperature fluctuations. In the standard literature, research indicates that there is a lack of basic physics to account for the transport phenomena that manifest themselves in the working spaces of reciprocating engines. Previous techniques involve the governing equations of mass, momentum and energy; some authors use engineering thermodynamics. None of these approaches addresses this particular engine. A technique for observing and analyzing the behavior of this engine via parametric spectral profiles has been developed, using laser beams. These profiles enabled the generation of p-v curves and other trajectories for investigating the thermo-physical and thermo-hydrodynamic phenomena that manifest in the exchangers. The engine's performance was examined. The results indicate that with a current load of 35.78 A, electric power of 0.505 kW was generated at a speed of 240 rpm, and an efficiency of 29.50 percent was obtained. Supported by NASA grants to Howard University (NASA/HBCU-NHRETU & CSTEA).
Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France
2013-10-01
To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7.6) minutes with InfoClinique and 22.3 (7.8) minutes with the Trip database (P = .30). Participants' perceptions of each engine's effect on the decision-making process were very positive and similar for both search engines. Family medicine residents' ability to provide correct answers to clinical questions increased dramatically and similarly with the use of both InfoClinique and the Trip database. These tools have strong potential to increase the quality of medical care.
Metal Matrix Composites: Custom-made Materials for Automotive and Aerospace Engineering
NASA Astrophysics Data System (ADS)
Kainer, Karl U.
2006-02-01
Since the properties of MMCs can be directly designed "into" the material, they can fulfill all the demands set by design engineers. This book surveys the latest results and development possibilities for MMCs as engineering and functional materials, making it of utmost value to all materials scientists and engineers seeking in-depth background information on the potentials these materials have to offer in research, development and design engineering.
PlateRunner: A Search Engine to Identify EMR Boilerplates.
Divita, Guy; Workman, T Elizabeth; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V
2016-01-01
Medical text contains boilerplated content, an artifact of pull-down forms in EMRs. Boilerplated content is a source of challenges for concept extraction from clinical text. This paper introduces PlateRunner, a search engine over boilerplates from the US Department of Veterans Affairs (VA) EMR. Boilerplates containing concepts should be identified and reviewed to recognize challenging formats, identify high-yield document titles, and fine-tune section zoning. This search engine has the capability to filter negated and asserted concepts and to save and search query results. This tool can save queries, search results, and documents found for later analysis.
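A minimal sketch of asserted-versus-negated concept filtering in the NegEx style (the trigger list, window size, and example note are illustrative simplifications, not PlateRunner's implementation):

```python
# Sketch of asserted-vs-negated concept filtering: a NegEx-style check for
# negation triggers in the text window preceding a concept mention.
NEGATION_TRIGGERS = ("no ", "denies ", "without ", "negative for ")

def is_negated(text: str, concept: str, window: int = 40) -> bool:
    i = text.lower().find(concept.lower())
    if i < 0:
        return False                         # concept not mentioned at all
    preceding = text.lower()[max(0, i - window):i]
    return any(trigger in preceding for trigger in NEGATION_TRIGGERS)

note = "Patient denies homelessness; reports stable housing."
print(is_negated(note, "homelessness"))      # True -> filtered as negated
```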
What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?
ERIC Educational Resources Information Center
Seyedarabi, Faezeh
2014-01-01
This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
... field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter '05200011' in the 'Docket Number' field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...
Brief Report: Consistency of Search Engine Rankings for Autism Websites
ERIC Educational Resources Information Center
Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.
2012-01-01
The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…
Green Hospital and Climate Change: Their Interrelationship and the Way Forward
Kaur, Dilpreet
2015-01-01
Climate change is a reality, and the modern healthcare sector not only contributes towards this grave phenomenon but is itself being affected by it. The present review was thus conducted to understand the meaning of ‘Green Hospital’, to identify the many ways in which the health sector is contributing towards climate change, to explore possibilities for countering this grave trend, and lastly to look for institutions that are pioneering change. Data for the review were extracted from multiple online sources using the Google search engine. It was found that hospitals, being resource-intensive establishments, consume vast amounts of electricity, water, food and construction materials to provide high quality care. It was also found that certain healthcare institutions, by employing simple, smart and sustainable measures, can greatly reduce their environmental footprint. But constructing Green Hospitals can be a challenge considering local conditions and growing customer expectations. PMID:26814377
Fuel for the Future: Biodiesel - A Case study
NASA Astrophysics Data System (ADS)
Lutterbach, Márcia T. S.; Galvão, Mariana M.
High crude oil prices, concern over the depletion of world reserves, and growing apprehension about the environment have encouraged the search for alternative energy sources that use renewable natural resources to reduce or replace traditional fossil fuels such as diesel and gasoline (Hill et al., 2006). Among renewable fuels, biodiesel has attracted great interest, especially in Europe and the United States. Biodiesel is defined by the World Customs Organization (WCO) as 'a mixture of mono-alkyl esters of long-chain [C16-C18] fatty acids derived from vegetable oils or animal fats, which is a domestic renewable fuel for diesel engines and which meets the US specifications of ASTM D 6751'. Biodiesel is biodegradable and non-toxic, produces 93% more energy than the fossil energy required for its production, reduces greenhouse gas emissions by 40% compared to fossil diesel (Peterson and Hustrulid, 1998; Hill et al., 2006) and stimulates agriculture.
VLSI processors for signal detection in SETI
NASA Technical Reports Server (NTRS)
Duluk, J. F.; Linscott, I. R.; Peterson, A. M.; Burr, J.; Ekroot, B.; Twicken, J.
1989-01-01
The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.
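As an illustration of step (1) only, a minimal numpy sketch, not the custom VLSI pipeline described above: compute the power spectrum of a sampled band and flag narrow-band bins that stand well above the noise floor; all signal parameters below are invented for the example.

```python
import numpy as np

# Synthetic band: noise plus a weak narrow-band tone at 1.25 kHz (invented)
fs = 8192.0                                 # sample rate (Hz)
t = np.arange(fs) / fs                      # one second of samples
x = np.random.randn(t.size) + 0.2 * np.sin(2 * np.pi * 1250.0 * t)

# Step 1: spectral analysis via the DFT (the job of a DFT processor)
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Step 2 (crudely): flag narrow-band bins far above the median noise floor
threshold = 10 * np.median(spectrum)
for f, p in zip(freqs, spectrum):
    if p > threshold:
        print(f"candidate narrow-band signal near {f:.0f} Hz")
```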
VLSI processors for signal detection in SETI.
Duluk, J F; Linscott, I R; Peterson, A M; Burr, J; Ekroot, B; Twicken, J
1989-01-01
The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.
Hung, Kun-Che; Tseng, Ching-Shiow; Dai, Lien-Guo; Hsu, Shan-hui
2016-03-01
Conventional 3D printing may not readily incorporate bioactive ingredients for controlled release because the process often involves heat, organic solvents, or crosslinkers that reduce the bioactivity of the ingredients. Water-based 3D printing materials with controlled bioactivity for customized cartilage tissue engineering are developed in this study. The printing ink contains a water dispersion of synthetic biodegradable polyurethane (PU) elastic nanoparticles, hyaluronan, and the bioactive ingredient TGFβ3 or the small-molecule drug Y27632 as a replacement for TGFβ3. Compliant scaffolds are printed from the ink at low temperature. These scaffolds promote the self-aggregation of mesenchymal stem cells (MSCs) and, with timely release of the bioactive ingredients, induce the chondrogenic differentiation of MSCs and produce matrix for cartilage repair. Moreover, the growth factor-free controlled-release design may prevent cartilage hypertrophy. Rabbit knee implantation supports the potential of the novel 3D printing scaffolds in cartilage regeneration. We consider that 3D-printed composite scaffolds with controlled-release bioactivity may have potential in customized tissue engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.
D-score: a search engine independent MD-score.
Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P
2013-03-01
While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
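A minimal sketch of the scoring idea, under our assumption (not the paper's exact formulation) that each candidate site assignment carries a posterior error probability (PEP): candidates are scored as -10*log10(PEP), and the D-score is the gap between the two best candidates, so near-ties signal ambiguous localization.

```python
import math

def d_score(peps):
    """Sketch of a D-score: peps maps each candidate site assignment of
    the same peptide to its posterior error probability (PEP).
    Candidates are scored as -10*log10(PEP); the D-score is the gap
    between the best and second-best candidate, so similar scores
    (ambiguous localization) give a D-score near zero."""
    scores = sorted((-10 * math.log10(p) for p in peps.values()), reverse=True)
    if len(scores) < 2:          # a single candidate is unambiguous
        return scores[0]
    return scores[0] - scores[1]

# Hypothetical PEPs for two phosphosite assignments of one peptide
print(d_score({"S3": 1e-4, "T7": 5e-3}))    # clear gap -> confident site
print(d_score({"S3": 1e-3, "T7": 1.2e-3}))  # near-tie -> ambiguous
```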
Synthetic biology meets tissue engineering
Davies, Jamie A.; Cachat, Elise
2016-01-01
Classical tissue engineering is aimed mainly at producing anatomically and physiologically realistic replacements for normal human tissues. It is done either by encouraging cellular colonization of manufactured matrices or cellular recolonization of decellularized natural extracellular matrices from donor organs, or by allowing cells to self-organize into organs as they do during fetal life. For repair of normal bodies, this will be adequate but there are reasons for making unusual, non-evolved tissues (repair of unusual bodies, interface to electromechanical prostheses, incorporating living cells into life-support machines). Synthetic biology is aimed mainly at engineering cells so that they can perform custom functions: applying synthetic biological approaches to tissue engineering may be one way of engineering custom structures. In this article, we outline the ‘embryological cycle’ of patterning, differentiation and morphogenesis and review progress that has been made in constructing synthetic biological systems to reproduce these processes in new ways. The state-of-the-art remains a long way from making truly synthetic tissues, but there are now at least foundations for future work. PMID:27284030
MedlinePlus Connect: Web Application
... will result in a query to the MedlinePlus search engine. If you specify a code and the name/ ... system or problem code, will use the MedlinePlus search engine (English only): https://connect.medlineplus.gov/application?mainSearchCriteria. ...
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
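A toy sketch of the matching idea, assuming a small in-memory waveform database with known source parameters (the engine described here uses a fast indexed search over a large database, not this brute-force scan):

```python
import numpy as np

def best_match(query, database):
    """Return the parameters of the stored waveform most similar to the
    query, using normalized correlation as the similarity measure.
    Brute force; a real engine indexes the database for fast search."""
    def norm(w):
        w = w - w.mean()
        return w / (np.linalg.norm(w) + 1e-12)
    q = norm(query)
    scores = {name: float(norm(w) @ q) for name, (w, _) in database.items()}
    name = max(scores, key=scores.get)
    return name, database[name][1], scores[name]

# Hypothetical database: event id -> (seismogram samples, source parameters)
rng = np.random.default_rng(0)
db = {f"event{i}": (rng.standard_normal(512), {"Mw": 5 + i / 10}) for i in range(100)}
query = db["event42"][0] + 0.3 * rng.standard_normal(512)  # noisy copy
print(best_match(query, db))  # -> event42's stored parameters
```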
Real-time earthquake monitoring using a search engine method
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-01-01
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-26
... distribution of electric motors, as well as engineering, customer service and information technology (IT... portion of the supply of engineering services (or like or directly competitive services) to a foreign..., Missouri, who were engaged in employment related to the supply of warehousing, distribution, engineering...
ERIC Educational Resources Information Center
Nicole Hunter, Deirdre-Annaliese
2015-01-01
Increasing pressure to transform the teaching and learning of engineering is supported by mounting research evidence for the value of learner-centered pedagogies. Despite this evidence, engineering faculty are often unsuccessful in applying such teaching approaches because they lack the necessary knowledge to customize these pedagogies for their…
‘Sciencenet’—towards a global search and share engine for all scientific knowledge
Lütjohann, Dominic S.; Shah, Asmi H.; Christen, Michael P.; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-01-01
Summary: Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, ‘Sciencenet’, which facilitates rapid searching over this large data space. By ‘bringing the search engine to the data’, we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. Availability and Implementation: The free-to-use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the ‘AskMe’ experiment publisher is written in Python 2.7, and the backend ‘YaCy’ search engine is based on Java 1.6. Contact: urban.liebel@kit.edu Supplementary Material: Detailed instructions and descriptions can be found on the project homepage: http://sciencenet.kit.edu. PMID:21493657
Just-in-Time Web Searches for Trainers & Adult Educators.
ERIC Educational Resources Information Center
Kirk, James J.
Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…
Discovering How Students Search a Library Web Site: A Usability Case Study.
ERIC Educational Resources Information Center
Augustine, Susan; Greene, Courtney
2002-01-01
Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…
Use of an Academic Library Web Site Search Engine.
ERIC Educational Resources Information Center
Fagan, Jody Condit
2002-01-01
Describes an analysis of the search engine logs of Southern Illinois University, Carbondale's library to determine how patrons used the site search. Discusses results that showed patrons did not understand the function of the search and explains improvements that were made in the Web site and in online reference services. (Author/LRW)
GeoSearcher: Location-Based Ranking of Search Engine Results.
ERIC Educational Resources Information Center
Watters, Carolyn; Amoudi, Ghada
2003-01-01
Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
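A minimal sketch of the re-ranking step, assuming coordinates have already been assigned to each result (for example, by the URL-based algorithm described above): results are reordered by great-circle distance from the query location.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def geo_rerank(results, query_location):
    """Reorder search results by distance from the query location.
    Each result is (url, (lat, lon)); coordinates are assumed to have
    been assigned beforehand (e.g., inferred from the URL)."""
    return sorted(results, key=lambda r: haversine_km(r[1], query_location))

results = [("site-a.ca", (44.65, -63.57)),   # Halifax
           ("site-b.uk", (51.51, -0.13)),    # London
           ("site-c.ca", (45.42, -75.70))]   # Ottawa
print(geo_rerank(results, (44.65, -63.57)))  # Halifax query: a, c, b
```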
Clinician search behaviors may be influenced by search engine design.
Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul
2010-06-30
Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behavior. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. This study provides evidence that different search engine designs are associated with different user search behaviors.
A Real-Time All-Atom Structural Search Engine for Proteins
Gonzalez, Gabriel; Hannigan, Brett; DeGrado, William F.
2014-01-01
Protein designers use a wide variety of software tools for de novo design, yet their repertoire still lacks a fast and interactive all-atom search engine. To solve this, we have built the Suns program: a real-time, atomic search engine integrated into the PyMOL molecular visualization system. Users build atomic-level structural search queries within PyMOL and receive a stream of search results aligned to their query within a few seconds. This instant feedback cycle enables a new “designability”-inspired approach to protein design where the designer searches for and interactively incorporates native-like fragments from proven protein structures. We demonstrate the use of Suns to interactively build protein motifs, tertiary interactions, and to identify scaffolds compatible with hot-spot residues. The official web site and installer are located at http://www.degradolab.org/suns/ and the source code is hosted at https://github.com/godotgildor/Suns (PyMOL plugin, BSD license), https://github.com/Gabriel439/suns-cmd (command line client, BSD license), and https://github.com/Gabriel439/suns-search (search engine server, GPLv2 license). PMID:25079944
A real-time all-atom structural search engine for proteins.
Gonzalez, Gabriel; Hannigan, Brett; DeGrado, William F
2014-07-01
Protein designers use a wide variety of software tools for de novo design, yet their repertoire still lacks a fast and interactive all-atom search engine. To solve this, we have built the Suns program: a real-time, atomic search engine integrated into the PyMOL molecular visualization system. Users build atomic-level structural search queries within PyMOL and receive a stream of search results aligned to their query within a few seconds. This instant feedback cycle enables a new "designability"-inspired approach to protein design where the designer searches for and interactively incorporates native-like fragments from proven protein structures. We demonstrate the use of Suns to interactively build protein motifs, tertiary interactions, and to identify scaffolds compatible with hot-spot residues. The official web site and installer are located at http://www.degradolab.org/suns/ and the source code is hosted at https://github.com/godotgildor/Suns (PyMOL plugin, BSD license), https://github.com/Gabriel439/suns-cmd (command line client, BSD license), and https://github.com/Gabriel439/suns-search (search engine server, GPLv2 license).
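A toy sketch of the flavor of search involved, not the Suns algorithm itself (which handles alignment and streaming results): given a database of same-size atom-coordinate fragments, return those within an RMSD cutoff of a query motif; all fragments below are randomly generated for the example.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two centered coordinate sets."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

def search(query, fragments, cutoff=1.0):
    """Return fragment ids whose coordinates lie within cutoff (in Å) RMSD
    of the query motif. Assumes equal atom counts and ordering; a real
    engine would also search over rotations and correspondences."""
    return [fid for fid, coords in fragments.items()
            if coords.shape == query.shape and rmsd(query, coords) < cutoff]

# Hypothetical fragment database: id -> (n_atoms, 3) coordinates
rng = np.random.default_rng(1)
frags = {f"frag{i}": rng.uniform(0, 10, size=(4, 3)) for i in range(1000)}
motif = frags["frag7"] + rng.normal(0, 0.1, size=(4, 3))  # perturbed copy
print(search(motif, frags))  # expect ['frag7'] (plus rare chance hits)
```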
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.
2013-01-01
A search for specific and well-defined literature related to the subject of interest is the foremost step in research. Once we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, Yahoo, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge of the types of literature and levels of evidence, and of a search engine's features (availability, user interface, ease of access, reputable content, and period of time covered), allows their optimal use and maximal utility in the field of medicine. A literature search is a dynamic and interactive process; there is no one way to conduct a search, and many variables are involved. It is suggested that a systematic search of the literature that uses the available electronic resources effectively is more likely to produce quality research. PMID:24179937
Electronic biomedical literature search for budding researcher.
Thakre, Subhash B; Thakre S, Sushama S; Thakre, Amol D
2013-09-01
A search for specific and well-defined literature related to the subject of interest is the foremost step in research. Once we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, Yahoo, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge of the types of literature and levels of evidence, and of a search engine's features (availability, user interface, ease of access, reputable content, and period of time covered), allows their optimal use and maximal utility in the field of medicine. A literature search is a dynamic and interactive process; there is no one way to conduct a search, and many variables are involved. It is suggested that a systematic search of the literature that uses the available electronic resources effectively is more likely to produce quality research.
Preliminary Comparison of Three Search Engines for Point of Care Access to MEDLINE® Citations
Hauser, Susan E.; Demner-Fushman, Dina; Ford, Glenn M.; Jacobs, Joshua L.; Thoma, George
2006-01-01
Medical resident physicians used MD on Tap in real time to search for MEDLINE citations relevant to clinical questions using three search engines: Essie, Entrez, and Google™ (listed in order of performance). PMID:17238564
Designing Search: Effective Search Interfaces for Academic Library Web Sites
ERIC Educational Resources Information Center
Teague-Rector, Susan; Ghaphery, Jimmy
2008-01-01
Academic libraries customize, support, and provide access to myriad information systems, each with complex graphical user interfaces. The number of possible information entry points on an academic library Web site is both daunting to the end-user and consistently challenging to library Web site designers. Faced with the challenges inherent in…
A Smart Itsy Bitsy Spider for the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Chung, Yi-Ming; Ramsey, Marshall; Yang, Christopher C.
1998-01-01
This study tested two Web personal spiders (i.e., agents that take users' requests and perform real-time customized searches) based on best-first search and genetic-algorithm techniques. The results were comparable and complementary, although the genetic algorithm obtained a higher recall value. The Java-based interface was found to be necessary…
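A minimal sketch of a best-first personal spider, assuming an in-memory link graph in place of live HTTP fetches: candidate pages are scored against the user's request by keyword overlap, and the frontier is a priority queue so the most promising link is always expanded next.

```python
import heapq

def score(text, keywords):
    """Relevance of a page to the user's request: keyword overlap."""
    words = set(text.lower().split())
    return sum(1 for k in keywords if k in words)

def best_first_spider(start, pages, links, keywords, budget=4):
    """Expand up to `budget` pages, always taking the highest-scoring
    frontier page next (ties broken by URL for determinism)."""
    frontier = [(-score(pages[start], keywords), start)]
    seen, visited = {start}, []
    while frontier and len(visited) < budget:
        neg, url = heapq.heappop(frontier)
        visited.append((url, -neg))
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(pages[nxt], keywords), nxt))
    return visited

# Hypothetical mini-web standing in for live fetches
pages = {"a": "intro to genetic algorithms", "b": "cooking blog",
         "c": "genetic algorithms for search agents", "d": "search spiders"}
links = {"a": ["b", "c"], "c": ["d"]}
print(best_first_spider("a", pages, links, ["genetic", "search", "algorithms"]))
```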
Guidelines for Conducting an Ethnic Heritage Search.
ERIC Educational Resources Information Center
Williams, Maxine Patrick
Based on the work of a 22-member research team in the San Diego Community College District, this booklet offers guidelines for developing cultural awareness and presents instruments for conducting an ethnic heritage search, i.e., a systematic examination of a culture to, for example, reveal reasons for customs or practices or clarify the modes of…
Index Compression and Efficient Query Processing in Large Web Search Engines
ERIC Educational Resources Information Center
Ding, Shuai
2013-01-01
The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
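A minimal sketch pairing the two ideas named in the title, on a toy corpus: build an inverted index of postings, then compress each gap-encoded posting list with variable-byte coding so that long lists of small gaps shrink.

```python
from collections import defaultdict

def varbyte(n):
    """Variable-byte encode one integer: 7 data bits per byte,
    high bit set on the final byte."""
    out = []
    while True:
        out.insert(0, n & 0x7F)
        n >>= 7
        if n == 0:
            break
    out[-1] |= 0x80
    return bytes(out)

docs = ["new search engine", "search engine index", "index compression"]

# Inverted index: term -> sorted list of doc ids containing it
index = defaultdict(list)
for doc_id, text in enumerate(docs):
    for term in set(text.split()):
        index[term].append(doc_id)

# Compress each posting list as gaps (deltas), then varbyte-code the gaps
compressed = {}
for term, postings in index.items():
    gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
    compressed[term] = b"".join(varbyte(g) for g in gaps)

print(index["index"], compressed["index"])  # [1, 2] -> b'\x81\x81'
```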
Knowing healthcare customer needs key to profitable niche marketing.
Weller, S C
1992-06-01
A potential $177 million-a-year market has been opened to textile rental operators, thanks to OSHA's recent ruling on bloodborne pathogens. Healthcare providers nationwide are now searching for solutions to their protective apparel needs. Such customer needs are what drive niche markets. Whether you've been serving the healthcare market for years or are just now targeting it, here are the marketing strategies you need.
Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li
2006-02-24
Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective is to test the ability of an Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in the review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched through a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000-6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with just the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large results from Boolean searches. A statistical search engine does not appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.
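The recall-into-top-500 figure reported here is recall at a rank cutoff; a short sketch with hypothetical data:

```python
def recall_at_k(ranked_ids, relevant_ids, k=500):
    """Fraction of relevant records ranked within the top k."""
    top = set(ranked_ids[:k])
    return sum(1 for r in relevant_ids if r in top) / len(relevant_ids)

# Hypothetical: 10 included studies, 7 ranked within the top 500 -> 0.7
ranked = [f"rec{i}" for i in range(3000)]
included = [f"rec{i}" for i in (3, 40, 90, 150, 220, 310, 480, 700, 1200, 2500)]
print(recall_at_k(ranked, included))  # 0.7
```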
Modeling Group Interactions via Open Data Sources
2011-08-30
data. The state-of-the-art search engines are designed to help general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ
Wang, Ximing; Liu, Brent J; Martinez, Clarisa; Zhang, Xuejun; Winstein, Carolee J
2015-01-01
Imaging-based clinical trials can benefit from a solution to efficiently collect, analyze, and distribute multimedia data at various stages within the workflow. Currently, the data management needs of these trials are typically addressed with custom-built systems. However, software development of custom-built systems for versatile workflows can be resource-consuming. To address these challenges, we present a system with a workflow engine for imaging-based clinical trials. The system enables a project coordinator to build a data collection and management system tailored to the study protocol workflow without programming. A Web Access to DICOM Objects (WADO) module with novel features is integrated to further facilitate imaging-related studies. The system was initially evaluated in an imaging-based rehabilitation clinical trial. The evaluation shows that development cost can be much reduced compared with a custom-built system. By providing a means to customize a system and automate the workflow, the system saves development time and reduces errors, especially for imaging clinical trials. PMID:25870169
Mobile Timekeeping Application Built on Reverse-Engineered JPL Infrastructure
NASA Technical Reports Server (NTRS)
Witoff, Robert J.
2013-01-01
Every year, non-exempt employees cumulatively waste over one man-year tracking their time and using the timekeeping Web page to save those times. This app eliminates this waste. The innovation is a native iPhone app. Libraries were built around a reverse-engineered JPL API. It represents a punch-in/punch-out paradigm for timekeeping. It is accessible natively via iPhones, and features ease of access. Any non-exempt employee can natively punch in and out, as well as save and view their JPL timecard. This app is built on custom libraries created by reverse-engineering the standard timekeeping application. Communication is through custom libraries that re-route traffic through BrowserRAS (remote access service). This has value at any center where employees track their time.
Committee on Women in Science, Engineering, and Medicine (CWSEM)
Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet
NASA Astrophysics Data System (ADS)
Hess, A.
Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.
E-Referencer: Transforming Boolean OPACs to Web Search Engines.
ERIC Educational Resources Information Center
Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn
E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…
The Effect of Individual Differences on Searching the Web.
ERIC Educational Resources Information Center
Ihadjadene, Madjid; Chaudiron, Stephanne; Martins, Daniel
2003-01-01
Reports results from a project that investigated the influence of two types of expertise--knowledge of the search domain and experience of the Web search engines--on students' use of a Web search engine. Results showed participants with good knowledge in the domain and participants with high experience of the Web had the best performances. (AEF)
Document Clustering Approach for Meta Search Engine
NASA Astrophysics Data System (ADS)
Kumar, Naresh, Dr.
2017-08-01
The size of the WWW is growing exponentially with every change in technology, resulting in huge amounts of information and long lists of URLs, far more than anyone can visit page by page. If page-ranking algorithms are used properly, the user's search space can be restricted to a few pages of results. The available literature shows, however, that no single search system can provide quality results across all domains. This paper addresses that problem by introducing a new meta search engine that determines the relevance of each web page to the query and clusters the results accordingly. The proposed approach reduces user effort and improves both the quality of the results and the performance of the meta search engine.
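A minimal sketch of the clustering step under stated assumptions (scikit-learn available; toy snippets standing in for a merged result list): vectorize each result with TF-IDF and group similar pages with k-means.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical merged result snippets from several underlying engines
results = [
    "python web crawler tutorial",
    "building a web crawler in python",
    "jaguar habitat and diet",
    "jaguar big cat conservation",
]

vectors = TfidfVectorizer().fit_transform(results)      # TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, snippet in sorted(zip(labels, results)):
    print(label, snippet)   # crawler pages in one cluster, jaguar in the other
```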
[Design and fabrication of the custom-made titanium condyle by selective laser melting technology].
Chen, Jianyu; Luo, Chongdai; Zhang, Chunyu; Zhang, Gong; Qiu, Weiqian; Zhang, Zhiguang
2014-10-01
To design and fabricate a custom-made titanium mandibular condyle by reverse engineering combined with selective laser melting (SLM) technology, and to explore the mechanical properties of the SLM-processed samples and the application of the custom-made condyle in temporomandibular joint (TMJ) reconstruction. The three-dimensional model of the mandibular condyle was obtained from a series of CT databases. The custom-made condyle model was designed with reverse engineering software. The mandibular condyle was made of titanium powder with a particle size of 20-65 µm as the basic material, and processing was carried out in an argon atmosphere by the SLM machine. The yield strength, ultimate strength, bending strength, hardness, surface morphology and roughness were tested and analyzed. Finite element analysis (FEA) was used to analyze the stress distribution. The complex geometry and the surface of the custom-made condyle could be reproduced precisely by SLM. The mechanical results showed that the yield strength, ultimate strength, bending strength and hardness were (559±14) MPa, (659±32) MPa, (1067±42) MPa, and (212±4) HV, respectively. The surface roughness was reduced by sandblast treatment. The custom-made titanium condyle can be fabricated by SLM technology, which is time-saving and highly digitized. The mechanical properties of the SLM samples meet the requirements for surgical implant materials in the clinic. The possibility of fabricating a custom-made titanium mandibular condyle, combined with FEA, opens new and interesting perspectives for TMJ reconstruction.
Zhao, Panpan; Zhong, Jiayong; Liu, Wanting; Zhao, Jing; Zhang, Gong
2017-12-01
Multiple search engines based on various models have been developed to search MS/MS spectra against a reference database, providing different results for the same data set. How to integrate these results efficiently with minimal compromise on false discoveries is an open question, due to the lack of an independent, reliable, and highly sensitive standard. We took advantage of the translating mRNA sequencing (RNC-seq) result as a standard to evaluate strategies for integrating the protein identifications from various search engines. We used seven mainstream search engines (Andromeda, Mascot, OMSSA, X!Tandem, pFind, InsPecT, and ProVerB) to search the same label-free MS data sets of human cell lines Hep3B, MHCCLM3, and MHCC97H from the Chinese C-HPP Consortium for Chromosomes 1, 8, and 20. As expected, the union of the seven engines resulted in boosted false identifications, whereas the intersection of the seven engines remarkably decreased the identification power. We found that accepting identifications made by at least two of the seven engines maximized the protein identification power while minimizing the ratio of suspicious to translation-supported identifications (STR), as monitored by our STR index, based on RNC-seq. Furthermore, this strategy also significantly improves the peptide coverage of the protein amino acid sequence. In summary, we demonstrated a simple strategy to significantly improve the performance of shotgun mass spectrometry by integrating multiple search engines at the protein level, maximizing the utilization of the current MS spectra without additional experimental work.
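A minimal sketch of the integration rule, using hypothetical per-engine identification lists: a protein is kept if at least two engines report it.

```python
from collections import Counter

def integrate(engine_results, min_engines=2):
    """Keep proteins identified by at least `min_engines` of the engines."""
    votes = Counter(p for proteins in engine_results.values()
                    for p in set(proteins))
    return {p for p, n in votes.items() if n >= min_engines}

# Hypothetical identifications from three of the seven engines
results = {
    "Mascot":   ["P1", "P2", "P3"],
    "X!Tandem": ["P2", "P3", "P4"],
    "OMSSA":    ["P3", "P5"],
}
print(sorted(integrate(results)))  # ['P2', 'P3'] clear the 2-engine bar
```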
Semantically optiMize the dAta seRvice operaTion (SMART) system for better data discovery and access
NASA Astrophysics Data System (ADS)
Yang, C.; Huang, T.; Armstrong, E. M.; Moroni, D. F.; Liu, K.; Gui, Z.
2013-12-01
Abstract: We present the Semantically optiMize the dAta seRvice operaTion (SMART) system for better data discovery and access across the NASA data systems, the Global Earth Observation System of Systems (GEOSS) Clearinghouse, and Data.gov, to help scientists select Earth observation data that better fit their needs, in four aspects: 1. Integrating and interfacing the SMART system to include the functionality of a) semantic reasoning based on Jena, an open-source semantic reasoning engine, b) semantic similarity calculation, c) recommendation based on spatiotemporal, semantic, and user workflow patterns, and d) ranking results based on similarity between search terms and data ontology. 2. Collaborating with data user communities to a) capture science data ontology and record relevant ontology triple stores, b) analyze and mine user search and download patterns, c) integrate SMART into a metadata-centric discovery system for community-wide usage and feedback, and d) customize the data discovery, search and access user interface to include the ranked results, recommendation components, and semantics-based navigation. 3. Laying the groundwork to interface the SMART system with other data search and discovery systems as an open-source data search and discovery solution. The SMART system leverages NASA, GEO, and FGDC data discovery, search and access for the Earth science community by enabling scientists to readily discover and access data appropriate to their endeavors, increasing the efficiency of data exploration and decreasing the time that scientists must spend on searching, downloading, and processing the datasets most applicable to their research. By incorporating the SMART system, the time devoted to discovering the most applicable dataset will likely be substantially reduced, thereby reducing the number of user inquiries and likewise reducing the time and resources expended by a data center in addressing them. Keywords: EarthCube; ECHO; DAACs; GeoPlatform; Geospatial Cyberinfrastructure. References: 1. Yang, P., Evans, J., Cole, M., Alameh, N., Marley, S., & Bambacus, M. (2007). The Emerging Concepts and Applications of the Spatial Web Portal. Photogrammetric Engineering & Remote Sensing, 73(6):691-698. 2. Zhang, C., Zhao, T. and W. Li. (2010). The Framework of a Geospatial Semantic Web based Spatial Decision Support System for Digital Earth. International Journal of Digital Earth, 3(2):111-134. 3. Yang, C., Raskin, R., Goodchild, M.F., Gahegan, M., 2010. Geospatial Cyberinfrastructure: Past, Present and Future, Computers, Environment, and Urban Systems, 34(4):264-277. 4. Liu, K., Yang, C., Li, W., Gui, Z., Xu, C., Xia, J., 2013. Using ontology and similarity calculations to rank Earth science data searching results, International Journal of Geospatial Information Applications. (in press)
Pratt and Whitney Overview and Advanced Health Management Program
NASA Technical Reports Server (NTRS)
Inabinett, Calvin
2008-01-01
Hardware Development Activity: Design and test custom multi-layer circuit boards for use in the Fault Emulation Unit; logic design performed using VHDL; lay out power system for lab hardware; work lab issues with software developers and software testers; interface with Engine Systems personnel on performance of engine hardware components; perform off-nominal testing with new engine hardware.
Comet: an open-source MS/MS sequence database search tool.
Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R
2013-01-01
Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sundanese ancient manuscripts search engine using probability approach
NASA Astrophysics Data System (ADS)
Suryani, Mira; Hadi, Setiawan; Paulus, Erick; Nurma Yulita, Intan; Supriatna, Asep K.
2017-10-01
Today, Information and Communication Technology (ICT) has become a regular part of every aspect of life, including culture and heritage. Sundanese ancient manuscripts, part of the Sundanese heritage, are in damaged condition, as is the information they contain. To preserve the information in Sundanese ancient manuscripts and make it easier to search, a search engine has been developed, and it must compute efficiently. To obtain the best computation in the developed search engine, three probabilistic approaches were compared in this study: the Bayesian Networks Model, Divergence from Randomness with the PL2 distribution, and DFR-PL2F, a derivative of DFR-PL2. The three probabilistic approaches are supported by a document index and three different weighting methods: term occurrence, term frequency, and TF-IDF. The experiment involved 12 Sundanese ancient manuscripts containing 474 distinct terms. The developed search engine was tested with 50 random queries for three types of query. The results showed that for both single and multiple queries, the best search performance was given by the combination of the PL2F approach and the TF-IDF weighting method, with an average response time of about 0.08 seconds and a Mean Average Precision (MAP) of about 0.33.
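A minimal hand-rolled sketch of the three weighting methods compared here (binary term occurrence, raw term frequency, and TF-IDF; the exact TF-IDF variant is our assumption, using a log-scaled inverse document frequency):

```python
import math
from collections import Counter

# Toy tokenized "manuscripts" standing in for the real corpus
docs = [["raja", "sunda", "naskah"],
        ["naskah", "kuno", "sunda", "sunda"],
        ["kidung", "kuno"]]

def weights(term, doc, docs):
    tf = Counter(doc)[term]                       # raw term frequency
    occurrence = 1 if tf > 0 else 0               # binary occurrence
    df = sum(1 for d in docs if term in d)        # document frequency
    idf = math.log(len(docs) / df) if df else 0.0
    return occurrence, tf, tf * idf               # the three schemes

for term in ("sunda", "kuno", "kidung"):
    print(term, [weights(term, d, docs) for d in docs])
```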
NASA Astrophysics Data System (ADS)
Cinquini, L.; Bell, G. M.; Williams, D.; Harney, J.
2012-12-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing state-of-the-art services for the management of and access to Earth system data. ESGF is currently used to serve the totality of the model output used for the forthcoming IPCC 5th assessment report on climate change, as well as supporting observational and reanalysis datasets. It has also been adopted by several other projects that focus on global, regional and local climate modeling. The ESGF software stack is composed of several modular applications that cover related but disjoint areas of functionality: data publishing, data search and discovery, data access, user management, security, and federation. Overall, the ESGF infrastructure offers a configurable end-to-end solution to the problem of enabling web-based access to large amounts of geospatial data. This talk will present the architectural and configuration options available to a data provider leveraging ESGF to serve their data: which services to expose, how to scale to larger data collections, how to establish access control, how to customize the user interface, and others. Additionally, the framework provides extension points that allow each site to plug in custom functionality, such as crawling specific metadata repositories, exposing domain-specific analysis and visualization services, and developing custom access clients that interact with the system APIs. These configuration and extension capabilities are based on simple but effective domain-specific object models that underpin the software applications: the data model, the security model, and the federation model. The ESGF software stack is developed collaboratively by software engineers at many institutions around the world, and is made freely available to the community under an open-source license to promote adoption, reuse, inspection and continuous improvement.
Informedia at TRECVID 2003: Analyzing and Searching Broadcast News Video
2004-11-03
browsing interface to browse the top-ranked shots according to the different classifiers. Color and texture based image search engines were also...different classifiers. Color and texture based image search engines were also optimized better performance. This “new” interface was evaluated as
Human Interface to Netcentricity
2006-06-01
experiencing. This is a radically different approach from using a federated search engine to bring back all relevant documents. The search engine...not be any closer to answering their question. More importantly, if they only have access to a federated search, the program does not have the
Chemical-text hybrid search engines.
Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J
2010-01-01
As the amount of chemical literature increases, it is critical that researchers be able to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers at much improved speed and accuracy.
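A minimal sketch of the ECKI step under stated assumptions (RDKit installed; a hypothetical entity recognizer reduced to a name-to-SMILES lookup): each recognized chemical entity is replaced by a canonical keyword, here an InChIKey, before the text reaches an ordinary text indexer, so that "ethanol" and "ethyl alcohol" index to the same token.

```python
from rdkit import Chem  # assumes RDKit is installed

# Hypothetical entity recognizer output: surface form -> SMILES
ENTITIES = {"ethanol": "CCO", "ethyl alcohol": "CCO",
            "aspirin": "CC(=O)Oc1ccccc1C(=O)O"}

def canonicalize(text):
    """Replace recognized chemical names with their InChIKey so all
    synonyms index to one canonical keyword (the step before handing
    the document to a text search engine)."""
    for name, smiles in ENTITIES.items():
        key = Chem.MolToInchiKey(Chem.MolFromSmiles(smiles))
        text = text.replace(name, key)
    return text

print(canonicalize("solubility of ethyl alcohol"))
print(canonicalize("solubility of ethanol"))  # same canonical token
```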
Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.
Arendt, Florian; Scherr, Sebastian
2017-11-01
Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.
ERIC Educational Resources Information Center
Reifschneider, Louis; Kaufman, Peter; Langrehr, Frederick W.; Kaufman, Kristina
2015-01-01
Marketers are criticized for not understanding the steps in the engineering research and development process and the challenges of manufacturing a new product at a profit. Engineers are criticized for not considering the marketability of and customer interest in such a product during the planning stages. With the development of 3D printing, rapid…
Customized Resources | OSTI, US Dept of Energy Office of Scientific and Technical Information
ERIC Educational Resources Information Center
Dahlen, Sarah P. C.; Hanson, Kathlene
2017-01-01
Discovery layers provide a simplified interface for searching library resources. Libraries with limited finances make decisions about retaining indexing and abstracting databases when similar information is available in discovery layers. These decisions should be informed by student success at finding quality information as well as satisfaction…
The Google Online Marketing Challenge: Real Clients, Real Money, Real Ads and Authentic Learning
ERIC Educational Resources Information Center
Miko, John S.
2014-01-01
Search marketing is the process of utilizing search engines to drive traffic to a Web site through both paid and unpaid efforts. One potential paid component of a search marketing strategy is the use of a pay-per-click (PPC) advertising campaign in which advertisers pay search engine hosts only when their advertisement is clicked. This paper…
Information Retrieval for Education: Making Search Engines Language Aware
ERIC Educational Resources Information Center
Ott, Niels; Meurers, Detmar
2010-01-01
Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…
Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment
ERIC Educational Resources Information Center
Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli
2016-01-01
Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…
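A minimal sketch of a classic fusion step such engines can use, assuming normalized relevance scores from each component engine (CombSUM and CombMNZ are standard fusion rules, not necessarily the ones studied here):

```python
from collections import defaultdict

def fuse(ranked_lists, mnz=False):
    """Fuse per-engine {doc: score} dicts. CombSUM sums scores;
    CombMNZ also multiplies by how many engines returned the doc."""
    total, hits = defaultdict(float), defaultdict(int)
    for scores in ranked_lists:
        for doc, s in scores.items():
            total[doc] += s
            hits[doc] += 1
    fused = {d: total[d] * (hits[d] if mnz else 1) for d in total}
    return sorted(fused.items(), key=lambda kv: -kv[1])

# Hypothetical normalized scores from two component engines
a = {"d1": 0.9, "d2": 0.4}
b = {"d2": 0.5, "d3": 0.8}
print(fuse([a, b]))            # CombSUM: d1 0.9, d2 0.9, d3 0.8
print(fuse([a, b], mnz=True))  # CombMNZ: d2 tops at 1.8, rewarded for consensus
```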
NASA Astrophysics Data System (ADS)
Piasecki, M.; Beran, B.
2007-12-01
Search engines have changed the way we see the Internet. The ability to find information by just typing in keywords was a big contribution to the overall web experience. While conventional search engine methodology works well for textual documents, locating scientific data remains a problem since such data are stored in databases not readily accessible to search engine bots. Given the different temporal, spatial and thematic coverage of different databases, it is typically necessary, especially for interdisciplinary research, to work with multiple data sources. These sources can be federal agencies, which generally offer national coverage, or regional sources, which cover a smaller area in higher detail. For a given geographic area of interest, however, there often exists more than one database with relevant data, so being able to query multiple databases simultaneously is a desirable feature that would be tremendously useful for scientists. Developing such a search engine requires dealing with various heterogeneity issues. Scientific databases often impose controlled vocabularies, which ensure that they are generally homogeneous within themselves but semantically heterogeneous when moving between databases. This bounds the possible semantics-related problems, making them easier to solve than in conventional search engines that deal with free text. We have developed a search engine that queries multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity between the underlying systems. The application relies mainly on metadata catalogs or indexing databases, ontologies and web services, with virtual-globe and AJAX technologies for the graphical user interface. Users can trigger a search over dozens of parameters and hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent (i.e., a bounding box), and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging, i.e. linking a variable name (which can be anything they desire) to defined concepts in the ontology structure, which in turn provides the backbone of the search engine.
Kelly, Jason
2012-01-20
A new industry model is emerging where microbes are first developed by specialist organism engineering firms and then deployed by customers in specific application areas. It is now realistic for companies without prior fermentation experience to purchase and deploy an engineered organism to expand their business.
SAMSON Technology Demonstrator
2014-06-01
requested. The SAMSON TD was tested with two different policy engines: 1. A custom XACML-based element matching engine using a MySQL database for...performed during the course of the event. Full information protection across the sphere of access management, information protection and auditing was in...
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
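A minimal sketch of the extrapolation method, with invented numbers: if a word occurs in a known fraction of a representative static corpus, the engine's reported hit count for that word implies an index-size estimate, and aggregating over many words smooths the noise.

```python
import statistics

CORPUS_DOCS = 1_000_000  # size of the static reference corpus (assumed)

# word -> (documents containing it in the corpus, hits reported by the engine)
observations = {
    "kitchen": (12_000, 540_000_000),
    "harvest": (8_000, 350_000_000),
    "violin":  (5_000, 230_000_000),
}

# Each word w gives one estimate: hits(w) / (df_corpus(w) / |corpus|)
estimates = [hits * CORPUS_DOCS / df for df, hits in observations.values()]
print(f"index size ~ {statistics.median(estimates):,.0f} documents")
```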
Cleanroom Software Engineering Reference Model. Version 1.0.
1996-11-01
teams. It also serves as a baseline for continued evolution of Cleanroom practice. The scope of the CRM is software management, specification...addition to project staff, participants include management, peer organization representatives, and customer representatives as appropriate for...2 Review the status of the process with management, the project team, peer groups, and the customer. These verification activities include
Essie: A Concept-based Search Engine for Structured Biomedical Text
Ide, Nicholas C.; Loane, Russell F.; Demner-Fushman, Dina
2007-01-01
This article describes the algorithms implemented in the Essie search engine that is currently serving several Web sites at the National Library of Medicine. Essie is a phrase-based search engine with term and concept query expansion and probabilistic relevancy ranking. Essie’s design is motivated by an observation that query terms are often conceptually related to terms in a document, without actually occurring in the document text. Essie’s performance was evaluated using data and standard evaluation methods from the 2003 and 2006 Text REtrieval Conference (TREC) Genomics track. Essie was the best-performing search engine in the 2003 TREC Genomics track and achieved results comparable to those of the highest-ranking systems on the 2006 TREC Genomics track task. Essie shows that a judicious combination of exploiting document structure, phrase searching, and concept based query expansion is a useful approach for information retrieval in the biomedical domain. PMID:17329729
Use of controlled vocabularies to improve biomedical information retrieval tasks.
Pasche, Emilie; Gobeill, Julien; Vishnyakova, Dina; Ruch, Patrick; Lovis, Christian
2013-01-01
The high heterogeneity of biomedical vocabulary is a major obstacle for information retrieval in large biomedical collections. Therefore, using biomedical controlled vocabularies is crucial for managing these contents. We investigate the impact of query expansion based on controlled vocabularies on the effectiveness of two search engines. Our strategy relies on the enrichment of users' queries with additional terms directly derived from such vocabularies, applied to infectious diseases and chemical patents. We observed that query expansion based on pathogen names improved the top-precision of our first search engine, while the normalization of diseases degraded the top-precision. The expansion of chemical entities, which was performed on the second search engine, positively affected the mean average precision. We have shown that query expansion of some types of biomedical entities has great potential to improve search effectiveness; therefore a fine-tuning of query expansion strategies could help improve the performance of search engines.
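As an illustration of the basic enrichment step, here is a minimal Python sketch of vocabulary-based query expansion; the synonym table is invented, standing in for terms derived from a controlled vocabulary:

    # A minimal sketch of vocabulary-based query expansion; the synonym
    # table is invented for illustration.
    VOCABULARY = {
        "mrsa": ["methicillin-resistant", "staphylococcus", "aureus"],
        "flu": ["influenza"],
    }

    def expand_query(query: str) -> str:
        terms = query.lower().split()
        expanded = list(terms)
        for term in terms:
            # Enrich the query with vocabulary terms mapped to this term.
            expanded.extend(VOCABULARY.get(term, []))
        # dict.fromkeys removes duplicates while preserving term order.
        return " ".join(dict.fromkeys(expanded))

    print(expand_query("MRSA treatment"))
    # -> mrsa treatment methicillin-resistant staphylococcus aureus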
Effective Materials Property Information Management for the 21st Century
NASA Technical Reports Server (NTRS)
Ren, Weiju; Cebon, David; Arnold, Steve
2009-01-01
This paper discusses key principles for the development of materials property information management software systems. There are growing needs for automated materials information management in various organizations. In part these are fueled by the demands for higher efficiency in material testing, product design and engineering analysis. But equally important, organizations are being driven by the need for consistency, quality and traceability of data, as well as control of access to sensitive information such as proprietary data. Further, the use of increasingly sophisticated nonlinear, anisotropic and multi-scale engineering analyses requires both processing of large volumes of test data for development of constitutive models and complex materials data input for Computer-Aided Engineering (CAE) software. And finally, the globalization of economy often generates great needs for sharing a single "gold source" of materials information between members of global engineering teams in extended supply chains. Fortunately, material property management systems have kept pace with the growing user demands and evolved to versatile data management systems that can be customized to specific user needs. The more sophisticated of these provide facilities for: (i) data management functions such as access, version, and quality controls; (ii) a wide range of data import, export and analysis capabilities; (iii) data "pedigree" traceability mechanisms; (iv) data searching, reporting and viewing tools; and (v) access to the information via a wide range of interfaces. In this paper the important requirements for advanced material data management systems, future challenges and opportunities such as automated error checking, data quality characterization, identification of gaps in datasets, as well as functionalities and business models to fuel database growth and maintenance are discussed.
Customizing the JPL Multimission Ground Data System: Lessons learned
NASA Technical Reports Server (NTRS)
Murphy, Susan C.; Louie, John J.; Guerrero, Ana Maria; Hurley, Daniel; Flora-Adams, Dana
1994-01-01
The Multimission Ground Data System (MGDS) at NASA's Jet Propulsion Laboratory has brought improvements and new technologies to mission operations. It was designed as a generic data system to meet the needs of multiple missions and avoid re-inventing capabilities for each new mission and thus reduce costs. It is based on adaptable tools that can be customized to support different missions and operations scenarios. The MGDS is based on a distributed client/server architecture, with powerful Unix workstations, incorporating standards and open system architectures. The distributed architecture allows remote operations and user science data exchange, while also providing capabilities for centralized ground system monitor and control. The MGDS has proved its capabilities in supporting multiple large-class missions simultaneously, including the Voyager, Galileo, Magellan, Ulysses, and Mars Observer missions. The Operations Engineering Lab (OEL) at JPL has been leading Customer Adaptation Training (CAT) teams for adapting and customizing MGDS for the various operations and engineering teams. These CAT teams have typically consisted of only a few engineers who are familiar with operations and with the MGDS software and architecture. Our experience has provided a unique opportunity to work directly with the spacecraft and instrument operations teams and understand their requirements and how the MGDS can be adapted and customized to minimize their operations costs. As part of this work, we have developed workstation configurations, automation tools, and integrated user interfaces at minimal cost that have significantly improved productivity. We have also proved that these customized data systems are most successful if they are focused on the people and the tasks they perform and if they are based upon user confidence in the development team resulting from daily interactions. This paper will describe lessons learned in adapting JPL's MGDS to fly the Voyager, Galileo, and Mars Observer missions. We will explain how powerful, existing ground data systems can be adapted and packaged in a cost effective way for operations of small and large planetary missions. We will also describe how the MGDS was adapted to support operations within the Galileo Spacecraft Testbed. The Galileo testbed provided a unique opportunity to adapt MGDS to support command and control operations for a small autonomous operations team of a handful of engineers flying the Galileo Spacecraft flight system model.
[Study of the health food information for cancer patients on Japanese websites].
Kishimoto, Keiko; Yoshino, Chie; Fukushima, Noriko
2010-08-01
The aim of this paper is to evaluate the reliability of websites providing health food information for cancer patients, and to assess how readily this information can be obtained online. We used four common Japanese search engines (Yahoo!, Google, goo, and MSN) to look up websites on Dec. 2, 2008. The search keywords were "health food" and "cancer". The websites in the first 100 hits returned by each search engine were screened against three conditions. This retrieval yielded 64 unique websites, of which 54 contained information about health food factors. Two scales were used to evaluate the quality of the content on the 54 websites. On the scale of reliability of information on the Web, the average score was 2.69+/-1.70 (maximum 6) and the median was 2.5. The other scale measured whether the items that should be checked in order to use this information safely were listed. On this scale, the average score was 0.72+/-1.22 (maximum 5) and the median was 0. For three of the engines, the correlation between ranking and the latter score was poor, and several top-ranked websites scored 0. The 54 websites were each retrieved by one to four engines, with an average of 1.9 search engines per website. Both scales were positively, but only very weakly, correlated with the number of search engines. Thus, a high ranking and retrieval by multiple search engines were of only minor benefit in picking out more reliable information.
Searching the scientific literature: implications for quantitative and qualitative reviews.
Wu, Yelena P; Aylward, Brandon S; Roberts, Michael C; Evans, Spencer C
2012-08-01
Literature reviews are an essential step in the research process and are included in all empirical and review articles. Electronic databases are commonly used to gather this literature. However, several factors can affect the extent to which relevant articles are retrieved, influencing future research and conclusions drawn. The current project examined articles obtained by comparable search strategies in two electronic archives using an exemplar search to illustrate factors that authors should consider when designing their own search strategies. Specifically, literature searches were conducted in PsycINFO and PubMed targeting review articles on two exemplar disorders (bipolar disorder and attention deficit/hyperactivity disorder) and issues of classification and/or differential diagnosis. Articles were coded for relevance and characteristics of article content. The two search engines yielded significantly different proportions of relevant articles overall and by disorder. Keywords differed across search engines for the relevant articles identified. Based on these results, it is recommended that when gathering literature for review papers, multiple search engines should be used, and search syntax and strategies be tailored to the unique capabilities of particular engines. For meta-analyses and systematic reviews, authors may consider reporting the extent to which different archives or sources yielded relevant articles for their particular review. Copyright © 2012 Elsevier Ltd. All rights reserved.
Guiding Students to Answers: Query Recommendation
ERIC Educational Resources Information Center
Yilmazel, Ozgur
2011-01-01
This paper reports on a guided navigation system built on the textbook search engine developed at Anadolu University to support distance education students. The search engine uses Turkish-language-specific processing modules to enable searches over course material presented in Open Education Faculty textbooks. We implemented a guided…
An Annotated and Federated Digital Library of Marine Animal Sounds
2005-01-01
of the annotations and the relevant segment delimitation points and linkages to other relevant metadata fields; e) search engines that support the...annotators to add information to the same recording, and search engines that permit either all-annotator or specific-annotator searches. To our knowledge
Customizing cell signaling using engineered genetic logic circuits.
Wang, Baojun; Buck, Martin
2012-08-01
Cells live in an ever-changing environment and continuously sense, process and react to environmental signals using their inherent signaling and gene regulatory networks. Recently, there have been great advances in rewiring native cell signaling and gene networks to program cells to sense multiple noncognate signals and integrate them in a logical manner before initiating a desired response. Here, we summarize the current state-of-the-art in engineering synthetic genetic logic circuits to customize cellular signaling behaviors, and discuss their promising applications in biocomputing, environmental, biotechnological and biomedical areas, as well as the remaining challenges in this growing field. Copyright © 2012 Elsevier Ltd. All rights reserved.
TAIPAN fibre feed and spectrograph: engineering overview
NASA Astrophysics Data System (ADS)
Staszak, Nicholas F.; Lawrence, Jon; Zhelem, Ross; Content, Robert; Churilov, Vladimir; Case, Scott; Brown, Rebecca; Hopkins, Andrew M.; Kuehn, Kyler; Pai, Naveen; Klauser, Urs; Nichani, Vijay; Waller, Lew
2016-07-01
TAIPAN will conduct a stellar and galaxy survey of the Southern sky. The TAIPAN positioner is being developed as a prototype for the MANIFEST instrument on the GMT. The TAIPAN Spectrograph is an AAO designed all-refractive 2-arm design that delivers a spectral resolution of R>2000 over the wavelength range 370-870 nm. It is fed by a custom fibre cable from the TAIPAN Starbugs positioner. The design for TAIPAN incorporates 150 optical fibres (with an upgrade path to 300). Presented is an engineering overview of the UKST Fibre Cable design used to support Starbugs, the custom slit design, and the overall design and build plan for the TAIPAN Spectrograph.
Here's the beef: A case study in organizational transformation
NASA Technical Reports Server (NTRS)
Huseonica, William F.; Giardino, Marco J.
1992-01-01
The Science and Technology Lab (STL) is tasked with the design, development, and application of science and engineering services. Formed in the early 1970s, STL adhered to many traditional attitudes, including barriers to communication, excessive management control, parochial strategies, unclear measures of success, lack of customer focus, underutilization of people, and excessive administrative burdens on scientists and engineers. The challenge for the STL was to maximize customer satisfaction through the effective and efficient application of the notable skills and talents of the STL's workforce. In this way, the Lab would begin its exciting journey toward becoming world class. A discussion of this ongoing transformation is presented.
Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process
NASA Technical Reports Server (NTRS)
Motley, Albert E., III
2000-01-01
One of the key elements of successful project management is the establishment of the "right set of requirements": requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.
Performance optimization of an online retailer by a unique online resilience engineering algorithm
NASA Astrophysics Data System (ADS)
Azadeh, A.; Salehi, V.; Salehi, R.; Hassani, S. M.
2018-03-01
Online shopping has become more attractive and competitive in electronic markets. Resilience engineering (RE) can help such systems return to the normal state after encountering unexpected events. This study presents a unique online resilience engineering (ORE) approach for online shopping systems and customer service performance, together with a new ORE algorithm for the performance optimization of an actual online shopping system. The data are collected by standard questionnaires from both expert employees and customers. The problem is then formulated mathematically using data envelopment analysis (DEA). The results show that the design process based on ORE is more efficient than the conventional design approach. Moreover, on-time delivery is the most important factor from the personnel's perspective, while from the customers' view, trust, security and good quality assurance are the most effective factors during transactions. This is the first study to introduce ORE for electronic markets. Second, it investigates the impact of RE on online shopping through DEA and statistical methods. Third, the practical approach employed in this study may be used for similar online shops. Fourth, the results are verified and validated through a complete sensitivity analysis.
Kushniruk, Andre W; Kan, Min-Yem; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L
2002-01-01
This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser, and three conventional search engines: Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio- and video-recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.
Synthetic biology meets tissue engineering.
Davies, Jamie A; Cachat, Elise
2016-06-15
Classical tissue engineering is aimed mainly at producing anatomically and physiologically realistic replacements for normal human tissues. It is done either by encouraging cellular colonization of manufactured matrices or cellular recolonization of decellularized natural extracellular matrices from donor organs, or by allowing cells to self-organize into organs as they do during fetal life. For repair of normal bodies, this will be adequate but there are reasons for making unusual, non-evolved tissues (repair of unusual bodies, interface to electromechanical prostheses, incorporating living cells into life-support machines). Synthetic biology is aimed mainly at engineering cells so that they can perform custom functions: applying synthetic biological approaches to tissue engineering may be one way of engineering custom structures. In this article, we outline the 'embryological cycle' of patterning, differentiation and morphogenesis and review progress that has been made in constructing synthetic biological systems to reproduce these processes in new ways. The state-of-the-art remains a long way from making truly synthetic tissues, but there are now at least foundations for future work. © 2016 Authors; published by Portland Press Limited.
Search without Boundaries Using Simple APIs
Tong, Qi
2009-01-01
The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link resolver, and vendor websites), and each specializes in one thing. How could the library integrate the functionalities of one application with another and provide a single point of entry for users to search across? To improve the user experience, the library launched an effort to integrate the federated search engine into the library's intranet website. The result is a simple search box that leverages the federated search engine's built-in application programming interfaces (APIs). In this article, the author describes how this project demonstrated the power of APIs and their potential to be used by other enterprise search portals inside or outside of the library.
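The pattern described, a plain search box that calls the federated engine's API behind the scenes, might look roughly like the following Python sketch; the endpoint URL, parameter names and JSON response shape are hypothetical, not the vendor's actual API:

    # A minimal sketch of a search box backed by a federated search API.
    # The endpoint and response shape are hypothetical.
    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = "https://fedsearch.example.org/api/search"  # hypothetical

    def federated_search(query: str) -> list:
        url = ENDPOINT + "?" + urllib.parse.urlencode({"q": query, "format": "json"})
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        # Each backend source returns its own hit list; flatten them into
        # one result list for the intranet page.
        return [hit for source in payload.get("sources", [])
                for hit in source.get("hits", [])]

    # Usage: results = federated_search("aquifer depletion")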
Searches Conducted for Engineers.
ERIC Educational Resources Information Center
Lorenz, Patricia
This paper reports an industrial information specialist's experience in performing online searches for engineers and surveys the databases used. Engineers seeking assistance fall into three categories: (1) those who recognize the value of online retrieval; (2) those referred by colleagues; and (3) those who do not seek help. As more successful searches…
DOE Office of Scientific and Technical Information (OSTI.GOV)
None Available
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
"Just the Answers, Please": Choosing a Web Search Service.
ERIC Educational Resources Information Center
Feldman, Susan
1997-01-01
Presents guidelines for selecting World Wide Web search engines. Real-life questions were used to test six search engines. Queries sought company information, product reviews, medical information, foreign information, technical reports, and current events. Compares performance and features of AltaVista, Excite, HotBot, Infoseek, Lycos, and Open…
A Search Engine Features Comparison.
ERIC Educational Resources Information Center
Vorndran, Gerald
Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…
ERIC Educational Resources Information Center
Gupta, Amardeep
2005-01-01
Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…
Creating a Classroom Kaleidoscope with the World Wide Web.
ERIC Educational Resources Information Center
Quinlan, Laurie A.
1997-01-01
Discusses the elements of classroom Web presentations: planning; construction, including design tips; classroom use; and assessment. Lists 14 World Wide Web resources for K-12 teachers; Internet search tools (directories, search engines and meta-search engines); a Web glossary; and an example of HTML for a simple Web page. (PEN)
ERIC Educational Resources Information Center
Pavlu, Virgil
2008-01-01
Today, search engines are embedded into all aspects of digital world: in addition to Internet search, all operating systems have integrated search engines that respond even as you type, even over the network, even on cell phones; therefore the importance of their efficacy and efficiency cannot be overstated. There are many open possibilities for…
A review of the reporting of web searching to identify studies for Cochrane systematic reviews.
Briscoe, Simon
2018-03-01
The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reviews reported using web searching using a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Williams, Sarah C.
2010-01-01
The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…
The Army Digital Terrain Catalog II (ADTC)
2006-06-01
Engineering (Eds.). Readings for Systems Engineering & Engineering Management. Mason, OH: Thomson Custom Publishing, 2004, p. 2. [3] E. von Hippel...responsive, deployable, agile, versatile, lethal, survivable, and sustainable force. --Former Army Chief of Staff General Eric Shinseki and former...to advance the tenets of Army Transformation. As former Army Chief of Staff General Eric Shinseki and former Army Secretary Thomas White have stated
Modeling Primary Atomization Processes
1999-02-01
consumable, catalytic igniter has been shown to provide reliable, reproducible ignition in hydrogen peroxide/polyethylene hybrid engines. Currently, a...verified in a hybrid rocket using hydrogen peroxide as oxidizer and polyethylene as fuel. The engine made use of a unique Consumable Catalytic Bed (CCB...interest to the liquid and hybrid rocket engine community. TECHNOLOGY TRANSFER (Performer / Customer / Result / Application): 1. S. D. Heister, Purdue University
Semantic interpretation of search engine resultant
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.
2018-01-01
In semantics, logical language can be interpreted in various forms, but the certainty of meaning is embedded in uncertainty, which always directly influences the role of technology. One result of this uncertainty applies to search engines as user interfaces to information spaces such as the Web. Therefore, the behaviour of search engine results should be interpreted with certainty through semantic formulation as interpretation. The behaviour formulation shows that there are various interpretations that can be made semantically, whether temporary, inclusive, or repeated.
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions and references contained in the health literature, to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access for navigating through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived by users as informative in 65% of cases (59% for a general-purpose search engine) and as reliable and trustworthy in 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, better explanation of its original functionalities, and adaptability to its audience. Thanks to the evaluation outcomes, WRAPIN is now in exploitation on the HON web site (http://www.healthonnet.org), free of charge. Intended for citizens, it is a good alternative to general-purpose search engines when the user is looking for trustworthy health and medical information or wants to check automatically the doubtful content of a Web page.
Special Gender Studies for Engineering?
ERIC Educational Resources Information Center
Ihsen, Susanne
2005-01-01
Today we are confronted with a new challenge in product development: "Diversity" needs to be implemented in the engineering design and development teams. Such diversity means to "mirror" within the teams the characteristics of different customer groups: the two genders, the different age groups, and the different cultural…
List of DOE radioisotope customers with summary of radioisotope shipments, FY 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burlison, J.S.
1981-08-01
The sixteenth edition of the radioisotope customer list was prepared at the request of the Office of Health and Environmental Research, Office of Energy Research, Department of Energy (DOE). This document lists DOE's radioisotope production and distribution activities by its facilities at Argonne National Laboratory; Pacific Northwest Laboratory; Brookhaven National Laboratory; Hanford Engineering Development Laboratory; Idaho Operations Office; Los Alamos Scientific Laboratory; Mound Facility; Oak Ridge National Laboratory; Savannah River Laboratory; and UNC Nuclear Industries, Inc. The information is divided into five sections: (1) isotope suppliers, facility, contracts and isotopes or services supplied; (2) alphabetical list of customers, and isotopes purchased; (3) alphabetical list of isotopes cross-referenced to customer numbers; (4) geographical location of radioisotope customers; and (5) radioisotope sales and transfers-FY 1980.
List of DOE radioisotope customers with summary of radioisotope shipments, FY 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burlison, J.S.
1980-06-01
The fifteenth edition of the radioisotope customer list was prepared at the request of the Division of Financial Services, Office of the Assistant Secretary for Environment, Department of Energy (DOE). This document lists DOE's radioisotope production and distribution activities by its facilities at Argonne National Laboratory; Pacific Northwest Laboratory; Brookhaven National Laboratory; Hanford Engineering Development Laboratory; Idaho Operations Office; Los Alamos Scientific Laboratory; Mound Facility; Oak Ridge National Laboratory; Rocky Flats Area Office; Savannah River Laboratory; and UNC Nuclear Industries, Inc. The information is divided into five sections: Isotope suppliers, facility, contracts and isotopes or services supplied; alphabetical list of customers, and isotopes purchased; alphabetical list of isotopes cross-referenced to customer numbers; geographical location of radioisotope customers; and radioisotope sales and transfers-FY 1979.
List of DOE radioisotope customers with summary of radioisotope shipments, FY 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burlison, J.S.
1982-09-01
The seventeenth edition of the radioisotope customer list was prepared at the request of the Office of Health and Environmental Research, Office of Energy Research, Department of Energy (DOE). This document lists DOE's radioisotope production and distribution activities by its facilities at Argonne National Laboratory; Pacific Northwest Laboratory; Brookhaven National Laboratory; Hanford Engineering Development Laboratory; Idaho Operations Office; Los Alamos Scientific Laboratory; Mound Facility; Oak Ridge National Laboratory; Savannah River Laboratory; and UNC Nuclear Industries, Inc. The information is divided into five sections: (1) isotope suppliers, facility, contracts and isotopes or services supplied; (2) alphabetical list of customers, and isotopes purchased; (3) alphabetical list of isotopes cross-referenced to customer numbers; (4) geographical location of radioisotope customers; and (5) radioisotope sales and transfers-FY 1981.
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Steinetz, B. M.; Braun, M. J.
2004-01-01
Although forces outside our control shape our industry, turbomachine sealing research, design, and customer agendas established in 1978 by Ludwig, Campbell, and Smith in terms of specific fuel consumption and performance remain as objectives today. Advances have been made because failures of the space shuttle main engine turbomachinery ushered in a new understanding of sealing in high-power-density systems. Further, it has been shown that changes in sealing, especially for high-pressure rotors, dramatically change the performance of the entire engine or turbomachine. Maintaining seal leakages and secondary flows within engine design specifications remains the most efficient and cost effective way to enhance performance and minimize maintenance costs. This three-part review summarizes experiences, ideas, successes, and failures by NASA and the U.S. aerospace industry in secondary flow management in advanced turbomachinery. Part 1 presents system sealing, part 2 system rotordynamics, and part 3 modeling, with some overlap of each part.
SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.
Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart
2011-03-01
The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com) for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OSX. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Evaluating a Federated Medical Search Engine
Saparova, D.; Belden, J.; Williams, J.; Richardson, B.; Schuster, K.
2014-01-01
Background: Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives: To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods: This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results: Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions: The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement. PMID:25298813
Tobacco control in a changing media landscape: how tobacco control programs use the internet.
Emery, Sherry; Aly, Eman H; Vera, Lisa; Alexander, Robert L
2014-03-01
More than 80% of U.S. adults use the Internet, 65% of online adults use social media, and more than 60% use the Internet to find and share health information. State tobacco control campaigns could effectively harness the powerful, inexpensive online messaging opportunities. Characterizing current Internet presence of state-sponsored tobacco control programs is an important first step toward informing such campaigns. A research specialist searched the Internet for state-sponsored tobacco control resources and social media presence for each state in 2010 and 2011, to develop a resource inventory and observe change over 6 months. Data were analyzed and websites coded for interactivity and content between July and October 2011. Although all states have tobacco control websites, content and interactivity of those sites remain limited. State tobacco control program use of social media appears to be increasing over time. Information presented on the Internet by state-sponsored tobacco control programs remains modest and limited in interactivity, customization, and search engine optimization. These programs could take advantage of an important opportunity to communicate with the public about the health effects of tobacco use and available community cessation and prevention resources. © 2013 American Journal of Preventive Medicine Published by American Journal of Preventive Medicine All rights reserved.
Querying archetype-based EHRs by search ontology-based XPath engineering.
Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich
2018-05-11
Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is introduced and evaluated in practice by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering, which reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
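The core idea of resolving a search concept to an XPath over the document structure, rather than matching free text document-wide, can be sketched as follows; the element names and the concept map are invented for illustration and do not reflect the archetype structures used in the paper:

    # A minimal sketch of concept-to-XPath query resolution over an
    # XML-based EHR; element names and the concept map are invented.
    from xml.etree import ElementTree

    CONCEPT_TO_XPATH = {
        "diagnosis": ".//section[@name='diagnosis']/entry",
        "medication": ".//section[@name='medication']/entry",
    }

    def query_ehr(xml_text: str, concept: str, value: str) -> list:
        root = ElementTree.fromstring(xml_text)
        # Constraining the search to the mapped document part avoids the
        # false positives of a document-wide free-text search.
        entries = root.findall(CONCEPT_TO_XPATH[concept])
        return [e.text for e in entries
                if e.text and value.lower() in e.text.lower()]

    doc = ("<ehr><section name='diagnosis'><entry>Type 2 diabetes</entry>"
           "</section><section name='medication'><entry>Metformin</entry>"
           "</section></ehr>")
    print(query_ehr(doc, "diagnosis", "diabetes"))  # -> ['Type 2 diabetes']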
The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies
NASA Astrophysics Data System (ADS)
Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza
2009-09-01
The problem under question in this paper, which is faced by broadcasting companies, is how to benefit from limited advertising space. The problem arises from the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective function is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, which constitute the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we treat it as a constraint in our model. In this paper an algorithm based on scatter search is developed to obtain a good feasible solution. The method uses simulation of customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to show the effectiveness of the proposed method. They also provide insight into the better results of the revenue management control policy compared to a "no sales limit" policy, under which earlier demand is served first.
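The value of a sales-limit control policy can be illustrated with a toy Python simulation; the fares, demand rates and capacity below are invented, and the paper's full model additionally handles cancellation, overbooking and break scheduling:

    # A toy simulation of sales limits for two advertiser classes; all
    # parameters are invented. Low-fare requests arrive before high-fare
    # ones, which is what makes protecting space for high fares pay off.
    import random

    def expected_revenue(low_limit, low_fare=100, high_fare=300,
                         mean_low=20, mean_high=8, capacity=25, runs=5000):
        total = 0.0
        for _ in range(runs):
            # Crude stand-ins for stochastic demand (binomial with the
            # desired mean; a real model would use arrival processes).
            low_demand = sum(random.random() < 0.5 for _ in range(2 * mean_low))
            high_demand = sum(random.random() < 0.5 for _ in range(2 * mean_high))
            # Low-fare bookings are accepted first, up to their sales limit.
            low_sold = min(low_demand, low_limit, capacity)
            # High-fare bookings then take whatever capacity remains.
            high_sold = min(high_demand, capacity - low_sold)
            total += low_fare * low_sold + high_fare * high_sold
        return total / runs

    print(expected_revenue(low_limit=15))  # protect ~10 spots for high fares
    print(expected_revenue(low_limit=25))  # effectively no sales limit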
An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media
2014-06-01
that connect the SD card to the crime. • The second scenario involves a border crossing search by Customs and Border Protection (CBP). In this...marijuana was being grown in the home of Danny Lee Kyllo due to circumstances involving another investigation. Knowing that the indoor growth of marijuana...requested and was issued a warrant to search the home for drugs. Upon execution of the warrant, more than 100 marijuana plants were found and Kyllo was
DRUMS: a human disease related unique gene mutation search engine.
Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan
2011-10-01
With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease related unique gene mutation search engine (DRUMS), a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information were stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases were indexed by the uniform resource identifier from the LSDB or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.
Procuring load curtailment from local customers under uncertainty.
Mijatović, Aleksandar; Moriarty, John; Vogrinc, Jure
2017-08-13
Demand side response (DSR) provides a flexible approach to managing constrained power network assets. This is valuable if future asset utilization is uncertain. However, there may be uncertainty over the process of procuring DSR from customers. In this context we combine probabilistic modelling, simulation and optimization to identify economically optimal procurement policies from heterogeneous customers local to the asset, under chance constraints on the adequacy of the procured DSR. Mathematically this gives rise to a search over permutations, and we provide an illustrative example implementation and case study. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
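A brute-force version of the permutation search might look as follows; the customer curtailment sizes, response probabilities and costs are invented, and a realistic implementation would not rely on exhaustive enumeration:

    # A toy chance-constrained procurement search; customer data are
    # invented: (curtailment in kW, response probability, cost).
    import itertools
    import random

    CUSTOMERS = {"A": (50, 0.90, 10), "B": (80, 0.70, 12), "C": (30, 0.95, 4)}

    def adequacy_prob(procured, need, runs=4000):
        hits = 0
        for _ in range(runs):
            delivered = 0
            for name in procured:
                kw, p, _cost = CUSTOMERS[name]
                if random.random() < p:
                    delivered += kw
            hits += delivered >= need
        return hits / runs

    def cheapest_policy(need=30, alpha=0.9):
        best_cost, best_set = float("inf"), None
        for order in itertools.permutations(CUSTOMERS):
            for k in range(1, len(order) + 1):
                prefix = order[:k]
                cost = sum(CUSTOMERS[n][2] for n in prefix)
                # Chance constraint: procured DSR must cover the need with
                # probability at least alpha.
                if cost < best_cost and adequacy_prob(prefix, need) >= alpha:
                    best_cost, best_set = cost, prefix
        return best_cost, best_set

    print(cheapest_policy())  # -> (4, ('C',)) with these invented numbers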
Re-ranking via User Feedback: Georgetown University at TREC 2015 DD Track
2015-11-20
Jiyun Luo and Hui Yang, Department of Computer Science, Georgetown...involved in a search process, the user and the search engine. In TREC DD, the user is modeled by a simulator, called "jig". The jig and the search engine...simulating user is provided by the TREC 2015 DD Track organizer, and is called "jig". There are 118 search topics in total. For each search topic, a short
COMBATXXI: Usage and Analysis at TACOM
2011-06-20
Outline: Who We Are; Our Equipment; Our Customers; COMBATXXI Model...Research, Development and Engineering Center. Model Overview: Combined Arms Analysis Tool for the 21st Century (COMBATXXI), developed jointly by TRAC-White Sands Missile Range (WSMR) and Marine Corps Combat Development Command
Collection of Medical Original Data with Search Engine for Decision Support.
Orthuber, Wolfgang
2016-01-01
Medicine is becoming more and more complex, and humans can capture total medical knowledge only partially. For specific access, a high-resolution search engine is demonstrated which allows, besides conventional text search, also the search of precise quantitative data of medical findings, therapies and results. Users can define metric spaces ("Domain Spaces", DSs) with all searchable quantitative data ("Domain Vectors", DVs). An implementation of the search engine is online at http://numericsearch.com. In future medicine the doctor could first make a rough diagnosis and check which fine diagnostics (quantitative data) colleagues had collected in such a case. Then the doctor decides about fine diagnostics, and the results are sent (half automatically) to the search engine, which filters a group of patients that best fits these data. In this specific group, the various therapies can be checked together with the associated therapeutic results, like an individual scientific study for the current patient. The statistical (anonymous) results could be used for specific decision support. Conversely, the therapeutic decision (in the best case with later results) could be used to enhance the collection of precise pseudonymous medical original data, which is used for better and better statistical (anonymous) search results.
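In code, such a quantitative search could amount to nearest-neighbour filtering in a user-defined Domain Space, roughly along these lines (the dimensions and values are invented for illustration):

    # A minimal sketch of quantitative ("Domain Vector") search; the
    # dimensions and values are invented for illustration.
    import math

    RECORDS = [
        {"id": 1, "vector": (5.6, 140.0, 37.2)},  # e.g. HbA1c, systolic BP, temperature
        {"id": 2, "vector": (7.9, 155.0, 36.8)},
        {"id": 3, "vector": (8.1, 150.0, 38.5)},
    ]

    def nearest(query, k=2):
        # Rank records by Euclidean distance in the Domain Space; a real
        # engine would normalize each dimension before comparing.
        return sorted(RECORDS, key=lambda r: math.dist(query, r["vector"]))[:k]

    print([r["id"] for r in nearest((8.0, 152.0, 37.0))])  # -> [3, 2]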
Lyceum: A Multi-Protocol Digital Library Gateway
NASA Technical Reports Server (NTRS)
Maa, Ming-Hokng; Nelson, Michael L.; Esler, Sandra L.
1997-01-01
Lyceum is a prototype scalable query gateway that provides a logically central interface to multi-protocol and physically distributed, digital libraries of scientific and technical information. Lyceum processes queries to multiple syntactically distinct search engines used by various distributed information servers from a single logically central interface without modification of the remote search engines. A working prototype (http://www.larc.nasa.gov/lyceum/) demonstrates the capabilities, potentials, and advantages of this type of meta-search engine by providing access to over 50 servers covering over 20 disciplines.
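A gateway of this kind essentially keeps one query-rewriting adapter per backend; the following Python sketch shows the shape of the idea, with invented server names, URLs and parameter syntaxes rather than Lyceum's actual adapters:

    # A minimal sketch of a multi-protocol query gateway; the backends and
    # their query syntaxes are invented.
    import urllib.parse

    ADAPTERS = {
        "serverA": lambda q: ("https://a.example.org/find?"
                              + urllib.parse.urlencode({"query": q})),
        "serverB": lambda q: ("https://b.example.org/cgi/search?"
                              + urllib.parse.urlencode({"terms": q, "max": 20})),
    }

    def build_requests(query: str) -> dict:
        # The remote engines stay unmodified; only the gateway knows each
        # server's request syntax, so adding a server means adding an adapter.
        return {name: adapt(query) for name, adapt in ADAPTERS.items()}

    for name, url in build_requests("hypersonic inlet design").items():
        print(name, url)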
A natural language based search engine for ICD10 diagnosis encoding.
Baud, Robert
2004-01-01
We have developed a multiple-step process for implementing an ICD10 search engine. The complexity of the task has been shown, and we recommend collecting adequate expertise before starting any implementation. Underestimation of the expert time required and inadequate data resources are probable reasons for failure. We also claim that when all conditions are met in terms of resources and availability of expertise, the benefits of a responsive ICD10 search engine will materialize and the investment will be successful.
Start Your Search Engines. Part 2: When Image is Everything, Here are Some Great Ways to Find One
ERIC Educational Resources Information Center
Adam, Anna; Mowers, Helen
2008-01-01
There is no doubt that Google is great for finding images. Simply head to its home page, click the "Images" link, enter criteria in the search box, and--voila! In this article, the authors share some of their other favorite search engines for finding images. To make sure the desired images are available for educational use, consider searching for…
ScienceDirect through SciVerse: a new way to approach Elsevier.
Bengtson, Jason
2011-01-01
SciVerse is the new combined portal from Elsevier that services their ScienceDirect collection, SciTopics, and their Scopus database. Using SciVerse to access ScienceDirect is the specific focus of this review. Along with advanced keyword searching and citation searching options, SciVerse also incorporates a very useful image search feature. The aim seems to be not only to create an interface that provides broad functionality on par with other database search tools that many searchers use regularly but also to create an open platform that could be changed to respond effectively to the needs of customers.
Paying Your Way to the Top: Search Engine Advertising.
ERIC Educational Resources Information Center
Scott, David M.
2003-01-01
Explains how organizations can buy listings on major Web search engines, making it the fastest growing form of advertising. Highlights include two network models, Google and Overture; bidding on phrases to buy as links to use with ads; ad ranking; benefits for small businesses; and paid listings versus regular search results. (LRW)
How Safe Are Kid-Safe Search Engines?
ERIC Educational Resources Information Center
Masterson-Krum, Hope
2001-01-01
Examines search tools available to elementary and secondary school students, both human-compiled and crawler-based, to help direct them to age-appropriate Web sites; analyzes the procedures of search engines labeled family-friendly or kid safe that use filters; and tests the effectiveness of these services to students in school libraries. (LRW)
Improving Web Search for Difficult Queries
ERIC Educational Resources Information Center
Wang, Xuanhui
2009-01-01
Search engines have now become essential tools in all aspects of our life. Although a variety of information needs can be served very successfully, there are still a lot of queries that search engines can not answer very effectively and these queries always make users feel frustrated. Since it is quite often that users encounter such "difficult…
2006-12-01
speed of search engines improves the efficiency of such methods, effectiveness is not improved. The objective of this thesis is to construct and test...interest, users are assisted in finding a relevant set of key terms that will aid the search engines in narrowing, widening, or refocusing a Web search