Web Searching: A Process-Oriented Experimental Study of Three Interactive Search Paradigms.
ERIC Educational Resources Information Center
Dennis, Simon; Bruza, Peter; McArthur, Robert
2002-01-01
Compares search effectiveness when using query-based Internet search via the Google search engine, directory-based search via Yahoo, and phrase-based query reformulation-assisted search via the Hyperindex browser by means of a controlled, user-based experimental study of undergraduates at the University of Queensland. Discusses cognitive load,…
In-context query reformulation for failing SPARQL queries
NASA Astrophysics Data System (ADS)
Viswanathan, Amar; Michaelis, James R.; Cassidy, Taylor; de Mel, Geeth; Hendler, James
2017-05-01
Knowledge bases for decision support systems are growing increasingly complex, through continued advances in data ingest and management approaches. However, humans do not possess the cognitive capabilities to retain a bird's-eye view of such knowledge bases, and may end up issuing unsatisfiable queries to such systems. This work focuses on the implementation of a query reformulation approach for graph-based knowledge bases, specifically designed to support the Resource Description Framework (RDF). The reformulation approach presented is instance- and schema-aware. Thus, in contrast to relaxation techniques found in the state of the art, the presented approach produces in-context query reformulations.
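The distinction this abstract draws between plain relaxation and in-context (instance- and schema-aware) reformulation can be illustrated with a toy sketch. This is a hypothetical model, not the authors' system: a type constraint that matches no instances is generalized along the class hierarchy, but only as far as the instance data can actually satisfy.

```python
# Toy model of instance- and schema-aware query relaxation (hypothetical,
# not the paper's implementation). Patterns are (subject, predicate, object).
SUPERCLASS = {"Sedan": "Car", "Car": "Vehicle"}              # schema knowledge
INSTANCES = {("x1", "type", "Car"), ("x1", "color", "red")}  # instance data

def matches(pattern, triples):
    """Return True if some triple satisfies the pattern ('?' is a wildcard)."""
    return any(all(p == "?" or p == t for p, t in zip(pattern, triple))
               for triple in triples)

def relax(patterns):
    """Generalize unsatisfiable type constraints along the class hierarchy,
    keeping only relaxations the instance data can satisfy (in-context)."""
    relaxed = []
    for s, p, o in patterns:
        while p == "type" and not matches((s, p, o), INSTANCES) and o in SUPERCLASS:
            o = SUPERCLASS[o]          # schema-aware step up the hierarchy
        relaxed.append((s, p, o))
    return relaxed

print(relax([("x1", "type", "Sedan"), ("x1", "color", "red")]))
# ("x1", "type", "Sedan") fails, but its superclass "Car" has instances
```

A pure relaxation technique would generalize blindly up the hierarchy; checking the instance data at each step is what keeps the reformulation "in context".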
How Do Children Reformulate Their Search Queries?
ERIC Educational Resources Information Center
Rutter, Sophie; Ford, Nigel; Clough, Paul
2015-01-01
Introduction: This paper investigates techniques used by children in year 4 (age eight to nine) of a UK primary school to reformulate their queries, and how they use information retrieval systems to support query reformulation. Method: An in-depth study analysing the interactions of twelve children carrying out search tasks in a primary school…
Ontological Approach to Military Knowledge Modeling and Management
2004-03-01
…a federated search mechanism has to reformulate user queries (expressed using the ontology) in the query languages of the different sources (e.g., SQL) … ontologies as a common terminology; a unified query to perform federated search; query processing: ontology mapping to sources to reformulate queries.
ERIC Educational Resources Information Center
Belkin, N. J.; Cool, C.; Kelly, D.; Lin, S. -J.; Park, S. Y.; Perez-Carballo, J.; Sikora, C.
2001-01-01
Reports on the progressive investigation of techniques for supporting interactive query reformulation in the TREC (Text Retrieval Conference) Interactive Track. Highlights include methods of term suggestion; interface design to support different system functionalities; an overview of each year's TREC investigation; and relevance to the development…
ERIC Educational Resources Information Center
Hancock-Beaulieu, Micheline; And Others
1995-01-01
An online library catalog was used to evaluate an interactive query expansion facility based on relevance feedback for the Okapi probabilistic term-weighting retrieval system. A graphical user interface allowed searchers to select candidate terms extracted from relevant retrieved items to reformulate queries. Results suggested that the…
Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun
2007-01-01
Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.
Clinician search behaviors may be influenced by search engine design.
Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul
2010-06-30
Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behavior. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way.
This study provides evidence that different search engine designs are associated with different user search behaviors.
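The breadth-first versus depth-first coding of search actions described in this study can be sketched in a few lines. The log format and resource names below are hypothetical, not the study's actual instrument:

```python
# Hypothetical sketch of coding logged search actions in the spirit of the
# study: an action is "breadth-first" when the clinician switches resource
# while keeping the query, and "depth-first" when the query changes but the
# resource (or task profile) stays the same.
def code_actions(log):
    """log: list of (resource, query) pairs in chronological order."""
    codes = []
    for (prev_res, prev_q), (res, q) in zip(log, log[1:]):
        if res != prev_res and q == prev_q:
            codes.append("breadth-first")
        elif res == prev_res and q != prev_q:
            codes.append("depth-first")
        else:
            codes.append("other")
    return codes

log = [("PubMed", "chest pain"), ("DrugRef", "chest pain"),
       ("DrugRef", "chest pain diagnosis")]
print(code_actions(log))  # ['breadth-first', 'depth-first']
```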
Intelligent Information Retrieval: An Introduction.
ERIC Educational Resources Information Center
Gauch, Susan
1992-01-01
Discusses the application of artificial intelligence to online information retrieval systems and describes several systems: (1) CANSEARCH, from MEDLINE; (2) Intelligent Interface for Information Retrieval (I3R); (3) Gauch's Query Reformulation; (4) Environmental Pollution Expert (EP-X); (5) PLEXUS (gardening); and (6) SCISOR (corporate…
Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track
2015-11-20
Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track. Paul N. Bennett, Microsoft Research, Redmond, USA. …anchor text graph has proven useful in the general realm of query reformulation [2], we sought to quantify the value of extracting key phrases from anchor text in the broader setting of the task understanding track. Given a query, our approach considers a simple method for identifying a relevant…
Federated ontology-based queries over cancer data
2012-01-01
Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline make the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures.
A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included. Conclusions To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
Potential for improvement of population diet through reformulation of commonly eaten foods.
van Raaij, Joop; Hendriksen, Marieke; Verhagen, Hans
2009-03-01
FOOD REFORMULATION: Reformulation of foods is considered one of the key options to achieve population nutrient goals. The compositions of many foods are modified to assist the consumer in bringing his or her daily diet more in line with dietary recommendations. INITIATIVES ON FOOD REFORMULATION: Over the past few years the number of reformulated foods introduced on the European market has increased enormously and it is expected that this trend will continue for the coming years. LIMITS TO FOOD REFORMULATION: Limitations to food reformulation in terms of choice of foods appropriate for reformulation and level of feasible reformulation relate mainly to consumer acceptance, safety aspects, technological challenges and food legislation. IMPACT ON KEY NUTRIENT INTAKE AND HEALTH: The potential impact of reformulated foods on key nutrient intake and health is obvious. Evaluation of the actual impact requires not only regular food consumption surveys, but also regular updates of the food composition table including the compositions of newly launched reformulated foods.
Ephemeral Relevance and User Activities in a Search Session
ERIC Educational Resources Information Center
Jiang, Jiepu
2016-01-01
We study relevance judgment and user activities in a search session. We focus on ephemeral relevance--a contextual measurement regarding the amount of useful information a searcher acquired from a clicked result at a particular time--and two primary types of search activities--query reformulation and click. The purpose of the study is both…
Browsing schematics: Query-filtered graphs with context nodes
NASA Technical Reports Server (NTRS)
Ciccarelli, Eugene C.; Nardi, Bonnie A.
1988-01-01
The early results of a research project to create tools for building interfaces to intelligent systems on the NASA Space Station are reported. One such tool is the Schematic Browser which helps users engaged in engineering problem solving find and select schematics from among a large set. Users query for schematics with certain components, and the Schematic Browser presents a graph whose nodes represent the schematics with those components. The query greatly reduces the number of choices presented to the user, filtering the graph to a manageable size. Users can reformulate and refine the query serially until they locate the schematics of interest. To help users maintain orientation as they navigate a large body of data, the graph also includes nodes that are not matches but provide global and local context for the matching nodes. Context nodes include landmarks, ancestors, siblings, children and previous matches.
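The filtering-plus-context idea behind the Schematic Browser can be sketched minimally. The schematic hierarchy and component sets below are invented for illustration, not the tool's actual data model:

```python
# Sketch of query-filtered graphs with context nodes (hypothetical data):
# show schematics whose components match the query, plus their ancestors so
# the user keeps global orientation within the hierarchy.
TREE = {"root": ["power", "comms"], "power": ["psu-a", "psu-b"],
        "comms": ["antenna"]}
COMPONENTS = {"psu-a": {"relay"}, "psu-b": {"relay", "fuse"},
              "antenna": {"amp"}}

def filtered_graph(query_components):
    """Return (matching nodes, context nodes) for a component query."""
    matches = {n for n, comps in COMPONENTS.items() if query_components <= comps}
    parent = {child: p for p, children in TREE.items() for child in children}
    context = set()
    for n in matches:                  # walk upward to collect ancestor context
        while n in parent:
            n = parent[n]
            context.add(n)
    return matches, context - matches

m, c = filtered_graph({"relay"})
print(m, c)  # matches {'psu-a', 'psu-b'}, context {'power', 'root'}
```

Refining the query (e.g., `{"relay", "fuse"}`) shrinks the match set further while the ancestor context nodes keep the remaining matches anchored in the overall structure.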
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Update time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
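The query types involved can be illustrated with a deliberately naive sketch. The annotations below are hypothetical, and the brute-force scan is O(n) per query; the paper's contribution is achieving logarithmic time for these operations with augmented interval trees:

```python
# Minimal illustration of stabbing queries over stand-off annotations
# (hypothetical data). A real system would back these with an interval tree.
annotations = [(0, 12, "patient"), (13, 30, "chest pain"), (25, 40, "onset")]

def stab(pos):
    """Return labels of annotations whose [start, end) interval contains pos."""
    return [label for start, end, label in annotations if start <= pos < end]

def stab_max(pos, score):
    """Stabbing-max: the hit with the highest score (e.g., the longest span)."""
    hits = [(start, end, label) for start, end, label in annotations
            if start <= pos < end]
    return max(hits, key=score, default=None)

print(stab(27))                                   # ['chest pain', 'onset']
print(stab_max(27, score=lambda a: a[1] - a[0]))  # (13, 30, 'chest pain')
```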
Analyzing Medical Image Search Behavior: Semantics and Prediction of Query Results.
De-Arteaga, Maria; Eggel, Ivan; Kahn, Charles E; Müller, Henning
2015-10-01
Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which makes it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search and proposes potential improvements by suggesting query modifications. For example, many queries contain only a few terms and therefore are not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will have, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and thus can be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis and data of reformulations done by users in the past can aid the development of better search systems, particularly to improve results for novice users. Therefore, this paper gives important ideas to better understand how people search and how to use this knowledge to improve the performance of specialized medical search engines.
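One simple instance of predicting and repairing empty result sets can be sketched as follows. The vocabulary is hypothetical, and a real predictor would use richer term features than mere presence in the collection vocabulary:

```python
# Hypothetical sketch of predicting empty result sets from query terms alone:
# if any term is absent from the collection vocabulary (e.g., a misspelling
# or a non-medical word), a conjunctive query will return nothing.
VOCAB_DF = {"pulmonary": 5400, "embolism": 3100, "ct": 9800}  # term -> doc freq

def predict_empty(query):
    """Predict an empty result list when any term is out of vocabulary."""
    return any(VOCAB_DF.get(t, 0) == 0 for t in query.lower().split())

def suggest(query):
    """Drop the unknown terms: a simple automatic query modification."""
    return " ".join(t for t in query.lower().split() if VOCAB_DF.get(t, 0) > 0)

print(predict_empty("pulmonary embolsim"))  # True (misspelling not in vocab)
print(suggest("pulmonary embolsim ct"))     # 'pulmonary ct'
```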
Assisting Consumer Health Information Retrieval with Query Recommendations
Zeng, Qing T.; Crowell, Jonathan; Plovnick, Robert M.; Kim, Eunjung; Ngo, Long; Dibble, Emily
2006-01-01
Objective: Health information retrieval (HIR) on the Internet has become an important practice for millions of people, many of whom have problems forming effective queries. We have developed and evaluated a tool to assist people in health-related query formation. Design: We developed the Health Information Query Assistant (HIQuA) system. The system suggests alternative/additional query terms related to the user's initial query that can be used as building blocks to construct a better, more specific query. The recommended terms are selected according to their semantic distance from the original query, which is calculated on the basis of concept co-occurrences in medical literature and log data as well as semantic relations in medical vocabularies. Measurements: An evaluation of the HIQuA system was conducted and a total of 213 subjects participated in the study. The subjects were randomized into 2 groups. One group was given query recommendations and the other was not. Each subject performed HIR for both a predefined and a self-defined task. Results: The study showed that providing HIQuA recommendations resulted in statistically significantly higher rates of successful queries (odds ratio = 1.66, 95% confidence interval = 1.16–2.38), although no statistically significant impact on user satisfaction or the users' ability to accomplish the predefined retrieval task was found. Conclusion: Providing semantic-distance-based query recommendations can help consumers with query formation during HIR. PMID:16221944
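Co-occurrence-based semantic distance of the kind HIQuA computes can be approximated with pointwise mutual information. The counts below are invented for illustration and the scoring is a generic stand-in, not HIQuA's actual formula:

```python
# Sketch of co-occurrence-based term recommendation (hypothetical counts):
# terms that frequently co-occur with the query term in a corpus are
# suggested as building blocks for a more specific query.
from math import log

COOCCUR = {("diabetes", "insulin"): 820, ("diabetes", "diet"): 640,
           ("diabetes", "weather"): 3}
FREQ = {"diabetes": 5000, "insulin": 1200, "diet": 2600, "weather": 900}
TOTAL = 100_000  # corpus size (documents)

def pmi(a, b):
    """Pointwise mutual information: higher means semantically closer."""
    co = COOCCUR.get((a, b)) or COOCCUR.get((b, a)) or 0
    if co == 0:
        return float("-inf")
    return log(co * TOTAL / (FREQ[a] * FREQ[b]))

def recommend(term, candidates, k=2):
    """Rank candidate expansion terms by closeness to the original query."""
    return sorted(candidates, key=lambda c: pmi(term, c), reverse=True)[:k]

print(recommend("diabetes", ["insulin", "diet", "weather"]))
# ['insulin', 'diet']
```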
[Common law study of the legal responsibility of health care staff related to drug reformulation].
Reche-Castex, F J; Alonso Herreros, J M
2005-01-01
To analyze the responsibility of health care staff in drug reformulation (change of dose, pharmaceutical form or route of administration of medicinal products) based on the common law of the High Court and the National Court. Search and analysis of common law and legal studies included in the databases "El Derecho", "Difusión Jurídica" and "Indret". Health care staff has obligations of means, not of outcomes, according to the care standards included in the "Lex Artis", which can go beyond mere legal standards. Failure to apply these care standards, denial of assistance or disrespect for the autonomy of the patient can constitute negligent behavior. We found 4 cases in common law. In the two cases in which care standards were complied with, including reformulation, the health care professionals were acquitted, whereas in the other two cases, in which reformulations were not used even though the "Lex Artis" required them, the professionals were convicted. Reformulation of medicinal products, as set forth in the Lex Artis, is a practice accepted by the High Court and the National Court, and failure to use it when scientific knowledge so advises is a cause for conviction.
Multi-field query expansion is effective for biomedical dataset retrieval.
Bouadjenek, Mohamed Reda; Verspoor, Karin
2017-01-01
In the context of the bioCADDIE challenge addressing information retrieval of biomedical datasets, we propose a method for retrieval of biomedical data sets with heterogeneous schemas through query reformulation. In particular, the method proposed transforms the initial query into a multi-field query that is then enriched with terms that are likely to occur in the relevant datasets. We compare and evaluate two query expansion strategies, one based on the Rocchio method and another based on a biomedical lexicon. We then perform a comprehensive comparative evaluation of our method on the bioCADDIE dataset collection for biomedical retrieval. We demonstrate the effectiveness of our multi-field query method compared to two baselines, with MAP improved from 0.2171 and 0.2669 to 0.2996. We also show the benefits of query expansion, where the Rocchio expansion method improves the MAP for our two baselines from 0.2171 and 0.2669 to 0.335. We show that the Rocchio query expansion method slightly outperforms the one based on the biomedical lexicon as a source of terms, with an improvement of roughly 3% for MAP. However, the query expansion method based on the biomedical lexicon is much less resource intensive, since it does not require computation of any relevance feedback set or any initial execution of the query. Hence, in terms of the trade-off between efficiency, execution time and retrieval accuracy, we argue that the query expansion method based on the biomedical lexicon offers the best performance for a prototype biomedical data search engine intended to be used at a large scale. In the official bioCADDIE challenge results, although our approach is ranked seventh in terms of the infNDCG evaluation metric, it ranks second in terms of P@10 and NDCG. Hence, the method proposed here provides overall good retrieval performance in relation to the approaches of other competitors.
Consequently, the observations made in this paper should benefit the development of a Data Discovery Index prototype or the improvement of the existing one. © The Author(s) 2017. Published by Oxford University Press.
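The Rocchio expansion strategy referenced in this entry has a standard formulation that is easy to sketch. The document vectors below are hypothetical, not drawn from the bioCADDIE collection:

```python
# Compact sketch of Rocchio-style query expansion (standard formulation with
# invented toy vectors): the query vector moves toward the centroid of the
# (pseudo-)relevant documents, and the highest-weighted new terms are kept.
def rocchio(query_vec, relevant_docs, alpha=1.0, beta=0.75):
    expanded = {t: alpha * w for t, w in query_vec.items()}
    for doc in relevant_docs:
        for t, w in doc.items():
            expanded[t] = expanded.get(t, 0.0) + beta * w / len(relevant_docs)
    return expanded

query = {"breast": 1.0, "cancer": 1.0}
feedback = [{"cancer": 0.8, "brca1": 0.6, "tumor": 0.5},
            {"brca1": 0.7, "expression": 0.4}]
weights = rocchio(query, feedback)
new_terms = sorted((t for t in weights if t not in query),
                   key=weights.get, reverse=True)
print(new_terms[:2])  # ['brca1', 'tumor'] (top expansion terms)
```

A lexicon-based expansion, by contrast, would look the query terms up in a fixed term resource, avoiding the initial retrieval run that Rocchio needs to build the feedback set.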
Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Beaumont, Bruce; Duerr, Ruth; Hua, Hook
2009-01-01
This slide presentation reviews a Space-time query system that has been developed to assist the user in finding Earth science data that fulfills the researcher's needs. It reviews the reasons why finding Earth science data can be so difficult, and explains the workings of the Space-Time Query with OpenSearch and how this system can assist researchers in finding the required data. It also reviews the developments with client-server systems.
SPARQL Assist language-neutral query composer
2012-01-01
Background SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. Results We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. Conclusions To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources. PMID:22373327
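The label-based suggestion mechanism can be sketched minimally. The GO identifiers below are real ontology terms, but the lookup structure is an invented stand-in for SPARQL Assist's internals:

```python
# Sketch of multilingual, label-based type-ahead completion (hypothetical
# data model): opaque term identifiers are matched through human-readable
# labels, so the user types readable text and the query receives the IRI.
LABELS = {"GO:0008150": ["biological process", "processus biologique"],
          "GO:0003674": ["molecular function", "fonction moléculaire"]}

def complete(prefix):
    """Return identifiers whose labels (in any language) match the prefix."""
    prefix = prefix.lower()
    return sorted(iri for iri, labels in LABELS.items()
                  if any(label.lower().startswith(prefix) for label in labels))

print(complete("molec"))  # ['GO:0003674']
print(complete("proc"))   # ['GO:0008150'] (matched via the French label)
```

A production system would additionally rank suggestions using the ontology's semantics and the partially built query, as the abstract describes.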
What can the food and drink industry do to help achieve the 5% free sugars goal?
Gibson, Sigrid; Ashwell, Margaret; Arthur, Jenny; Bagley, Lindsey; Lennox, Alison; Rogers, Peter J; Stanner, Sara
2017-07-01
To contribute evidence and make recommendations to assist in achieving free sugars reduction, with due consideration to the broader picture of weight management and dietary quality. An expert workshop in July 2016 addressed options outlined in the Public Health England report 'Sugar reduction: The evidence for action' that related directly to the food industry. Panel members contributed expertise in food technology, public health nutrition, marketing, communications, psychology and behaviour. Recommendations were directed towards reformulation, reduced portion sizes, labelling and consumer education. These were evaluated based on their feasibility, likely consumer acceptability, efficacy and cost. The panel agreed that the 5% target for energy from free sugars is unlikely to be achievable by the UK population in the near future, but a gradual reduction from the average current level of intake is feasible. Progress requires collaborations between government, food industry, non-government organisations, health professionals, educators and consumers. Reformulation should start with the main contributors of free sugars in the diet, prioritising those products high in free sugars and relatively low in micronutrients. There is most potential for replacing free sugars in beverages using high-potency sweeteners and possibly via gradual reduction in sweetness levels. However, reformulation alone, with its inherent practical difficulties, will not achieve the desired reduction in free sugars. Food manufacturers and the out-of-home sector can help consumers by providing smaller portions. Labelling of free sugars would extend choice and encourage reformulation; however, government needs to assist industry by addressing current analytical and regulatory problems. There are also opportunities for multi-agency collaboration to develop tools/communications based on the Eatwell Guide, to help consumers understand the principles of a varied, healthy, balanced diet.
Multiple strategies will be required to achieve a reduction in free sugars intake to attain the 5% energy target. The panel produced consensus statements with recommendations as to how this might be achieved.
Magnusson, Roger; Reeve, Belinda
2015-01-01
Strategies to reduce excess salt consumption play an important role in preventing cardiovascular disease, which is the largest contributor to global mortality from non-communicable diseases. In many countries, voluntary food reformulation programs seek to reduce salt levels across selected product categories, guided by aspirational targets to be achieved progressively over time. This paper evaluates the industry-led salt reduction programs that operate in the United Kingdom and Australia. Drawing on theoretical concepts from the field of regulatory studies, we propose a step-wise or “responsive” approach that introduces regulatory “scaffolds” to progressively increase levels of government oversight and control in response to industry inaction or under-performance. Our model makes full use of the food industry’s willingness to reduce salt levels in products to meet reformulation targets, but recognizes that governments remain accountable for addressing major diet-related health risks. Creative regulatory strategies can assist governments to fulfill their public health obligations, including in circumstances where there are political barriers to direct, statutory regulation of the food industry. PMID:26133973
Understanding PubMed user search behavior through log analysis.
Islamaj Dogan, Rezarta; Murray, G Craig; Névéol, Aurélie; Lu, Zhiyong
2009-01-01
This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed is a free service providing researchers with access to more than 19 million citations for biomedical articles from MEDLINE and life science journals. It is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the biomedical literature relating to their own research. This study provides insight into PubMed users' needs and their behavior. This investigation was conducted through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease. Use of abbreviations in queries is very frequent. Factors such as result set size influence users' decisions. Analysis of characteristics such as these plays a critical role in identifying users' information needs and their search habits. In turn, such an analysis also provides useful insight for improving biomedical information retrieval. Database URL: http://www.ncbi.nlm.nih.gov/PubMed.
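The session-level metrics this study reports (session counts, query counts, reformulation frequency) can be sketched as a simple log aggregation. The log records and field layout below are invented for illustration; real PubMed logs have a far richer schema.

```python
from collections import defaultdict

# Hypothetical log records: (session_id, query) pairs, in arrival order.
log = [
    ("s1", "BRCA1"),
    ("s1", "BRCA1 breast cancer"),   # reformulation within session s1
    ("s2", "smith j[author]"),
    ("s3", "diabetes"),
    ("s3", "diabetes type 2"),       # reformulation within session s3
    ("s3", "diabetes type 2 treatment"),
]

# Group queries by session, preserving order.
sessions = defaultdict(list)
for session_id, query in log:
    sessions[session_id].append(query)

total_queries = len(log)
total_sessions = len(sessions)

# A reformulation here = a query that differs from the previous query in
# the same session (a crude proxy for the behaviour measured in the study).
reformulations = sum(
    1
    for queries in sessions.values()
    for prev, cur in zip(queries, queries[1:])
    if cur != prev
)

print(total_sessions, total_queries, reformulations)  # 3 sessions, 6 queries, 3 reformulations
```

Scaled to 23 million sessions, the same per-session grouping would be done in a streaming or map-reduce fashion rather than in memory, but the counting logic is unchanged.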
Assistant Superintendent Hiring Criteria Used by Golf Course Superintendents
ERIC Educational Resources Information Center
Schlossberg, Maxim J.; Greene, Wilmot; Karnok, Keith J.
2004-01-01
Of the many opportunities available upon graduating, most turfgrass management/turfgrass science students seek assistant golf course superintendent positions. By tradition, faculty are responsible for preparing graduates to serve as capable assistant superintendents. Moreover, faculty are queried for guidance on how to best compete for these…
A Semantic Basis for Proof Queries and Transformations
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen W.; Luth, Christoph
2013-01-01
We extend the query language PrQL, designed for inspecting machine representations of proofs, to also allow transformation of proofs. PrQL natively supports hiproofs which express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using an abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that
Intervening to Reduce Suicide Risk in Veterans with Substance Use Disorders
2015-01-01
Other Support for Dr. Valenstein: 1. Effort ended on NIH/ NIA P01 AG031098 (PI – Cutler). 2. Effort ended on DoD W81XWH-11-2-0059 (PI – Hunt). 3...Effort started on VA QUERI RRP 12-511 (PI – Zivin). 19. Effort started on VA QUERI RRP 12- 505 (PI – Pfeiffer). OTHER SUPPORT VALENSTEIN...Pfeiffer, P.) 11/01/13 – 02/28/15 1.2 calendar VA MH-QUERI RRP 12- 505 $90,942 Technology-assisted peer support for recently hospitalized
Architecture for knowledge-based and federated search of online clinical evidence.
Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H
2005-10-24
It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Clinicians performed 1662 searches over the trial. The average search duration was 4.9 +/- 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. 
Furthermore, despite the additional effort required to incorporate the capabilities of each individual source (to improve the quality of search results), system maintenance requires only a small additional overhead.
Hybrid ontology for semantic information retrieval model using keyword matching indexing system.
Uthayan, K R; Mala, G S Anandha
2015-01-01
An ontology is an explicit specification of the concepts of an information domain, shared by a group of users. Incorporating ontology into information retrieval is an established way to improve the relevance of the information users retrieve. Matching keywords against historical or domain-specific information is central to current approaches for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active research area in computer science and artificial intelligence; in text matching, it is more reliable to model semantics and evaluate queries under conditions of semantic matching. This research evaluates semantic matching between input queries and the information in the ontology. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information domain. The queries and information domain are focused on semantic matching, to discover the best match and to improve query execution. In conclusion, the hybrid ontology in the semantic web retrieves documents more effectively than standard ontology-based retrieval. PMID:25922851
A survey of the reformulation of Australian child-oriented food products.
Savio, Stephanie; Mehta, Kaye; Udell, Tuesday; Coveney, John
2013-09-11
Childhood obesity is one of the most pressing public health challenges of the 21st century. Reformulating commonly eaten food products is a key emerging strategy to improve the food supply and help address rising rates of obesity and chronic disease. This study aimed to monitor reformulation of Australian child-oriented food products (products marketed specifically to children) from 2009-2011. In 2009, all child-oriented food products in a large supermarket in metropolitan Adelaide were identified. These baseline products were followed up in 2011 to identify products still available for sale. Nutrient content data were collected from Nutrient Information Panels in 2009 and 2011. Absolute and percentage change in nutrient content were calculated for energy, total fat, saturated fat, sugars, sodium and fibre. Data were descriptively analysed to examine reformulation in individual products, in key nutrients, within product categories and across all products. Two methods were used to assess the extent of reformulation; the first involved assessing percentage change in single nutrients over time, while the second involved a set of nutrient criteria to assess changes in overall healthiness of products over time. Of 120 products, 40 remained unchanged in nutrient composition from 2009-2011 and 80 underwent change. The proportions of positively and negatively reformulated products were similar for most nutrients surveyed, with the exception of sodium. Eighteen products (15%) were simultaneously positively and negatively reformulated for different nutrients. Using percentage change in nutrient content to assess extent of reformulation, nearly half (n = 53) of all products were at least moderately reformulated and just over one third (n = 42) were substantially reformulated. The nutrient criteria method revealed 5 products (6%) that were positively reformulated and none that had undergone negative reformulation. 
Positive and negative reformulation was observed to a similar extent within the sample indicating little overall improvement in healthiness of the child-oriented food supply from 2009-2011. In the absence of agreed reformulation standards, the extent of reformulation was assessed against criteria developed specifically for this project. While arbitrary in nature, these criteria were based on reasonable assessment of the meaningfulness of reformulation and change in nutrient composition. As well as highlighting nutrient composition changes in a number of food products directed to children, this study emphasises the need to develop comprehensive, targeted and standardised reformulation benchmarks to assess the extent of reformulation occurring in the food supply.
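The study's core measurement (absolute and percentage change in each nutrient between the 2009 and 2011 Nutrient Information Panels) reduces to simple arithmetic per product. The product values below are invented for illustration, not taken from the study's data.

```python
# Invented Nutrient Information Panel values for one hypothetical product.
product_2009 = {"energy_kj": 1800, "sodium_mg": 450, "sugars_g": 28.0}
product_2011 = {"energy_kj": 1750, "sodium_mg": 380, "sugars_g": 28.0}

for nutrient, old in product_2009.items():
    new = product_2011[nutrient]
    absolute_change = new - old
    percent_change = 100.0 * absolute_change / old
    # A negative change is "positive reformulation" for energy, sodium, sugars;
    # the direction would be reversed for a desirable nutrient such as fibre.
    print(f"{nutrient}: {absolute_change:+} ({percent_change:+.1f}%)")
```

Thresholds on the percentage change (e.g. how large a change counts as "moderate" or "substantial" reformulation) are exactly the kind of criteria the authors note had to be defined for the project in the absence of agreed standards.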
Are Microsoft's Animated Interface Agents Helpful?
ERIC Educational Resources Information Center
Head, Allison J.
1998-01-01
Discusses interface agents and online help systems, focusing on Microsoft's animated office assistants. Highlights include intermediaries such as librarians in off-line reference problems; user complaints about online help systems; navigation problems; evaluation of the online office assistants; and categories of user queries to online help…
Assistant for Specifying Quality Software (ASQS) Mission Area Analysis
1990-12-01
somewhat arbitrary, it was a reasonable and fast approach for partitioning the mission and software domains. The MAD builds on work done by Boeing Aerospace...Reliability ++ Reliability +++ Response 2: NO Discussion: A NO response implies intermittent burns -- most likely to perform attitude control functions...Propulsion Reliability +++ Reliability ++ 4-15 4.8.3 Query BT.3 Query: For intermittent thruster firing requirements, will the average burn time be less than
Reducing calorie sales from supermarkets - 'silent' reformulation of retailer-brand food products.
Jensen, Jørgen Dejgård; Sommer, Iben
2017-08-23
Food product reformulation is seen as one among several tools to promote healthier eating. Reformulating the recipe for a processed food, e.g. reducing the fat, sugar or salt content of the foods, or increasing the content of whole-grains, can help the consumers to pursue a healthier life style. In this study, we evaluate the effects on calorie sales of a 'silent' reformulation strategy, where a retail chain's private-label brands are reformulated to a lower energy density without making specific claims on the product. Using an ecological study design, we analyse 52 weeks' sales data - enriched with data on products' energy density - from a Danish retail chain. Sales of eight product categories were studied. Within each of these categories, specific products had been reformulated during the 52 weeks data period. Using econometric methods, we decompose the changes in calorie turnover and sales value into direct and indirect effects of product reformulation. For all considered products, the direct effect of product reformulation was a reduction in the sale of calories from the respective product categories - between 0.5 and 8.2%. In several cases, the reformulation led to indirect substitution effects that were counterproductive with regard to reducing calorie turnover. However, except in two insignificant cases, these indirect substitution effects were dominated by the direct effect of the reformulation, leading to net reductions in calorie sales between -3.1 and 7.5%. For all considered product reformulations, the reformulation had either positive, zero or very moderate negative effects on the sales value of the product category to which the reformulated product belonged. Based on these findings, 'silent' reformulation of retailer's private brands towards lower energy density seems to contribute to lowering the calorie intake in the population (although to a moderate extent) with moderate losses in retailer's sales revenues.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
A collection of presentation visuals associated with the companion report entitled KARL: A Knowledge-Assisted Retrieval Language, is presented. Information is given on data retrieval, natural language database front ends, generic design objectives, processing capabilities and the query processing cycle.
ERIC Educational Resources Information Center
Johnson, Genevieve Marie
2013-01-01
Twelve special education teachers and teacher assistants who have instructional experience using iPads with children with special needs completed a survey that queried their practices and perceptions. In general, teachers and assistants were extremely positive about the value of iPads for children with special needs, particularly for children with…
McNaughton, Emily C; Coplan, Paul M; Black, Ryan A; Weber, Sarah E; Chilcoat, Howard D; Butler, Stephen F
2014-01-01
Background Reformulating opioid analgesics to deter abuse is one approach toward improving their benefit-risk balance. To assess sentiment and attempts to defeat these products among difficult-to-reach populations of prescription drug abusers, evaluation of posts on Internet forums regarding reformulated products may be useful. A reformulated version of OxyContin (extended-release oxycodone) with physicochemical properties to deter abuse presented an opportunity to evaluate posts about the reformulation in online discussions. Objective The objective of this study was to use messages on Internet forums to evaluate reactions to the introduction of reformulated OxyContin and to identify methods aimed to defeat the abuse-deterrent properties of the product. Methods Posts collected from 7 forums between January 1, 2008 and September 30, 2013 were evaluated before and after the introduction of reformulated OxyContin on August 9, 2010. A quantitative evaluation of discussion levels across the study period and a qualitative coding of post content for OxyContin and 2 comparators for the 26 month period before and after OxyContin reformulation were conducted. Product endorsement was estimated for each product before and after reformulation as the ratio of endorsing-to-discouraging posts (ERo). Post-to-preintroduction period changes in ERos (ie, ratio of ERos) for each product were also calculated. Additionally, post content related to recipes for defeating reformulated OxyContin were evaluated from August 9, 2010 through September 2013. Results Over the study period, 45,936 posts related to OxyContin, 18,685 to Vicodin (hydrocodone), and 23,863 to Dilaudid (hydromorphone) were identified. 
The proportion of OxyContin-related posts fluctuated between 6.35 and 8.25 posts per 1000 posts before the reformulation, increased to 10.76 in Q3 2010 when reformulated OxyContin was introduced, and decreased from 9.14 in Q4 2010 to 3.46 in Q3 2013 in the period following the reformulation. The sentiment profile for OxyContin changed following reformulation; the post-to-preintroduction change in the ERo indicated reformulated OxyContin was discouraged significantly more than the original formulation (ratio of ERos=0.43, P<.001). A total of 37 recipes for circumventing the abuse-deterrent characteristics of reformulated OxyContin were observed; 32 were deemed feasible (ie, able to abuse). The frequency of posts reporting abuse of reformulated OxyContin via these recipes was low and decreased over time. Among the 5677 posts mentioning reformulated OxyContin, 825 posts discussed recipes and 498 reported abuse of reformulated OxyContin by such recipes (41 reported injecting and 128 reported snorting). Conclusions After introduction of physicochemical properties to deter abuse, changes in discussion of OxyContin on forums occurred, reflected by a reduction in discussion levels and endorsing content. Despite discussion of recipes, there is a relatively small proportion of reported abuse of reformulated OxyContin via recipes, particularly by injecting or snorting routes. Analysis of Internet discussion is a valuable tool for monitoring the impact of abuse-deterrent formulations. PMID:24800858
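The endorsement ratio (ERo) and its post-to-preintroduction change used in this study are plain ratios of post counts. The counts below are invented for illustration (chosen so the ratio of ERos lands near the reported 0.43); they are not the study's data.

```python
def endorsement_ratio(endorsing: int, discouraging: int) -> float:
    """ERo: ratio of endorsing to discouraging posts for one product and period."""
    return endorsing / discouraging

# Invented post counts for illustration.
ero_pre = endorsement_ratio(endorsing=300, discouraging=100)   # ERo before reformulation: 3.0
ero_post = endorsement_ratio(endorsing=129, discouraging=100)  # ERo after reformulation: 1.29

# Ratio of ERos: values below 1 mean the reformulated product was
# discouraged more, relative to endorsed, than the original formulation.
ratio_of_eros = ero_post / ero_pre
print(round(ratio_of_eros, 2))
```

Computing the same ratio for the comparator products (Vicodin, Dilaudid) over the same periods is what lets the study attribute the sentiment shift to the reformulation rather than to forum-wide trends.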
Gauging interest of the general public in laser-assisted in situ keratomileusis eye surgery.
Stein, Joshua D; Childers, David M; Nan, Bin; Mian, Shahzad I
2013-07-01
To assess interest among members of the general public in laser-assisted in situ keratomileusis (LASIK) surgery and how levels of interest in this procedure have changed over time in the United States and other countries. Using the Google Trends Web site, we determined the weekly frequency of queries involving the term "LASIK" from January 1, 2007, through January 1, 2011, in the United States, United Kingdom, Canada, and India. We fit separate regression models for each of the countries to assess whether residents of these countries differed in their querying rates on specific dates and over time. Similar analyses were performed to compare 4 US states. Additional regression models compared general public interest in LASIK surgery before and after the release of a 2008 Food and Drug Administration report describing complaints associated with this procedure. During 2007 to 2011, the Google query rate for "LASIK" was highest among persons residing in India, followed by the United Kingdom, Canada, and the United States. During this time period, the query rate declined by 40% in the United States, 24% in India, and 22% in the United Kingdom, and it increased by 8% in Canada. In all 4 of the US states examined, the query rate declined: by 52% in Florida, 56% in New York, 54% in Texas, and 42% in California. Interest in LASIK declined further among US citizens after the Food and Drug Administration report release. Interest among the general public in LASIK surgery has been waning in recent years.
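The percentage declines quoted in this abstract are start-to-end changes in relative query volume over the study window. The index values below are invented (Google Trends reports a 0-100 relative-interest index, not absolute query counts).

```python
def percent_change(start: float, end: float) -> float:
    """Percentage change in query rate from the start to the end of the period."""
    return 100.0 * (end - start) / start

# Invented relative-search-volume index values for illustration:
# a drop from 75 to 45 corresponds to the 40% US decline reported above.
us_start, us_end = 75.0, 45.0
print(round(percent_change(us_start, us_end)))
```

The study's regression models fit trends over all weekly observations rather than comparing only the endpoints, which makes the estimates robust to week-to-week noise in the index.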
Adoption and Design of Emerging Dietary Policies to Improve Cardiometabolic Health in the US.
Huang, Yue; Pomeranz, Jennifer; Wilde, Parke; Capewell, Simon; Gaziano, Tom; O'Flaherty, Martin; Kersh, Rogan; Whitsel, Laurie; Mozaffarian, Dariush; Micha, Renata
2018-04-14
Suboptimal diet is a leading cause of cardiometabolic disease and economic burdens. Evidence-based dietary policies within five domains (food prices, reformulation, marketing, labeling, and government food assistance programs) appear promising at improving cardiometabolic health. Yet, the extent of new dietary policy adoption in the US and key elements crucial to define in designing such policies are not well established. We created an inventory of recent US dietary policy cases aiming to improve cardiometabolic health and assessed the extent of their proposal and adoption at federal, state, local, and tribal levels; and categorized and characterized the key elements in their policy design. Recent federal dietary policies adopted to improve cardiometabolic health include reformulation (trans-fat elimination), marketing (mass-media campaigns to increase fruits and vegetables), labeling (Nutrition Facts Panel updates, menu calorie labeling), and food assistance programs (financial incentives for fruits and vegetables in the Supplemental Nutrition Assistance Program (SNAP) and Women, Infants, and Children (WIC) program). Federal voluntary guidelines have been proposed for sodium reformulation and food marketing to children. Recent state proposals included sugar-sweetened beverage (SSB) taxes, marketing restrictions, and SNAP restrictions, but few were enacted. Local efforts varied significantly, with certain localities consistently leading in the proposal or adoption of relevant policies. Across all jurisdictions, most commonly selected dietary targets included fruits and vegetables, SSBs, trans-fat, added sugar, sodium, and calories; other healthy (e.g., nuts) or unhealthy (e.g., processed meats) factors were largely not addressed. 
Key policy elements to define in designing these policies included those common across domains (e.g., level of government, target population, dietary target, dietary definition, implementation mechanism), and domain-specific (e.g., media channels for food marketing domain) or policy-specific (e.g., earmarking for taxes) elements. Characteristics of certain elements were similarly defined (e.g., fruit and vegetable definition, warning language used in SSB warning labels), while others varied across cases within a policy (e.g., tax base for SSB taxes). Several key elements were not always sufficiently characterized in government documents, and dietary target selections and definitions did not consistently align with the evidence-base. These findings highlight recent action on dietary policies to improve cardiometabolic health in the US; and key elements necessary to design such policies.
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence
Walther, Martin; Nguyen, Ken; Lovell, Nigel H
2005-01-01
Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. Methods A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture used a universal query language to reformulate queries internally and meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Results Clinicians performed 1662 searches over the trial. The average search duration was 4.9 ± 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. Conclusions The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability.
Furthermore, despite the additional effort required to incorporate the capabilities of each individual source (to improve the quality of search results), system maintenance requires only a small additional overhead. PMID:16403716
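The wrapper-mediator design described above can be illustrated with a minimal sketch: a mediator holds one wrapper per source, and each wrapper reformulates a universal query into that source's native syntax. All class names, source names, and query fields below are illustrative assumptions, not the Quick Clinical API.

```python
# Hedged sketch of a wrapper-mediator federated search layer, loosely
# modelled on the architecture described above. Names are assumptions.

class Wrapper:
    """Translates a universal query into one source's native syntax."""
    def __init__(self, name, reformulate):
        self.name = name
        self.reformulate = reformulate  # callable: query dict -> native query string

class Mediator:
    """Fans a universal query out to every registered source wrapper."""
    def __init__(self):
        self.wrappers = []

    def register(self, wrapper):
        self.wrappers.append(wrapper)

    def search(self, universal_query):
        # The real system would send each native query over HTTP and merge
        # results; here we only return the per-source reformulations.
        return {w.name: w.reformulate(universal_query) for w in self.wrappers}

mediator = Mediator()
mediator.register(Wrapper("pubmed", lambda q: f'{q["term"]}[MeSH] AND {q["filter"]}'))
mediator.register(Wrapper("guidelines", lambda q: q["term"]))
plans = mediator.search({"term": "asthma", "filter": "therapy"})
```

Adding a new evidence source then amounts to registering one more wrapper, which is the maintainability property the paper's conclusion emphasizes.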
Software Helps Retrieve Information Relevant to the User
NASA Technical Reports Server (NTRS)
Mathe, Natalie; Chen, James
2003-01-01
The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for the target platform is necessary for creating the ARNIE library but is not necessary for the execution of the software.
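The feedback-driven adaptation described above can be sketched in miniature: per-user association weights between queries and documents are nudged toward explicit relevance judgments and then used for ranking. The update rule below is an assumption made for illustration; ARNIE itself uses a neural network configured by a genetic algorithm.

```python
# Minimal sketch of relevance-feedback learning in the spirit of a
# "relevance network". The additive update rule is an assumption.
from collections import defaultdict

class RelevanceNetwork:
    def __init__(self, learning_rate=0.5):
        self.lr = learning_rate
        self.weights = defaultdict(float)   # (query, doc) -> relevance score

    def feedback(self, query, doc, relevant):
        # Move the association weight toward 1 (relevant) or 0 (not).
        target = 1.0 if relevant else 0.0
        key = (query, doc)
        self.weights[key] += self.lr * (target - self.weights[key])

    def rank(self, query, docs):
        # Order candidate documents by learned association strength.
        return sorted(docs, key=lambda d: self.weights[(query, d)], reverse=True)

net = RelevanceNetwork()
net.feedback("fuel rules", "doc_cfr", True)
net.feedback("fuel rules", "doc_misc", False)
order = net.rank("fuel rules", ["doc_misc", "doc_cfr"])
```

After one round of feedback the positively judged document already ranks first for that query, which is the incremental adaptation the abstract describes.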
40 CFR 80.46 - Measurement of reformulated gasoline fuel parameters.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Measurement of reformulated gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.46 Measurement of reformulated gasoline fuel parameters. (a) Sulfur. Sulfur content of gasoline and butane must...
A natural language query system for Hubble Space Telescope proposal selection
NASA Technical Reports Server (NTRS)
Hornick, Thomas; Cohen, William; Miller, Glenn
1987-01-01
The proposal selection process for the Hubble Space Telescope is assisted by a robust and easy to use query program (TACOS). The system parses an English subset language sentence regardless of the order of the keyword phrases, allowing the user greater flexibility than a standard command query language. Capabilities for macro and procedure definition are also integrated. The system was designed for flexibility in both use and maintenance. In addition, TACOS can be applied to any knowledge domain that can be expressed in terms of a single relation. The system was implemented mostly in Common LISP. The TACOS design is described in detail, with particular attention given to the implementation methods of sentence processing.
Design of an intelligent information system for in-flight emergency assistance
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Karamouzis, Stamos
1991-01-01
The present research has as its goal the development of AI tools to help flight crews cope with in-flight malfunctions. The relevant tasks in such situations include diagnosis, prognosis, and recovery plan generation. Investigation of the information requirements of these tasks has shown that the determination of paths figures largely: what components or systems are connected to what others, how are they connected, whether connections satisfying certain criteria exist, and a number of related queries. The formulation of such queries frequently requires capabilities of the second-order predicate calculus. An information system is described that features second-order logic capabilities, and is oriented toward efficient formulation and execution of such queries.
Terminology issues in user access to Web-based medical information.
McCray, A. T.; Loane, R. F.; Browne, A. C.; Bangalore, A. K.
1999-01-01
We conducted a study of user queries to the National Library of Medicine Web site over a three month period. Our purpose was to study the nature and scope of these queries in order to understand how to improve users' access to the information they are seeking on our site. The results show that the queries are primarily medical in content (94%), with only a small percentage (5.5%) relating to library services, and with a very small percentage (0.5%) not being medically relevant at all. We characterize the data set, and conclude with a discussion of our plans to develop a UMLS-based terminology server to assist NLM Web users. PMID:10566330
Advances in nowcasting influenza-like illness rates using search query logs
NASA Astrophysics Data System (ADS)
Lampos, Vasileios; Miller, Andrew C.; Crossan, Steve; Stefansen, Christian
2015-08-01
User-generated content can assist epidemiological surveillance in the early detection and prevalence estimation of infectious diseases, such as influenza. Google Flu Trends embodies the first public platform for transforming search queries to indications about the current state of flu in various places all over the world. However, the original model significantly mispredicted influenza-like illness rates in the US during the 2012-13 flu season. In this work, we build on the previous modeling attempt, proposing substantial improvements. Firstly, we investigate the performance of a widely used linear regularized regression solver, known as the Elastic Net. Then, we expand on this model by incorporating the queries selected by the Elastic Net into a nonlinear regression framework, based on a composite Gaussian Process. Finally, we augment the query-only predictions with an autoregressive model, injecting prior knowledge about the disease. We assess predictive performance using five consecutive flu seasons spanning from 2008 to 2013 and qualitatively explain certain shortcomings of the previous approach. Our results indicate that a nonlinear query modeling approach delivers the lowest cumulative nowcasting error, and also suggest that query information significantly improves autoregressive inferences, obtaining state-of-the-art performance.
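The full pipeline above (Elastic Net query selection, a composite Gaussian Process, an autoregressive term) is too heavy to reproduce here; as a hedged stand-in, this sketch fits the simplest member of that family, a univariate least-squares map from a search-query signal to an illness rate, using the closed-form slope and intercept. All data values are invented for illustration.

```python
# Toy nowcasting fit: ordinary least squares from query volume to illness
# rate. A simplified stand-in for the regularized/nonlinear models above.

def fit_ols(x, y):
    """Closed-form univariate least-squares fit: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

query_volume = [1.0, 2.0, 3.0, 4.0]   # normalised weekly search frequency (toy)
ili_rate     = [2.0, 4.0, 6.0, 8.0]   # illness rate per 100k population (toy)

slope, intercept = fit_ols(query_volume, ili_rate)
nowcast = slope * 5.0 + intercept     # estimate for a week with volume 5.0
```

The paper's contribution is precisely that this linear query-only mapping is insufficient: nonlinear query modeling plus an autoregressive prior gave the lowest cumulative nowcasting error.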
Aligning food-processing policies to promote healthier fat consumption in India
Downs, Shauna M.; Marie Thow, Anne; Ghosh-Jerath, Suparna; Leeder, Stephen R.
2015-01-01
India is undergoing a shift in consumption from traditional foods to processed foods high in sugar, salt and fat. Partially hydrogenated vegetable oils (PHVOs) high in trans-fat are often used in processed foods in India given their low cost and extended shelf life. The World Health Organization has called for the elimination of PHVOs from the global food supply and recommends their replacement with polyunsaturated fat to maximize health benefits. This study examined barriers to replacing industrially produced trans-fat in the Indian food supply and systematically identified potential policy solutions to assist the government in encouraging its removal and replacement with healthier polyunsaturated fat. A combination of food supply chain analysis and semi-structured interviews with key stakeholders was conducted. The main barriers faced by the food-processing sector in terms of reducing use of trans-fat and replacing it with healthier oils in India were the low availability and high cost of oils high in polyunsaturated fats leading to a reliance on palm oil (high in saturated fat) and the low use of those healthier oils in product reformulation. Improved integration between farmers and processors, investment in technology and pricing strategies to incentivize use of healthier oils for product reformulation were identified as policy options. Food processors have trouble accessing sufficient affordable healthy oils for product reformulation, but existing incentives aimed at supporting food processing could be tweaked to ensure a greater supply of healthy oils with the potential to improve population health. PMID:24399031
Empowerment: reformulation of a non-Rogerian concept.
Crawford Shearer, Nelma B; Reed, Pamela G
2004-07-01
The authors present a reformulation of empowerment based upon historical and current perspectives of empowerment and a synthesis of existing literature and Rogerian thought. Reformulation of non-Rogerian concepts familiar to nurses is proposed as a strategy to accelerate the mainstreaming of Rogerian thought into nursing practice and research. The reformulation of empowerment as a participatory process of well-being inherent among human beings may provide nurses with new insights for practice. This paper may also serve as a model for reformulating other non-Rogerian concepts and theories for wider dissemination across the discipline.
Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai
2017-03-01
The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge. Published by Elsevier Inc.
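The recommendation step described above can be sketched as synonym expansion over a term table. The real system maps query terms to UMLS concepts via MetaMap and draws variants from UMLS plus curated search logs; the toy synonym table below is an illustrative assumption standing in for that vocabulary.

```python
# Hedged sketch of synonym-based query recommendation. The synonym table
# is a toy stand-in for UMLS/MetaMap concept mapping.

SYNONYMS = {
    "heart attack": {"myocardial infarction", "mi"},
    "high blood pressure": {"hypertension", "htn"},
}

def recommend(query):
    """Suggest semantically interchangeable variants of each known term."""
    suggestions = []
    lowered = query.lower()
    for term, variants in SYNONYMS.items():
        if term in lowered:
            for variant in sorted(variants):
                suggestions.append(lowered.replace(term, variant))
    return suggestions

recs = recommend("history of heart attack")
```

The study's finding that users often had not thought of the recommended terms themselves is the payoff of this kind of expansion: the layperson phrasing and the clinical-document phrasing rarely coincide.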
Analysis of Technique to Extract Data from the Web for Improved Performance
NASA Astrophysics Data System (ADS)
Gupta, Neena; Singh, Manish
2010-11-01
The World Wide Web is rapidly guiding the world into a new electronic era in which anyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically extracts records from HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a data extraction method, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. The constructed domain ontology is then used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.
Reformulation as an Integrated Approach of Four Disciplines: A Qualitative Study with Food Companies
van Gunst, Annelies; Roodenburg, Annet J. C.; Steenhuis, Ingrid H. M.
2018-01-01
In 2014, the Dutch government agreed with the food sector to lower salt, sugar, saturated fat and energy in foods. To reformulate, an integrated approach of four disciplines (Nutrition & Health, Food Technology, Legislation, and Consumer Perspectives) is important for food companies (Framework for Reformulation). The objective of this study was to determine whether this framework accurately reflects reformulation processes in food companies. Seventeen Dutch food companies in the bakery, meat and convenience sector were interviewed with a semi-structured topic list. Interviews were transcribed, coded and analysed. Interviews illustrated that there were opportunities to lower salt, sugar and saturated fat (Nutrition & Health). However, there were barriers to replacing the functionality of these ingredients (Food Technology). Most companies would like the government to push reformulation more (Legislation). Traditional meat products and luxury sweet bakery products were considered less suitable for reformulation (Consumer Perspectives). In addition, the reduction of E-numbers was considered important. The important role of the retailer is stressed by the respondents. In conclusion, all four disciplines are important in the reformulation processes in food companies. Reformulation does not only mean the reduction of salt, saturated fat and sugar for companies, but also the reduction of E-numbers. PMID:29677158
An XML-Based Manipulation and Query Language for Rule-Based Information
NASA Astrophysics Data System (ADS)
Mansour, Essam; Höpfner, Hagen
Rules are utilized to assist in the monitoring process required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information it contains. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.
LAILAPS-QSM: A RESTful API and JAVA library for semantic query suggestions.
Chen, Jinbo; Scholz, Uwe; Zhou, Ruonan; Lange, Matthias
2018-03-01
In order to access and filter the content of life-science databases, full-text search is a widely applied query interface, but its high flexibility and intuitiveness are paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries from a user's profile (research field, geographic location, co-authorship, affiliation, etc.) require the user's registration and its public accessibility, which contradicts privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is inferred from the text records stored in the databases that are going to be queried, or extracted from PubMed abstracts and UniProt data for general-purpose query suggestion. The supplied tool suite enables the pre-processing of these text records and the subsequent computation of customized distributed word vectors, which are used to suggest alternative keyword queries. The quality of the query suggestions was evaluated for plant-science use cases. Local experts enabled a cost-efficient quality assessment, performed using ontology term similarities, in the categories trait, biological entity, taxonomy, affiliation, and metabolic function. The LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, and 34% of queries have a score above 0.80.
In comparison, the information content similarity for human expert made query suggestions is 0.90. The software is either available as tool set to build and train dedicated query suggestion services or as already trained general purpose RESTful web service. The service uses open interfaces to be seamless embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable response for web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion.
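The word-vector step at the heart of LAILAPS-QSM can be sketched as nearest-neighbour lookup under cosine similarity. The tiny hand-made vectors below are assumptions standing in for vectors trained on PubMed abstracts and UniProt records.

```python
# Sketch of keyword suggestion via cosine similarity over word vectors.
# The three-dimensional toy vectors are invented for illustration.
import math

VECTORS = {
    "drought": [0.9, 0.1, 0.0],
    "dry":     [0.8, 0.2, 0.1],
    "yield":   [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def suggest(term, k=1):
    """Rank the other vocabulary terms by similarity to `term`."""
    ranked = sorted(
        (w for w in VECTORS if w != term),
        key=lambda w: cosine(VECTORS[term], VECTORS[w]),
        reverse=True,
    )
    return ranked[:k]

best = suggest("drought")
```

In the real service, the vectors come from training on the target database's own text records, which is what makes the suggestions domain-aware without curated thesauri or query logs.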
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Ares, Gastón; Aschemann-Witzel, Jessica; Curutchet, María Rosa; Antúnez, Lucía; Machín, Leandro; Vidal, Leticia; Giménez, Ana
2018-05-01
The reformulation of the food products available in the marketplace to improve their nutritional quality has been identified as one of the most cost-effective policies for controlling the global obesity pandemic. Front-of-pack (FOP) nutrition labelling is one of the strategies that has been suggested to encourage the food industry to reformulate their products. However, the extent to which certain FOP labels can encourage product reformulation is dependent on consumer reaction. The aim of the present work was to assess consumers' perception towards product reformulation in the context of the implementation of nutritional warnings, an interpretive FOP nutrition labelling scheme. Three product categories were selected as target products: bread, cream cheese and yogurt, each associated with high content of one target nutrient. For each category, six packages were designed using a 3 × 2 experimental design with the following variables: product version (regular, nutrient-reduced and nutrient-free) and brand (market leader and non-market leader). A total 306 Uruguayan participants completed a choice experiment with 18 choice sets. Reformulated products without nutritional warnings were preferred by participants compared to regular products with nutritional warnings. No apparent preference for products reformulated into nutrient-reduced or nutrient-free product versions was found, although differences depended on the product category and the specific reformulation strategy. Preference for reformulated products without nutritional warnings was more pronounced for non-market leaders. Results from the present work suggest that reformulation of foods in the context of the implementation of nutritional warnings holds potential to encourage consumers to make more healthful food choices and to cause a reduction of their intake of nutrients associated with non-communicable diseases. Copyright © 2018 Elsevier Ltd. All rights reserved.
Harm as the price of liberty? Pre-implantation diagnosis and reproductive freedom.
Haker, Hille
2003-01-01
Reproductive autonomy is often used as an argument to offer assisted reproduction services to women and to continue research into improving this service. What is often overlooked, however, is the gendered and normative background of parenthood, especially of motherhood. In this paper, I attempt to make women visible and to listen to their voices. Turning to the women's stories, the ethical perspective might be reversed: the so-called 'side-effects' of the overall successful assisted reproduction, with or without genetic diagnosis, are to be considered the 'main effects' of assisted reproduction, true for the majority of couples and women. Autonomy, then, must be reformulated as a concept of moral agency in the context of divergent social contexts and cultures of parenthood, of socially shaped images of disability, and in the context of scientific visions of technology which do not necessarily match with the medical practice.
Nutrient profiling for product reformulation: public health impact and benefits for the consumer.
Lehmann, Undine; Charles, Véronique Rheiner; Vlassopoulos, Antonis; Masset, Gabriel; Spieldenner, Jörg
2017-08-01
The food industry holds great potential for driving consumers to adopt healthy food choices as (re)formulation of foods can improve the nutritional quality of these foods. Reformulation has been identified as a cost-effective intervention in addressing non-communicable diseases as it does not require significant alterations of consumer behaviour and dietary habits. Nutrient profiling (NP), the science of categorizing foods based on their nutrient composition, has emerged as an essential tool and is implemented through many different profiling systems to guide reformulation and other nutrition policies. NP systems should be adapted to their specific purposes as it is not possible to design one system that can equally address all policies and purposes, e.g. reformulation and labelling. The present paper discusses some of the key principles and specificities that underlie a NP system designed for reformulation with the example of the Nestlé nutritional profiling system. Furthermore, the impact of reformulation at the level of the food product, dietary intakes and public health are reviewed. Several studies showed that food and beverage reformulation, guided by a NP system, may be effective in improving population nutritional intakes and thereby its health status. In order to achieve its maximum potential and modify the food environment in a beneficial manner, reformulation should be implemented by the entire food sector. Multi-stakeholder partnerships including governments, food industry, retailers and consumer associations that will state concrete time-bound objectives accompanied by an independent monitoring system are the potential solution.
Chan, Emily H; Sahai, Vikram; Conrad, Corrie; Brownstein, John S
2011-05-01
A variety of obstacles, including bureaucracy and lack of resources, have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003-2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance.
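The validation statistic the study reports, correlation between predicted and official case counts, can be computed directly. This sketch evaluates the Pearson correlation between a toy search-volume series and toy case counts; all data values are invented.

```python
# Pearson correlation between a search-query signal and case counts,
# the validation measure used for the dengue models above. Toy data.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

search_volume = [10, 20, 30, 25, 40]   # weekly dengue-query fraction (toy)
case_counts   = [12, 22, 33, 27, 41]   # official weekly cases (toy)

r = pearson(search_volume, case_counts)
```

A correlation in the study's reported 0.82-0.99 range, computed on a holdout subset as the authors do, is what supports treating near-real-time query data as a surveillance complement.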
Semantic based man-machine interface for real-time communication
NASA Technical Reports Server (NTRS)
Ali, M.; Ai, C.-S.
1988-01-01
A flight expert system (FLES) was developed to assist pilots in monitoring, diagnosing and recovering from in-flight faults. To provide a communications interface between the flight crew and FLES, a natural language interface (NALI) was implemented. Input to NALI is processed by three processors: (1) the semantic parser; (2) the knowledge retriever; and (3) the response generator. First, the semantic parser extracts meaningful words and phrases to generate an internal representation of the query. At this point, the semantic parser has the ability to map different input forms related to the same concept into the same internal representation. Then the knowledge retriever analyzes and stores the context of the query to aid in resolving ellipses and pronoun references. At the end of this process, a sequence of retrieval functions is created as a first step in generating the proper response. Finally, the response generator generates the natural language response to the query. The architecture of NALI was designed to process both temporal and nontemporal queries. The architecture and implementation of NALI are described.
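A minimal sketch of the three-stage pipeline this abstract describes (semantic parser, knowledge retriever, response generator); the vocabulary, knowledge base and mappings are invented for illustration and are not the FLES/NALI internals.

```python
# Illustrative three-stage NL query pipeline in the spirit of NALI.
KEYWORDS = {"fuel": "fuel_level", "engine": "engine_status",
            "temperature": "engine_temp"}

def semantic_parse(query):
    """Stage 1: extract meaningful words and map surface forms that
    share a concept onto a single internal representation."""
    return [KEYWORDS[w] for w in query.lower().split() if w in KEYWORDS]

def knowledge_retrieve(concepts, kb):
    """Stage 2: turn internal concepts into a sequence of retrieval
    calls against the knowledge base."""
    return {c: kb.get(c, "unknown") for c in concepts}

def generate_response(results):
    """Stage 3: render retrieved values as a natural-language answer."""
    return "; ".join(f"{k.replace('_', ' ')} is {v}"
                     for k, v in results.items())

kb = {"fuel_level": "62%", "engine_status": "nominal"}
answer = generate_response(
    knowledge_retrieve(semantic_parse("What is the fuel and engine state?"), kb))
```

A context store for ellipsis and pronoun resolution, as in the real knowledge retriever, would sit between stages 1 and 2.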
Aligning food-processing policies to promote healthier fat consumption in India.
Downs, Shauna M; Marie Thow, Anne; Ghosh-Jerath, Suparna; Leeder, Stephen R
2015-09-01
India is undergoing a shift in consumption from traditional foods to processed foods high in sugar, salt and fat. Partially hydrogenated vegetable oils (PHVOs) high in trans-fat are often used in processed foods in India given their low cost and extended shelf life. The World Health Organization has called for the elimination of PHVOs from the global food supply and recommends their replacement with polyunsaturated fat to maximize health benefits. This study examined barriers to replacing industrially produced trans-fat in the Indian food supply and systematically identified potential policy solutions to assist the government in encouraging its removal and replacement with healthier polyunsaturated fat. A combination of food supply chain analysis and semi-structured interviews with key stakeholders was conducted. The main barriers faced by the food-processing sector in terms of reducing use of trans-fat and replacing it with healthier oils in India were the low availability and high cost of oils high in polyunsaturated fats leading to a reliance on palm oil (high in saturated fat) and the low use of those healthier oils in product reformulation. Improved integration between farmers and processors, investment in technology and pricing strategies to incentivize use of healthier oils for product reformulation were identified as policy options. Food processors have trouble accessing sufficient affordable healthy oils for product reformulation, but existing incentives aimed at supporting food processing could be tweaked to ensure a greater supply of healthy oils with the potential to improve population health. © The Author (2014). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representation of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participating through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.
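The iterative query process with relevance feedback described in this abstract can be illustrated with a Rocchio-style term-weight update; this is a simpler stand-in for the inference-network retrieval model the Mirror project actually uses, and all weights below are illustrative.

```python
# Rocchio-style relevance feedback: move the query vector toward
# documents the user marked relevant and away from non-relevant ones.
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    terms = (set(query)
             | {t for d in relevant for t in d}
             | {t for d in nonrelevant for t in d})

    def centroid(docs, t):
        return sum(d.get(t, 0.0) for d in docs) / len(docs) if docs else 0.0

    return {t: alpha * query.get(t, 0.0)
               + beta * centroid(relevant, t)
               - gamma * centroid(nonrelevant, t)
            for t in terms}

q = {"guitar": 1.0}
rel = [{"guitar": 0.8, "acoustic": 0.6}]       # user marked relevant
nonrel = [{"guitar": 0.5, "amplifier": 0.9}]   # user marked non-relevant
q2 = rocchio(q, rel, nonrel)
```

After one round, terms from relevant documents ("acoustic") gain positive weight while terms unique to non-relevant documents ("amplifier") are suppressed, which is the behaviour the feedback loop relies on.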
Product reformulation and nutritional improvements after new competitive food standards in schools.
Jahn, Jaquelyn L; Cohen, Juliana Fw; Gorski-Findling, Mary T; Hoffman, Jessica A; Rosenfeld, Lindsay; Chaffee, Ruth; Smith, Lauren; Rimm, Eric B
2018-04-01
In 2012, Massachusetts enacted school competitive food and beverage standards similar to national Smart Snacks. These standards aim to improve the nutritional quality of competitive snacks. It was previously demonstrated that a majority of foods and beverages were compliant with the standards, but it was unknown whether food manufacturers reformulated products in response to the standards. The present study assessed whether products were reformulated after standards were implemented; the availability of reformulated products outside schools; and whether compliance with the standards improved the nutrient composition of competitive snacks. An observational cohort study documenting all competitive snacks sold before (2012) and after (2013 and 2014) the standards were implemented. The sample included thirty-six school districts with both a middle and high school. After 2012, energy, saturated fat, Na and sugar decreased and fibre increased among all competitive foods. By 2013, 8 % of foods were reformulated, as were an additional 9 % by 2014. Nearly 15 % of reformulated foods were look-alike products that could not be purchased at supermarkets. Energy and Na in beverages decreased after 2012, in part facilitated by smaller package sizes. Massachusetts' law was effective in improving the nutritional content of snacks and product reformulation helped schools adhere to the law. This suggests fully implementing Smart Snacks standards may similarly improve the foods available in schools nationally. However, only some healthier reformulated foods were available outside schools.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... Request; Comment Request; Reformulated Gasoline Commingling Provisions AGENCY: Environmental Protection... information collection request (ICR), ``Reformulated Gasoline Commingling Provisions'' (EPA ICR No.2228.04.... Abstract: EPA would like to continue collecting notifications from gasoline retailers and wholesale...
Food and beverage product reformulation as a corporate political strategy.
Scott, C; Hawkins, B; Knai, C
2017-01-01
Product reformulation- the process of altering a food or beverage product's recipe or composition to improve the product's health profile - is a prominent response to the obesity and noncommunicable disease epidemics in the U.S. To date, reformulation in the U.S. has been largely voluntary and initiated by actors within the food and beverage industry. Similar voluntary efforts by the tobacco and alcohol industry have been considered to be a mechanism of corporate political strategy to shape public health policies and decisions to suit commercial needs. We propose a taxonomy of food and beverage industry corporate political strategies that builds on the existing literature. We then analyzed the industry's responses to a 2014 U.S. government consultation on product reformulation, run as part of the process to define the 2015 Dietary Guidelines for Americans. We qualitatively coded the industry's responses for predominant narratives and framings around reformulation using a purposely-designed coding framework, and compared the results to the taxonomy. The food and beverage industry in the United States used a highly similar narrative around voluntary product reformulation in their consultation responses: that reformulation is "part of the solution" to obesity and NCDs, even though their products or industry are not large contributors to the problem, and that progress has been made despite reformulation posing significant technical challenges. This narrative and the frames used in the submissions illustrate the four categories of the taxonomy: participation in the policy process, influencing the framing of the nutrition policy debate, creating partnerships, and influencing the interpretation of evidence. These strategic uses of reformulation align with previous research on food and beverage corporate political strategy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ultra-processed foods and the limits of product reformulation.
Scrinis, Gyorgy; Monteiro, Carlos Augusto
2018-01-01
The nutritional reformulation of processed food and beverage products has been promoted as an important means of addressing the nutritional imbalances in contemporary dietary patterns. The focus of most reformulation policies is the reduction in quantities of nutrients-to-limit - Na, free sugars, SFA, trans-fatty acids and total energy. The present commentary examines the limitations of what we refer to as 'nutrients-to-limit reformulation' policies and practices, particularly when applied to ultra-processed foods and drink products. Beyond these nutrients-to-limit, there are a range of other potentially harmful processed and industrially produced ingredients used in the production of ultra-processed products that are not usually removed during reformulation. The sources of nutrients-to-limit in these products may be replaced with other highly processed ingredients and additives, rather than with whole or minimally processed foods. Reformulation policies may also legitimise current levels of consumption of ultra-processed products in high-income countries and increased levels of consumption in emerging markets in the global South.
KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.
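The translation chain the thesis proposes (intelligent dictionary, verifiers, formal query generator) might be sketched like this; the dictionary, schema and SQL target are assumptions for illustration, not KARL's actual design.

```python
# Restricted-English-to-formal-query translation, in the spirit of KARL.
# The dictionary maps user vocabulary to (hypothetical) schema elements.
DICTIONARY = {
    "employees": ("table", "employees"),
    "salary": ("column", "salary"),
    "name": ("column", "name"),
}

def translate(request):
    """Look up each word in the dictionary, verify that the request
    names a table and at least one attribute, and emit a formal query."""
    words = [w.strip(",.?") for w in request.lower().split()]
    table, columns = None, []
    for w in words:
        entry = DICTIONARY.get(w)
        if entry is None:
            continue  # a fuller verifier would flag unknown content words
        kind, name = entry
        if kind == "table":
            table = name
        else:
            columns.append(name)
    if table is None or not columns:
        raise ValueError("request lacks a recognizable table or attribute")
    return f"SELECT {', '.join(columns)} FROM {table}"

sql = translate("Show the name and salary of all employees")
```

The thesis's learning capability would correspond to extending `DICTIONARY` from observed usage rather than fixing it in advance.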
Reformulation as a Measure of Student Expression in Classroom Interaction.
ERIC Educational Resources Information Center
Dobson, James J.
1995-01-01
Investigates teacher reformulation of student talk in order to determine the manner in which teachers affect student meaning and expression. Findings indicate that reformulation is a device used by teachers to control classroom dialog and that teachers disproportionately perform the language functions most commonly associated with higher-order…
Microwave Assisted Grafting of Gums and Extraction of Natural Materials.
Singh, Inderbir; Rani, Priya; Kumar, Pradeep
2017-01-01
Microwave assisted modification of polymers has become an established technique for modifying the functionality of polymers. Microwave irradiation reduces reaction time as well as the use of toxic solvents with enhanced sensitivity and yields of quality products. In this review article, instrumentation and basic principles of microwave activation have been discussed. Microwave assisted grafting of natural gums, characterization of grafted polymers and their toxicological parameters have also been listed. Pharmaceutical applications viz. drug release retardant, mucoadhesion and tablet superdisintegrant potential of microwave assisted gums have also been discussed. An overview of microwave assisted extraction of plant based natural materials has also been presented. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
This procedure is designed to support the collection of potentially responsive information using automated E-Discovery tools that rely on keywords, key phrases, index queries, or other technological assistance to retrieve Electronically Stored Information
NASA Technical Reports Server (NTRS)
Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob
2004-01-01
The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.
Nietzsche contra "Self-Reformulation"
ERIC Educational Resources Information Center
Fennell, J.
2005-01-01
Not only do the writings of Nietzsche--early and late--fail to support the pedagogy of self-reformulation, this doctrine embodies what for him is worst in man and would destroy that which is higher. The pedagogy of self-reformulation is also incoherent. In contrast, Nietzsche offers a fruitful and comprehensive theory of education that, while…
Reformulation of Rothermel's wildland fire behaviour model for heterogeneous fuelbeds.
David V. Sandberg; Cynthia L. Riccardi; Mark D. Schaaf
2007-01-01
Abstract: The Fuel Characteristic Classification System (FCCS) includes equations that calculate energy release and one-dimensional spread rate in quasi-steady-state fires in heterogeneous but spatially uniform wildland fuelbeds, using a reformulation of the widely used Rothermel fire spread model. This reformulation provides an automated means to predict fire behavior...
21 CFR 106.120 - New formulations and reformulations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false New formulations and reformulations. 106.120... § 106.120 New formulations and reformulations. (a) Information required by section 412(b)(2) and (3) of... manufacturer and that has left an establishment subject to the control of the manufacturer may not provide the...
21 CFR 106.120 - New formulations and reformulations.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false New formulations and reformulations. 106.120... § 106.120 New formulations and reformulations. (a) Information required by section 412(b)(2) and (3) of... manufacturer and that has left an establishment subject to the control of the manufacturer may not provide the...
A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications
NASA Technical Reports Server (NTRS)
Grossman, Robert L.; Northcutt, Dave
1996-01-01
Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support the aforementioned intensive queries, but because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than use a full-functioned database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.
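One way to picture the inter-layer information repositories described in this abstract: a cache layer that consults shared access statistics when deciding what to keep above the slow storage tier. The class, eviction policy and data below are invented for illustration.

```python
from collections import Counter

class LayeredStore:
    """Query layer over a (simulated) hierarchical store; a shared
    repository of access counts informs caching decisions."""

    def __init__(self, storage, cache_size=2):
        self.storage = storage          # stands in for the lower storage layer
        self.cache = {}
        self.cache_size = cache_size
        self.repository = Counter()     # inter-layer metadata: access counts

    def get(self, key):
        self.repository[key] += 1
        if key in self.cache:
            return self.cache[key]
        value = self.storage[key]       # "slow" fetch from the lower layer
        if len(self.cache) >= self.cache_size:
            # Evict the least-queried cached object per the repository.
            coldest = min(self.cache, key=lambda k: self.repository[k])
            del self.cache[coldest]
        self.cache[key] = value
        return value

store = LayeredStore({"a": 1, "b": 2, "c": 3})
for key in ["a", "a", "b", "c"]:
    store.get(key)
```

Because the access counts live in a repository visible to both layers, the same statistics could equally drive migration or query-scheduling decisions on the storage side.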
Process for conversion of lignin to reformulated hydrocarbon gasoline
Shabtai, Joseph S.; Zmierczak, Wlodzimierz W.; Chornet, Esteban
1999-09-28
A process for converting lignin into high-quality reformulated hydrocarbon gasoline compositions in high yields is disclosed. The process is a two-stage, catalytic reaction process that produces a reformulated hydrocarbon gasoline product with a controlled amount of aromatics. In the first stage, a lignin material is subjected to a base-catalyzed depolymerization reaction in the presence of a supercritical alcohol as a reaction medium, to thereby produce a depolymerized lignin product. In the second stage, the depolymerized lignin product is subjected to a sequential two-step hydroprocessing reaction to produce a reformulated hydrocarbon gasoline product. In the first hydroprocessing step, the depolymerized lignin is contacted with a hydrodeoxygenation catalyst to produce a hydrodeoxygenated intermediate product. In the second hydroprocessing step, the hydrodeoxygenated intermediate product is contacted with a hydrocracking/ring hydrogenation catalyst to produce the reformulated hydrocarbon gasoline product which includes various desirable naphthenic and paraffinic compounds.
Progress of the European Assistive Technology Information Network.
Gower, Valerio; Andrich, Renzo
2015-01-01
The European Assistive Technology Information Network (EASTIN), launched in 2005 as the result of a collaborative EU project, provides information on Assistive Technology products and related material through the website www.eastin.eu. In the past few years several advancements have been implemented on the EASTIN website thanks to the contribution of EU funded projects, including a multilingual query processing component for supporting non expert users, a user rating and comment facility, and a detailed taxonomy for the description of ICT based assistive products. Recently, within the framework of the EU funded project Cloud4All, the EASTIN information system has also been federated with the Unified Listing of assistive products, one of the building blocks of the Global Public Inclusive Infrastructure initiative.
ERIC Educational Resources Information Center
Mackinnon, Sean P.; Sherry, Simon B.; Graham, Aislin R.; Stewart, Sherry H.; Sherry, Dayna L.; Allen, Stephanie L.; Fitzpatrick, Skye; McGrath, Daniel S.
2011-01-01
The perfectionism model of binge eating (PMOBE) is an integrative model explaining why perfectionism is related to binge eating. This study reformulates and tests the PMOBE, with a focus on addressing limitations observed in the perfectionism and binge-eating literature. In the reformulated PMOBE, concern over mistakes is seen as a destructive…
Applying a Consumer Behavior Lens to Salt Reduction Initiatives.
Regan, Áine; Kent, Monique Potvin; Raats, Monique M; McConnon, Áine; Wall, Patrick; Dubois, Lise
2017-08-18
Reformulation of food products to reduce salt content has been a central strategy for achieving population level salt reduction. In this paper, we reflect on current reformulation strategies and consider how consumer behavior determines the ultimate success of these strategies. We consider the merits of adopting a 'health by stealth', silent approach to reformulation compared to implementing a communications strategy which draws on labeling initiatives in tandem with reformulation efforts. We end this paper by calling for a multi-actor approach which utilizes co-design, participatory tools to facilitate the involvement of all stakeholders, including, and especially, consumers, in making decisions around how best to achieve population-level salt reduction.
An assessment of the potential health impacts of food reformulation.
Leroy, P; Réquillart, V; Soler, L-G; Enderli, G
2016-06-01
Policies focused on food quality are intended to facilitate healthy choices by consumers, even those who are not fully informed about the links between food consumption and health. The goal of this paper is to evaluate the potential impact of such a food reformulation scenario on health outcomes. We first created reformulation scenarios adapted to the French characteristics of foods. After computing the changes in the nutrient intakes of representative consumers, we determined the health effects of these changes. To do so, we used the DIETRON health assessment model, which calculates the number of deaths avoided by changes in food and nutrient intakes. Depending on the reformulation scenario, the total impact of reformulation varies between 2408 and 3597 avoided deaths per year, which amounts to a 3.7-5.5% reduction in mortality linked to diseases considered in the DIETRON model. The impacts are much higher for men than for women and much higher for low-income categories than for high-income categories. These differences result from the differences in consumption patterns and initial disease prevalence among the various income categories. Even without any changes in consumers' behaviors, realistic food reformulation may have significant health outcomes.
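The kind of comparative-risk arithmetic a model such as DIETRON performs can be sketched as below; the relative risk, baseline deaths and functional form are invented for illustration and are not DIETRON's actual parameters.

```python
import math

def deaths_averted(baseline_deaths, rr_per_unit, intake_change):
    """Deaths avoided when intake of a risk nutrient falls.

    rr_per_unit: relative risk per unit of daily intake (e.g. per g salt);
    intake_change: reduction in daily intake (positive = less consumed).
    The exposure shift scales the baseline mortality multiplicatively.
    """
    rr_shift = math.exp(-math.log(rr_per_unit) * intake_change)
    return baseline_deaths * (1 - rr_shift)

# A 1 g/day salt reduction against 10,000 baseline deaths, RR 1.04 per g
# (all three numbers are hypothetical).
averted = deaths_averted(10_000, 1.04, 1.0)
```

Summing such terms over nutrients, diseases and population subgroups is what lets a model translate reformulation scenarios into the per-year avoided-death figures quoted in the abstract.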
Supporting infobuttons with terminological knowledge.
Cimino, J. J.; Elhanan, G.; Zeng, Q.
1997-01-01
We have developed several prototype applications which integrate clinical systems with on-line information resources by using patient data to drive queries in response to user information needs. We refer to these collectively as infobuttons because they are evoked with a minimum of keyboard entry. We make use of knowledge in our terminology, the Medical Entities Dictionary (MED) to assist with the selection of appropriate queries and resources, as well as the translation of patient data to forms recognized by the resources. This paper describes the kinds of knowledge in the MED, including literal attributes, hierarchical links and other semantic links, and how this knowledge is used in system integration. PMID:9357682
Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.
De-Arteaga, Maria; Eggel, Ivan; Do, Bao; Rubin, Daniel; Kahn, Charles E; Müller, Henning
2015-08-01
Information search has changed the way we manage knowledge and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or increasingly via mobile devices. Medical information search is in this respect no different and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics than search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. Objectives are to identify similarities and differences in search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so the semantic content in the queries and the links between terms could be analysed, and synonyms for the same concept could be detected. RadLex was mainly created for the use in radiology reports, to aid structured reporting and the preparation of educational material (Langlotz, 2006) [1].
In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) specific terms of radiology are often underrepresented, therefore RadLex was considered to be the best option for this task. The results show a surprising similarity between the usage behaviour in the two systems, but several subtle differences can also be noted. The average number of terms per query is 2.21 for GoldMiner and 2.07 for radTF, the used axes of RadLex (anatomy, pathology, findings, …) have almost the same distribution with clinical findings being the most frequent and the anatomical entity the second; also, combinations of RadLex axes are extremely similar between the two systems. Differences include a longer length of the sessions in radTF than in GoldMiner (3.4 and 1.9 queries per session on average). Several frequent search terms overlap but some strong differences exist in the details. In radTF the term "normal" is frequent, whereas in GoldMiner it is not. This makes intuitive sense, as in the literature normal cases are rarely described whereas in clinical work the comparison with normal cases is often a first step. The general similarity in many points is likely due to the fact that users of the two systems are influenced by their daily behaviour in using standard web search engines and follow this behaviour in their professional search. This means that many results and insights gained from standard web search can likely be transferred to more specialized search systems. Still, specialized log files can be used to find out more on reformulations and detailed strategies of users to find the right content. Copyright © 2015 Elsevier Inc. All rights reserved.
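Statistics like the average terms per query and frequent-term counts reported in this abstract are straightforward to compute from a log; the example queries below are invented, and a real pipeline would first map each term to RadLex concepts so that synonyms collapse onto one count.

```python
from collections import Counter

def query_stats(queries):
    """Average terms per query and raw term frequencies for a query log."""
    term_counts = Counter()
    total_terms = 0
    for q in queries:
        terms = q.lower().split()
        term_counts.update(terms)
        total_terms += len(terms)
    avg_terms = total_terms / len(queries)
    return avg_terms, term_counts

# Toy log in the style of clinical PACS/RIS queries.
log = ["lung nodule", "normal chest", "liver lesion ct", "normal"]
avg_terms, counts = query_stats(log)
```

Run over the two real logs, this yields the figures quoted above (2.21 terms per query for GoldMiner, 2.07 for radTF) and surfaces differences such as the frequency of "normal" in clinical queries.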
Aggregating Queries Against Large Inventories of Remotely Accessible Data
NASA Astrophysics Data System (ADS)
Gallagher, J. H. R.; Fulker, D. W.
2016-12-01
Those seeking to discover data for a specific purpose often encounter search results that are so large as to be useless without computing assistance. This situation arises with increasing frequency, in part because repositories contain ever greater numbers of granules, whose granularities may be poorly aligned or even orthogonal to the data-selection needs of the user. This presentation describes a recently developed service for simultaneously querying large lists of OPeNDAP-accessible granules to extract specified data. The specifications include a richly expressive set of data-selection criteria, applicable to content as well as metadata, and the service has been tested successfully against lists naming hundreds of thousands of granules. Querying such numbers of local files (i.e., granules) on a desktop or laptop computer is practical (e.g., by using a scripting language), but this practicality is diminished when the data are remote and thus best accessed through a Web-services interface. In these cases, which are increasingly common, scripted queries can take many hours because of inherent network latencies. Furthermore, communication dropouts can add fragility to such scripts, yielding gaps in the acquired results. In contrast, OPeNDAP's new aggregated-query services enable data discovery in the context of very large inventory sizes. These capabilities have been developed for use with OPeNDAP's Hyrax server, an open-source realization of DAP (the "Data Access Protocol," a specification widely used in NASA, NOAA, and other data-intensive contexts). These aggregated-query services exhibit good response times (on the order of seconds, not hours) even for inventories that list hundreds of thousands of source granules.
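The data-selection step such a service performs can be sketched in miniature. The following Python sketch is purely illustrative (the granule structure, field names, and values are invented; this is not the OPeNDAP/Hyrax API): it filters a granule inventory on both metadata and content criteria and aggregates the matching values, the operation that, performed client-side over a Web-services interface, incurs the per-granule latencies described above.

```python
def aggregate_query(granules, metadata_filter, content_filter, variable):
    """Return values of `variable` from granules matching both criteria."""
    results = []
    for g in granules:
        if not metadata_filter(g["metadata"]):
            continue  # cheap metadata test first, as a server would do
        # content-level selection within the granule's data payload
        results.extend(v for v in g["data"][variable] if content_filter(v))
    return results

# Toy inventory: three granules with metadata plus a data payload.
inventory = [
    {"metadata": {"year": 2015}, "data": {"sst": [14.2, 15.1, 29.8]}},
    {"metadata": {"year": 2016}, "data": {"sst": [13.9, 30.4]}},
    {"metadata": {"year": 2016}, "data": {"sst": [12.0, 12.5]}},
]

# Select 2016 granules, keep sea-surface temperatures above 13 degrees.
hits = aggregate_query(
    inventory,
    metadata_filter=lambda m: m["year"] == 2016,
    content_filter=lambda v: v > 13.0,
    variable="sst",
)
print(hits)  # [13.9, 30.4]
```

A server-side aggregated query performs this filter-and-extract step once, near the data, which is why response times can drop from hours to seconds compared with looping over remote granules in a client script.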
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. It is a second-generation natural-language front-end for databases, intended to solve two critical interface problems between end-users and databases: connectivity and communication. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
Evaluation of reformulated thermal control coatings in a simulated space environment. Part 1: YB-71
NASA Technical Reports Server (NTRS)
Cerbus, Clifford A.; Carlin, Patrick S.
1994-01-01
The Air Force Space and Missile Systems Center and Wright Laboratory Materials Directorate (WL/ML) have sponsored an effort to reformulate and qualify Illinois Institute of Technology Research Institute (IITRI) spacecraft thermal control coatings. S13G/LO-1, Z93, and YB-71 coatings were reformulated because the potassium silicate binder used in the coatings, Sylvania PS-7, is no longer manufactured. Coatings utilizing the candidate replacement binder, Kasil 2130, manufactured by The Philadelphia Quartz (PQ) Corporation, Baltimore, Maryland, are undergoing testing at the Materials Directorate's Space Combined Effects Primary Test and Research Equipment (SCEPTRE) Facility, operated by the University of Dayton Research Institute (UDRI). The simulated space environment consists of combined ultraviolet (UV) and electron exposure with in situ specimen reflectance measurements. A brief description of the effort at IITRI, results and discussion from testing the reformulated YB-71 coating in SCEPTRE, and plans for further testing of reformulated Z93 and S13G/LO-1 are presented.
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows, and voltages remain within their bounds with a pre-defined probability. We discuss different chance-constraint reformulations and solution approaches for the problem. We first present an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme that alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
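The alternating scheme can be illustrated on a toy one-line "network" (all numbers and the uncertainty model below are invented for illustration; this sketches the generic iteration, not the authors' implementation): solve a deterministic problem with tightened limits, re-assess the uncertainty margin at the new operating point, and repeat until the operating point stops changing.

```python
from statistics import NormalDist

# Analytic chance-constraint tightening: enforce flow <= f_max with
# probability 1 - eps by subtracting z * sigma, where z is the standard
# normal quantile and sigma depends on the operating point (as in AC OPF).
z = NormalDist().inv_cdf(0.95)    # safety factor for eps = 0.05
a, f_max = 1.0, 100.0             # flow sensitivity and line limit (toy values)
sigma = lambda p: 2.0 + 0.05 * p  # operating-point-dependent flow uncertainty

p = 0.0
for _ in range(50):
    margin = z * sigma(p)          # step 2: uncertainty assessment at current point
    p_new = (f_max - margin) / a   # step 1: deterministic solve with tightened limit
    if abs(p_new - p) < 1e-9:      # converged: margin is consistent with solution
        break
    p = p_new

# At the fixed point the tightened constraint holds with equality:
# a*p + z*sigma(p) == f_max, i.e. the chance constraint is satisfied
# with the margin evaluated at the solution itself.
print(round(p, 3))
```

Because the margin depends on the operating point, neither step alone suffices; the iteration finds the fixed point at which the deterministic solution and the uncertainty assessment agree.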
Combet, Emilie; Vlassopoulos, Antonis; Mölenberg, Famke; Gressier, Mathilde; Privet, Lisa; Wratten, Craig; Sharif, Sahar; Vieux, Florent; Lehmann, Undine; Masset, Gabriel
2017-04-21
Nutrient profiling ranks foods based on their nutrient composition, with applications in multiple aspects of food policy. We tested the capacity of a category-specific model developed for product reformulation to improve the average nutrient content of foods, using five national food composition datasets (UK, US, China, Brazil, France). Products (n = 7183) were split into 35 categories based on the Nestlé Nutritional Profiling System (NNPS) and were classified as NNPS 'Pass' if all nutrient targets were met (energy (E), total fat (TF), saturated fat (SFA), sodium (Na), added sugars (AS), protein, calcium). In a modelling scenario, all NNPS 'Fail' products were 'reformulated' to meet NNPS standards. Overall, a third (36%) of all products met the NNPS standard (inter-country range: 32%-40%; inter-category range: 5%-72%), with most products requiring reformulation of two or more nutrients. The nutrients most commonly requiring reformulation were SFA (22%-44%) and TF (23%-42%). Modelled compliance with NNPS standards could reduce the average content of SFA, Na, and AS (by 10%, 8%, and 6%, respectively) at the food supply level. Despite the good potential to stimulate reformulation across the five countries, the study highlights the need for better data quality and granularity in food composition databases.
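The pass/fail classification described above amounts to checking each nutrient against its category-specific target. A minimal Python sketch follows; the thresholds and nutrient values are invented for illustration and are not the actual NNPS targets.

```python
# Hypothetical per-100 g maxima for one product category (NOT real NNPS targets).
TARGETS = {
    "energy_kcal": 250, "total_fat_g": 10, "sat_fat_g": 4,
    "sodium_mg": 400, "added_sugars_g": 8,
}

def classify(product):
    """Pass only if every nutrient meets its target; else list failing nutrients."""
    failing = [n for n, limit in TARGETS.items() if product[n] > limit]
    return ("pass", []) if not failing else ("fail", failing)

pizza = {"energy_kcal": 268, "total_fat_g": 9.5, "sat_fat_g": 4.8,
         "sodium_mg": 598, "added_sugars_g": 3.1}
status, to_reformulate = classify(pizza)
print(status, to_reformulate)  # fail ['energy_kcal', 'sat_fat_g', 'sodium_mg']
```

The returned list of failing nutrients is what drives a modelled "reformulation": a product passing on all targets needs no change, while most products in the study failed on two or more nutrients.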
Peacock, Amy; Degenhardt, Louisa; Hordern, Antonia; Larance, Briony; Cama, Elena; White, Nancy; Kihas, Ivana; Bruno, Raimondo
2015-12-01
In April 2014, a tamper-resistant controlled-release oxycodone formulation was introduced into the Australian market. This study aimed to identify the level and methods of tampering with reformulated oxycodone, the demographic and clinical characteristics of those who reported tampering with reformulated oxycodone, and the perceived attractiveness of original and reformulated oxycodone for misuse (via tampering). A prospective cohort of 522 people who regularly tampered with pharmaceutical opioids and had tampered with the original oxycodone product in their lifetime completed two interviews before (January-March 2014: Wave 1) and after (May-August 2014: Wave 2) the introduction of reformulated oxycodone. Four-fifths (81%) had tampered with the original oxycodone formulation in the month prior to Wave 1; use of and attempted tampering with reformulated oxycodone amongst the sample was comparatively low at Wave 2 (29% and 19%, respectively). Reformulated oxycodone was primarily swallowed (15%), with low levels of recent successful injection (6%), chewing (2%), drinking/dissolving (1%), and smoking (<1%). Participants who tampered with original and reformulated oxycodone were socio-demographically and clinically similar to those who had only tampered with the original formulation, except that the former were more likely to report prescribed oxycodone use and stealing pharmaceutical opioids, and less likely to report moderate/severe anxiety. There was significant diversity in the methods of tampering, with attempts predominantly prompted by self-experimentation (rather than informed by word-of-mouth or the internet). Participants rated reformulated oxycodone as more difficult to prepare and inject and less pleasant to use than the original formulation.
Current findings suggest that the introduction of the tamper-resistant product has reduced, although not necessarily eliminated, tampering with the controlled-release oxycodone formulation, and has lowered its attractiveness for misuse. Given the reduction in oxycodone tampering and use amongst a group with high rates of pharmaceutical opioid dependence, appropriate and effective treatment options must be available as abuse-deterrent products become more widely available. Copyright © 2015 Elsevier B.V. All rights reserved.
Otite, Fadar O.; Jacobson, Michael F.; Dahmubed, Aspan
2013-01-01
Introduction Although some US food manufacturers have reduced trans fatty acids (TFA) in their products, it is unknown how much TFA is being reduced, whether the pace of reformulation has changed over time, or whether reformulations vary by food type or manufacturer. Methods In 2007, we identified 360 brand-name products in major US supermarkets that contained 0.5 g TFA or more per serving. In 2008, 2010, and 2011, product labels were re-examined to determine TFA content; ingredients lists were also examined in 2011 for partially hydrogenated vegetable oils (PHVO). We assessed changes in TFA content among the 270 products sold in all years between 2007 and 2011 and conducted sensitivity analyses on the 90 products discontinued after 2007. Results By 2011, 178 (66%) of the 270 products had reduced TFA content. Most reformulated products (146 of 178, 82%) reduced TFA to less than 0.5 g per serving, although half of these 146 still contained PHVO. Among all 270 products, mean TFA content decreased 49% between 2007 and 2011, from 1.9 to 0.9 g per serving. Yet mean TFA reduction slowed over time, from 30.3% (2007–2008) to 12.1% (2008–2010) to 3.4% (2010–2011) (P value for trend < .001). This slowing pace was due to both fewer reformulations among TFA-containing products at the start of each period and smaller TFA reductions among reformulated products. Reformulations also varied substantially by both food category and manufacturer, with some eliminating or nearly eliminating TFA and others showing no significant changes. Sensitivity analyses yielded results similar to the main findings. Conclusions Some US products and food manufacturers have made progress in reducing TFA, but substantial variation exists by food type and by parent company, and overall progress has significantly slowed over time. Because TFA consumption is harmful even at low levels, our results emphasize the need for continued efforts toward reformulating or discontinuing foods to eliminate PHVO. PMID:23701722
Automated Assistance in the Formulation of Search Statements for Bibliographic Databases.
ERIC Educational Resources Information Center
Oakes, Michael P.; Taylor, Malcolm J.
1998-01-01
Reports on the design of an automated query system to help pharmacologists access the Derwent Drug File (DDF). Topics include knowledge types; knowledge representation; role of the search intermediary; vocabulary selection, thesaurus, and user input in natural language; browsing; evaluation methods; and search statement generation for the World…
Image query and indexing for digital x rays
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Thoma, George R.
1998-12-01
The web-based medical information retrieval system (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL query of the text, and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae, using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling the creation of combined SQL/image-content queries.
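A combined SQL/image-content query of the kind envisioned might look as follows. The schema and morphometric feature names (anterior/posterior vertebral heights, with wedging taken as a low height ratio) are hypothetical illustrations, not the actual WebMIRS/NHANES schema.

```python
import sqlite3

# In-memory table mixing survey text fields with image-derived morphometry.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE spine_records (
    subject_id INTEGER, age INTEGER, back_pain INTEGER,
    vertebra TEXT, ant_height_mm REAL, post_height_mm REAL)""")
con.executemany(
    "INSERT INTO spine_records VALUES (?, ?, ?, ?, ?, ?)",
    [(1, 62, 1, "L1", 24.1, 29.0),   # anterior wedging: height ratio < 0.85
     (2, 45, 0, "L1", 27.8, 28.3),
     (3, 70, 1, "L2", 22.5, 28.9)],
)

# Combined query: survey text criteria (age, symptoms) AND an
# image-derived criterion (vertebral height ratio from segmentation).
rows = con.execute("""
    SELECT subject_id, vertebra FROM spine_records
    WHERE age >= 60 AND back_pain = 1
      AND ant_height_mm / post_height_mm < 0.85
    ORDER BY subject_id""").fetchall()
print(rows)  # [(1, 'L1'), (3, 'L2')]
```

The point of the design is that once image-derived features live in the same relational store as the survey text, one SQL statement can express constraints on both.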
Modeled Dietary Impact of Pizza Reformulations in US Children and Adolescents.
Masset, Gabriel; Mathias, Kevin C; Vlassopoulos, Antonis; Mölenberg, Famke; Lehmann, Undine; Gibney, Mike; Drewnowski, Adam
2016-01-01
Approximately 20% of US children and adolescents consume pizza on any given day, and pizza intake is associated with higher intakes of energy, sodium, and saturated fat. The reformulation of pizza products has yet to be evaluated as a viable option to improve diets of US youth. This study modeled the effect on nutrient intakes of two potential pizza reformulation strategies based on the standards established by the Nestlé Nutritional Profiling System (NNPS). Dietary intakes were retrieved from the first 24-hr recall of the National Health and Nutrition Examination Survey (NHANES) 2011-12, for 2655 participants aged 4-19 years. The composition of pizzas in the NHANES food database (n = 69) was compared against the NNPS standards for energy, total fat, saturated fat, sodium, added sugars, and protein. In a reformulation scenario, the nutrient content of pizzas was adjusted to the NNPS standards if these were not met. In a substitution scenario, pizzas that did not meet the standards were replaced by the closest pizza, based on nutrient content, that met all of the NNPS standards. Pizzas consistent with all the NNPS standards (29% of all pizzas) were significantly lower in energy, saturated fat, and sodium than pizzas that were not. Among pizza consumers, modeled intakes in the reformulation and substitution scenarios were lower in energy (-14 and -45 kcal, respectively), saturated fat (-1.2 and -2.7 g), and sodium (-143 and -153 mg) compared to baseline. Potential industry-wide reformulation of a single food category or intra-category food substitutions may positively impact dietary intakes of US children and adolescents. Further promotion and support of these complementary strategies may facilitate the adoption and implementation of reformulation standards.
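The two modeling scenarios can be sketched in a few lines (all nutrient values and standards below are invented for illustration; they are not NHANES data or actual NNPS targets): reformulation caps each nutrient at its standard, while substitution picks the nearest fully compliant product by distance over the nutrient vector.

```python
import math

# Hypothetical standards for the pizza category (NOT real NNPS targets).
STANDARDS = {"energy_kcal": 260, "sat_fat_g": 4.0, "sodium_mg": 450}

def reformulate(p):
    """Reformulation scenario: cap each nutrient at its standard."""
    return {n: min(v, STANDARDS[n]) for n, v in p.items()}

def substitute(p, compliant):
    """Substitution scenario: closest compliant product by nutrient distance."""
    return min(compliant, key=lambda q: math.dist(list(p.values()), list(q.values())))

pizza = {"energy_kcal": 290, "sat_fat_g": 5.1, "sodium_mg": 620}
compliant_pizzas = [
    {"energy_kcal": 240, "sat_fat_g": 3.2, "sodium_mg": 430},
    {"energy_kcal": 255, "sat_fat_g": 3.9, "sodium_mg": 445},
]
print(reformulate(pizza))                   # each nutrient capped at its standard
print(substitute(pizza, compliant_pizzas))  # the second candidate is nearer
```

In a real analysis the nutrient vector would be normalized before computing distances (raw mg and kcal scales would otherwise dominate), but the structure of the two scenarios is as shown.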
Reformulations of the Yang-Mills theory toward quark confinement and mass gap
NASA Astrophysics Data System (ADS)
Kondo, Kei-Ichi; Kato, Seikou; Shibata, Akihiro; Shinohara, Toru
2016-01-01
We propose the reformulations of the SU (N) Yang-Mills theory toward quark confinement and mass gap. In fact, we have given a new framework for reformulating the SU (N) Yang-Mills theory using new field variables. This includes the preceding works given by Cho, Faddeev and Niemi, as a special case called the maximal option in our reformulations. The advantage of our reformulations is that the original non-Abelian gauge field variables can be changed into the new field variables such that one of them called the restricted field gives the dominant contribution to quark confinement in the gauge-independent way. Our reformulations can be combined with the SU (N) extension of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the Wilson loop operator to give a gauge-invariant definition for the magnetic monopole in the SU (N) Yang-Mills theory without the scalar field. In the so-called minimal option, especially, the restricted field is non-Abelian and involves the non-Abelian magnetic monopole with the stability group U (N- 1). This suggests the non-Abelian dual superconductivity picture for quark confinement. This should be compared with the maximal option: the restricted field is Abelian and involves only the Abelian magnetic monopoles with the stability group U(1)N-1, just like the Abelian projection. We give some applications of this reformulation, e.g., the stability for the homogeneous chromomagnetic condensation of the Savvidy type, the large N treatment for deriving the dimensional transmutation and understanding the mass gap, and also the numerical simulations on a lattice which are given by Dr. Shibata in a subsequent talk.
Dugan, J M; Berrios, D C; Liu, X; Kim, D K; Kaizer, H; Fagan, L M
1999-01-01
Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models.
New Perspectives: TA Preparation for Critical Literacy in First Year Composition.
ERIC Educational Resources Information Center
Duffelmeyer, Barb Blakely
2002-01-01
Notes that new teaching assistants (TAs) and first year composition students similarly grapple with ambiguity, multiplicity, and open-endedness. Contends that new TAs' queries and early classroom experiences can provide a valuable occasion to re-balance the emphasis in a pro-seminar between teaching and learning. Presents strategies for addressing…
Just-in-Time Web Searches for Trainers & Adult Educators.
ERIC Educational Resources Information Center
Kirk, James J.
Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…
Ratanawongsa, Neda; Quan, Judy; Handley, Margaret A; Sarkar, Urmimala; Schillinger, Dean
2018-04-06
Clinicians have difficulty accurately assessing medication non-adherence within chronic disease care settings. Health information technology (HIT) could offer novel tools to assess medication adherence in diverse populations outside of usual health care settings. In a multilingual urban safety net population, we examined the validity of assessing adherence using automated telephone self-management (ATSM) queries, when compared with non-adherence using continuous medication gap (CMG) on pharmacy claims. We hypothesized that patients reporting greater days of missed pills to ATSM queries would have higher rates of non-adherence as measured by CMG, and that ATSM adherence assessments would perform as well as structured interview assessments. As part of an ATSM-facilitated diabetes self-management program, low-income health plan members typed numeric responses to rotating weekly ATSM queries: "In the last 7 days, how many days did you MISS taking your …" diabetes, blood pressure, or cholesterol pill. Research assistants asked similar questions in computer-assisted structured telephone interviews. We measured continuous medication gap (CMG) by claims over 12 preceding months. To evaluate convergent validity, we compared rates of optimal adherence (CMG ≤ 20%) across respondents reporting 0, 1, and ≥ 2 missed pill days on ATSM and on structured interview. Among 210 participants, 46% had limited health literacy, 57% spoke Cantonese, and 19% Spanish. ATSM respondents reported ≥1 missed day for diabetes (33%), blood pressure (19%), and cholesterol (36%) pills. Interview respondents reported ≥1 missed day for diabetes (28%), blood pressure (21%), and cholesterol (26%) pills. Optimal adherence rates by CMG were lower among ATSM respondents reporting more missed days for blood pressure (p = 0.02) and cholesterol (p < 0.01); by interview, differences were significant for cholesterol (p = 0.01). Language-concordant ATSM demonstrated modest potential for assessing adherence. 
Studies should evaluate HIT assessments of medication beliefs and concerns in diverse populations. NCT00683020, registered May 21, 2008.
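The claims-based adherence measure can be sketched as follows. The claim records are invented, and the simple coverage model (a new fill extends coverage from whichever is later, the fill date or the end of the previous supply) is one common convention, not necessarily the exact rule used in the study; the 20% optimal-adherence threshold is taken from the description above.

```python
from datetime import date, timedelta

def cmg(claims, start, end):
    """Continuous medication gap: uncovered days / observation days.

    claims: list of (fill_date, days_supply) tuples, sorted by fill_date.
    """
    covered_until = start
    gap_days = 0
    for fill_date, days_supply in claims:
        if fill_date > covered_until:
            gap_days += (fill_date - covered_until).days   # uncovered stretch
        # new supply extends coverage from the later of fill date / current end
        covered_until = max(covered_until, fill_date) + timedelta(days=days_supply)
    if covered_until < end:
        gap_days += (end - covered_until).days             # gap after last fill
    return gap_days / (end - start).days

# Four 90-day fills over a one-year observation window, with small delays.
claims = [(date(2015, 1, 1), 90), (date(2015, 4, 15), 90),
          (date(2015, 8, 1), 90), (date(2015, 11, 1), 90)]
gap = cmg(claims, date(2015, 1, 1), date(2016, 1, 1))
print(round(gap, 3), gap <= 0.20)  # 0.093 True  -> "optimal adherence"
```

Here the refill delays total 34 uncovered days out of 365, so the CMG of about 9% falls under the 20% cutoff for optimal adherence.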
Reformulated gasoline (RFG) is gasoline blended to burn cleaner and reduce smog-forming and toxic pollutants in the air we breathe. The Clean Air Act requires that RFG be used to reduce harmful emissions of ozone.
Downs, Shauna M; Gupta, Vidhu; Ghosh-Jerath, Suparna; Lock, Karen; Thow, Anne Marie; Singh, Archna
2013-12-05
The consumption of partially hydrogenated vegetable oils (PHVOs) high in trans fat is associated with an increased risk of cardiovascular disease and other non-communicable diseases. In response to high intakes of PHVOs, the Indian government has proposed regulation to set limits on the amount of trans fat permissible in PHVOs. Global recommendations are to replace PHVOs with polyunsaturated fatty acids (PUFAs) in order to optimise health benefits; however, little is known about the practicalities of implementation in low-income settings. The aim of this study was to examine the technical and economic feasibility of reducing trans fat in PHVOs and reformulating them using healthier fats. Thirteen semi-structured interviews were conducted with manufacturers and technical experts of PHVOs in India. Data were open-coded and organised according to key themes. Interviewees indicated that reformulating PHVOs was both economically and technically feasible provided that trans fat regulation takes account of the food technology challenges associated with product reformulation. However, there will be challenges in maintaining the physical properties that consumers prefer while reducing the trans fat in PHVOs. The availability of input oils was not seen to be a problem because of the low cost and high availability of imported palm oil, which was the input oil of choice for industry. Most interviewees were not concerned about the potential increase in saturated fat associated with increased use of palm oil and were not planning to use PUFAs in product reformulation. Interviewees indicated that many smaller manufacturers would not have sufficient capacity to reformulate products to reduce trans fat. Reformulating PHVOs to reduce trans fat in India is feasible; however, a collision course exists where the public health goal of replacing PHVOs with PUFAs is opposed to industry's goal of producing a cheap alternative product that meets consumer preferences.
Ensuring that product reformulation is done in a way that maximises health benefits will require shifts in knowledge and subsequent demand of products, decreased reliance on palm oil, investment in research and development and increased capacity for smaller manufacturers.
McAdams, Hiramie T [Carrollton, IL; Crawford, Robert W [Tucson, AZ; Hadder, Gerald R [Oak Ridge, TN; McNutt, Barry D [Arlington, VA
2006-03-28
Reformulated diesel fuels for automotive diesel engines which meet the requirements of ASTM 975-02 and provide significantly reduced emissions of nitrogen oxides (NO.sub.x) and particulate matter (PM) relative to commercially available diesel fuels.
FORTRAN Versions of Reformulated HFGMC Codes
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Aboudi, Jacob; Bednarcyk, Brett A.
2006-01-01
Several FORTRAN codes have been written to implement the reformulated version of the high-fidelity generalized method of cells (HFGMC). Various aspects of the HFGMC and its predecessors were described in several prior NASA Tech Briefs articles, the most recent being HFGMC Enhancement of MAC/GMC (LEW-17818-1), NASA Tech Briefs, Vol. 30, No. 3 (March 2006), page 34. The HFGMC is a mathematical model of micromechanics for simulating stress and strain responses of fiber/matrix and other composite materials. The HFGMC overcomes a major limitation of a prior version of the GMC by accounting for coupling of shear and normal stresses and thereby affords greater accuracy, albeit at a large computational cost. In the reformulation of the HFGMC, the issue of computational efficiency was addressed: as a result, codes that implement the reformulated HFGMC complete their calculations about 10 times as fast as do those that implement the HFGMC. The present FORTRAN implementations of the reformulated HFGMC were written to satisfy a need for compatibility with other FORTRAN programs used to analyze structures and composite materials. The FORTRAN implementations also afford capabilities, beyond those of the basic HFGMC, for modeling inelasticity, fiber/matrix debonding, and coupled thermal, mechanical, piezo, and electromagnetic effects.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
Giordano, James
2017-01-01
Research in neuroscience and neurotechnology (neuroS/T) is progressing at a rapid pace, with translational applications both in medicine and more widely in the social milieu. Current and projected neuroS/T research and its applications evoke a number of neuroethicolegal and social issues (NELSI). This paper defines inherent and derivative NELSI of current and near-term neuroS/T development and engagement, and provides an overview of our group's ongoing work to develop a systematized approach to addressing them. Our proposed operational neuroethical risk assessment and mitigation paradigm (ONRAMP) is presented, which entails querying, framing, and modeling patterns and trajectories of neuroS/T research and translational uses, and the NELSI generated by such advancements and their applications. Extant ethical methods are addressed, with suggestions toward possible revision or reformulation to meet the needs and exigencies fostered by neuroS/T and the resultant NELSI in multi-cultural contexts. The relevance and importance of multi-disciplinary expertise in focusing upon NELSI is discussed, and the need for neuroethics education toward cultivating such a cadre of expertise is emphasized. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fujita, Yasunori
2007-09-01
Reformulation of economics by physics has been carried out intensively to reveal many features of the asset market that classical economic theories missed. The present paper attempts to shed new light on this field by reformulating the international trade model using real option theory. Based on this stochastic dynamic model, we examine how fluctuation of the foreign exchange rate affects the welfare of the exporting country.
Legal Decision-Making by People with Aphasia: Critical Incidents for Speech Pathologists
ERIC Educational Resources Information Center
Ferguson, Alison; Duffield, Gemma; Worrall, Linda
2010-01-01
Background: The assessment and management of a person with aphasia for whom decision-making capacity is queried represents a highly complex clinical issue. In addition, there are few published guidelines and even fewer published accounts of empirical research to assist. Aims: The research presented in this paper aimed to identify the main issues…
Efficient Reformulation of HOTFGM: Heat Conduction with Variable Thermal Conductivity
NASA Technical Reports Server (NTRS)
Zhong, Yi; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)
2002-01-01
Functionally graded materials (FGMs) have become one of the major research topics in the mechanics of materials community during the past fifteen years. FGMs are heterogeneous materials, characterized by spatially variable microstructure, and thus spatially variable macroscopic properties, introduced to enhance material or structural performance. The spatially variable material properties make FGMs challenging to analyze. The review of the various techniques employed to analyze the thermodynamical response of FGMs reveals two distinct and fundamentally different computational strategies, called uncoupled macromechanical and coupled micromechanical approaches by some investigators. The uncoupled macromechanical approaches ignore the effect of microstructural gradation by employing specific spatial variations of material properties, which are either assumed or obtained by local homogenization, thereby resulting in erroneous results under certain circumstances. In contrast, the coupled approaches explicitly account for the micro-macrostructural interaction, albeit at a significantly higher computational cost. The higher-order theory for functionally graded materials (HOTFGM) developed by Aboudi et al. is representative of the coupled approach. However, despite its demonstrated utility in applications where micro-macrostructural coupling effects are important, the theory's full potential is yet to be realized because the original formulation of HOTFGM is computationally intensive. This, in turn, limits the size of problems that can be solved due to the large number of equations required to mimic realistic material microstructures. Therefore, a basis for an efficient reformulation of HOTFGM, referred to as user-friendly formulation, is developed herein, and subsequently employed in the construction of the efficient reformulation using the local/global conductivity matrix approach. 
In order to extend HOTFGM's range of applicability, spatially variable thermal conductivity capability at the local level is incorporated into the efficient reformulation. Analytical solutions to validate both the user-friendly and efficient reformulations are also developed. Volume discretization sensitivity and validation studies, as well as a practical application of the developed efficient reformulation, are subsequently carried out. The presented results illustrate the accuracy and implementability of both the user-friendly formulation and the efficient reformulation of HOTFGM.
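The local/global conductivity matrix idea can be illustrated with a far simpler problem than HOTFGM addresses: steady one-dimensional conduction through a bar whose conductivity varies element by element. The sketch below is only an analogy for the assembly step; the two-node element matrix, node numbering, and boundary handling are standard finite-element conventions, not the HOTFGM equations.

```python
import numpy as np

def assemble_global_conductivity(conductivities, lengths):
    """Assemble a global conductivity matrix from 1-D two-node elements.

    Each element contributes (k_e / L_e) * [[1, -1], [-1, 1]] to the
    global matrix -- the local/global assembly idea in miniature.
    """
    n = len(conductivities) + 1          # number of nodes
    K = np.zeros((n, n))
    for e, (k, L) in enumerate(zip(conductivities, lengths)):
        Ke = (k / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += Ke
    return K

def solve_steady_temperature(conductivities, lengths, T_left, T_right):
    """Solve steady 1-D conduction with prescribed end temperatures."""
    K = assemble_global_conductivity(conductivities, lengths)
    n = K.shape[0]
    T = np.zeros(n)
    T[0], T[-1] = T_left, T_right
    # Reduce the system to the interior (unknown) nodes.
    interior = slice(1, n - 1)
    rhs = -K[interior, 0] * T_left - K[interior, -1] * T_right
    T[interior] = np.linalg.solve(K[interior, interior], rhs)
    return T

# A graded bar: conductivity varies from element to element,
# a crude stand-in for spatially variable material properties.
T = solve_steady_temperature([1.0, 2.0, 4.0], [1.0, 1.0, 1.0], 100.0, 0.0)
```

Because the heat flux is constant at steady state, the temperature drop concentrates in the low-conductivity elements, which is exactly the kind of local variation a spatially-uniform-property model would miss.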
Extension of the Reformulated Gasoline Program to Maine’s Southern Counties Additional Resources
Supporting documents on EPA's decision about extending the Clean Air Act prohibition against the sale of conventional gasoline in reformulated gasoline areas to the southern Maine counties of York, Cumberland, and Sagadahoc.
Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction
Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong
2010-01-01
Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, and the quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% lower when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed up annotation and improve annotation consistency while maintaining high quality of the final annotations. PMID:21094696
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases have been growing at exponential rates, making their management, indexing, and retrieval very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users' hand-drawn partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity in sketches due to the lack of colors, textures, shading, and drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using an augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, de-colorized, and de-texturized images, which allows the CNN to effectively model visual contents presented to it in a variety of forms. The deep features extracted from the CNN allow retrieval of images using both sketches and full-color images as queries. We also evaluated the role of partial coloring or shading in sketches in improving retrieval performance. The proposed method is tested on two large datasets for sketch recognition and sketch-based image retrieval and achieved better classification and retrieval performance than many existing methods. PMID:28859140
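Once features are extracted, retrieval reduces to nearest-neighbor search in feature space. A minimal sketch using cosine similarity, with random vectors standing in for the CNN features (the feature extractor itself is omitted, and the library contents are invented):

```python
import numpy as np

def retrieve(query_feat, library_feats, k=5):
    """Return indices of the k library items most similar to the query.

    Cosine similarity over feature vectors; in the paper's setting the
    vectors would come from the fine-tuned CNN.
    """
    q = query_feat / np.linalg.norm(query_feat)
    lib = library_feats / np.linalg.norm(library_feats, axis=1, keepdims=True)
    sims = lib @ q                    # cosine similarity to every item
    return np.argsort(-sims)[:k]      # indices of the k largest

rng = np.random.default_rng(0)
library = rng.normal(size=(100, 64))              # stand-in feature library
query = library[7] + 0.01 * rng.normal(size=64)   # near-duplicate of item 7
top = retrieve(query, library, k=5)               # item 7 should rank first
```

The same search works whether the query features come from a sketch or a full-color image, which is the property the abstract highlights.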
Stylistic Reformulation: Theoretical Premises and Practical Applications.
ERIC Educational Resources Information Center
Schultz, Jean Marie
1994-01-01
Various aspects of writing style are discussed to propose concrete methods for improving students' performance. Topics covered include the relationship between syntactic and cognitive complexity, classroom techniques, and the reformulation technique as applied to student writing samples. (Contains 20 references.) (LB)
Motor fuels : issues related to reformulated gasoline, oxygenated fuels, and biofuels
DOT National Transportation Integrated Search
1996-06-01
This report by the General Accounting Office summarizes (1) the results of federal and other studies on the cost-effectiveness of using reformulated gasoline compared to other measures to control automotive emissions and compares the price estim...
Bio-TDS: bioscience query tool discovery system.
Gnimpieba, Etienne Z; VanDiermen, Menno S; Gustafson, Shayla M; Conn, Bill; Lushbough, Carol M
2017-01-04
Bioinformatics and computational biology play a critical role in bioscience and biomedical research. As researchers design their experimental projects, one major challenge is to find the most relevant bioinformatics toolkits that will lead to new knowledge discovery from their data. The Bio-TDS (Bioscience Query Tool Discovery Systems, http://biotds.org/) has been developed to assist researchers in retrieving the most applicable analytic tools by allowing them to formulate their questions as free text. The Bio-TDS is a flexible retrieval system that affords users from multiple bioscience domains (e.g. genomic, proteomic, bio-imaging) the ability to query over 12,000 analytic tool descriptions integrated from well-established community repositories. One of the primary components of the Bio-TDS is the ontology and natural language processing workflow for annotation, curation, query processing, and evaluation. The Bio-TDS's scientific impact was evaluated using sample questions posed by researchers retrieved from Biostars, a site focusing on biological data analysis. The Bio-TDS was compared to five similar bioscience analytic tool retrieval systems, with the Bio-TDS outperforming the others in terms of relevance and completeness. The Bio-TDS offers researchers the capacity to associate their bioscience question with the most relevant computational toolsets required for the data analysis in their knowledge discovery process. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
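Free-text tool retrieval can be sketched, far more crudely than the Bio-TDS ontology and NLP workflow, as token overlap between the question and each tool description. The tool names and descriptions below are invented for illustration:

```python
def tokenize(text):
    """Lowercase bag-of-words tokenization (deliberately simplistic)."""
    return set(text.lower().replace(",", " ").split())

def rank_tools(question, tool_descriptions):
    """Rank tools by Jaccard overlap between question and description."""
    q = tokenize(question)
    scored = []
    for name, desc in tool_descriptions.items():
        d = tokenize(desc)
        score = len(q & d) / len(q | d) if (q | d) else 0.0
        scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical tool catalog, standing in for the 12,000 real descriptions.
tools = {
    "AlignerX": "align short sequencing reads to a reference genome",
    "PeakCallerY": "call peaks from chip-seq alignment data",
    "ImageSegZ": "segment cells in microscopy images",
}
ranking = rank_tools("how do I align reads to a genome", tools)
```

A real system would add synonym expansion, ontology annotation, and ranking models on top; the point here is only the shape of the query-to-catalog matching step.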
Database Reports Over the Internet
NASA Technical Reports Server (NTRS)
Smith, Dean Lance
2002-01-01
Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hyper Text Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
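A minimal sketch of the query step, using Python's built-in sqlite3 in place of the servlet/Access stack, and an invented test_report schema (the original report forms and column names are not described in detail):

```python
import sqlite3

# Hypothetical schema standing in for the test-report database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_report (id INTEGER PRIMARY KEY, "
             "component TEXT, result TEXT)")
conn.executemany("INSERT INTO test_report (component, result) VALUES (?, ?)",
                 [("valve", "PASS"), ("pump", "FAIL"), ("seal", "PASS")])
conn.commit()

def fetch_report(component):
    """Run the parameterized SQL query a servlet would issue; the rows
    returned are what would be merged into the PDF form template."""
    cur = conn.execute(
        "SELECT component, result FROM test_report WHERE component = ?",
        (component,))
    return cur.fetchall()

rows = fetch_report("pump")
```

The `?` placeholders mirror what a servlet should do with user-supplied form values: parameterized queries keep the HTML-form input from being interpreted as SQL.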
The Armed Forces Casualty Assistance Readiness Enhancement System (CARES): Design for Flexibility
2006-06-01
Abbreviations from the report include SQL, Structured Query Language; SSA, Social Security Administration; USMA, United States Military Academy; VB, Visual Basic; and VBA, Visual Basic for Applications. ...internet portal, CARES Version 1.0 is an MS Excel spreadsheet application that contains a considerable number of Visual Basic for Applications (VBA
Towards ontology-driven navigation of the lipid bibliosphere
Baker, Christopher JO; Kanagasabai, Rajaraman; Ang, Wee Tiong; Veeramani, Anitha; Low, Hong-Sang; Wenk, Markus R
2008-01-01
Background The indexing of scientific literature and content is a relevant and contemporary requirement within life science information systems. Navigating information available in legacy formats continues to be a challenge both in enterprise and academic domains. The emergence of semantic web technologies and their fusion with artificial intelligence techniques has provided a new toolkit with which to address these data integration challenges. In the emerging field of lipidomics such navigation challenges are barriers to the translation of scientific results into actionable knowledge, critical to the treatment of diseases such as Alzheimer's syndrome, Mycobacterium infections and cancer. Results We present a literature-driven workflow involving document delivery and natural language processing steps generating tagged sentences containing lipid, protein and disease names, which are instantiated into a custom-designed lipid ontology. We describe the design challenges in capturing lipid nomenclature, the mandate of the ontology and its role as query model in the navigation of the lipid bibliosphere. We illustrate the extent of the description logic-based A-box query capability provided by the instantiated ontology using a graphical query composer to query sentences describing lipid-protein and lipid-disease correlations. Conclusion As scientists accept the need to readjust the manner in which we search for information and derive knowledge we illustrate a system that can constrain the literature explosion and knowledge navigation problems. Specifically we have focussed on solving this challenge for lipidomics researchers who have to deal with the lack of standardized vocabulary, differing classification schemes, and a wide array of synonyms before being able to derive scientific insights.
The use of the OWL-DL variant of the Web Ontology Language (OWL) and description logic reasoning is pivotal in this regard, providing the lipid scientist with advanced query access to the results of text mining algorithms instantiated into the ontology. The visual query paradigm assists in the adoption of this technology. PMID:18315858
Dugan, J. M.; Berrios, D. C.; Liu, X.; Kim, D. K.; Kaizer, H.; Fagan, L. M.
1999-01-01
Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models. PMID:10566457
NASA Astrophysics Data System (ADS)
Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Paramagul, Chintana; Helvie, Mark; Nees, Alexis V.
2012-03-01
We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from a reference library based on computer-extracted features that describe texture, width-to-height ratio, and posterior shadowing of a mass. Retrieval is performed with the k nearest neighbor (k-NN) method using a Euclidean distance similarity measure and the Rocchio relevance feedback algorithm (RRF). In this study, we evaluated the similarity between the query and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, similarities of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9 = most similar). A leave-one-out method was used in training. For each query mass, the 5 most similar masses were retrieved from the reference library using radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used one of the radiologists' ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100 images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86 ± 0.03 and 0.87 ± 0.03 on the training set and 0.91 ± 0.03 and 0.90 ± 0.03 on the test set, respectively. This study demonstrated that RRF improved the similarity of the retrieved masses.
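The retrieve-then-feedback loop can be sketched with the textbook Rocchio update. The feature dimensionality, library contents, and Rocchio weights below are illustrative assumptions; the paper trains its parameters on radiologists' similarity ratings rather than using fixed defaults:

```python
import numpy as np

def knn_retrieve(query, library, k=5):
    """Euclidean-distance k-NN retrieval over feature vectors."""
    d = np.linalg.norm(library - query, axis=1)
    return np.argsort(d)[:k]

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Standard Rocchio update: move the query toward relevant items
    and away from nonrelevant ones (textbook default weights)."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

rng = np.random.default_rng(1)
library = rng.normal(size=(50, 8))    # stand-in mass feature library
query = rng.normal(size=8)            # stand-in query mass features
first = knn_retrieve(query, library, k=5)
# Suppose a reader marks the first two retrieved masses as relevant:
q2 = rocchio(query, library[first[:2]], library[first[2:]])
second = knn_retrieve(q2, library, k=5)
```

In the study the "relevant" judgments come from radiologists' 9-point similarity ratings; here a hard relevant/nonrelevant split is used purely to keep the sketch short.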
Reformulations of the Yang-Mills theory toward quark confinement and mass gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondo, Kei-Ichi; Shinohara, Toru; Kato, Seikou
2016-01-22
We propose reformulations of the SU(N) Yang-Mills theory toward quark confinement and mass gap. In fact, we have given a new framework for reformulating the SU(N) Yang-Mills theory using new field variables. This includes the preceding works given by Cho, Faddeev and Niemi as a special case, called the maximal option in our reformulations. The advantage of our reformulations is that the original non-Abelian gauge field variables can be changed into new field variables such that one of them, called the restricted field, gives the dominant contribution to quark confinement in a gauge-independent way. Our reformulations can be combined with the SU(N) extension of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the Wilson loop operator to give a gauge-invariant definition of the magnetic monopole in the SU(N) Yang-Mills theory without the scalar field. In the so-called minimal option, especially, the restricted field is non-Abelian and involves the non-Abelian magnetic monopole with the stability group U(N−1). This suggests the non-Abelian dual superconductivity picture for quark confinement. This should be compared with the maximal option: the restricted field is Abelian and involves only the Abelian magnetic monopoles with the stability group U(1)^{N−1}, just like the Abelian projection. We give some applications of this reformulation, e.g., the stability of the homogeneous chromomagnetic condensation of the Savvidy type, the large N treatment for deriving the dimensional transmutation and understanding the mass gap, and also numerical simulations on a lattice, which are given by Dr. Shibata in a subsequent talk.
The Reformulated Model of Learned Helplessness: An Empirical Test.
ERIC Educational Resources Information Center
Rothblum, Esther D.; Green, Leon
Abramson, Seligman, and Teasdale's reformulated model of learned helplessness hypothesized that an attribution of causality intervenes between the perception of noncontingency and the expectation of future noncontingency. To test this model, relationships between attribution and performance under failure, success, and control conditions were…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
... Submitted to OMB for Review and Approval; Comment Request; Reformulated Gasoline Commingling Provisions... Protection Agency has submitted an information collection request (ICR), Reformulated Gasoline Commingling...: EPA would like to continue collecting notifications from gasoline retailers and wholesale purchaser...
This page summarizes the final rule determining that the Atlanta metro area is no longer a federal reformulated gasoline (RFG) covered area and that there is no requirement to use federal RFG in the Atlanta area.
Shan, Liran C; De Brún, Aoife; Henchion, Maeve; Li, Chenguang; Murrin, Celine; Wall, Patrick G; Monahan, Frank J
2017-09-01
Recent innovations in processed meats focus on healthier reformulations through reducing negative constituents and/or adding health-beneficial ingredients. This study explored the influence of base meat product (ham, sausages, beef burger), salt and/or fat content (reduced or not), healthy ingredients (omega 3, vitamin E, none), and price (average or higher than average) on consumers' purchase intention and quality judgement of processed meats. A survey (n=481) using conjoint methodology and cluster analysis was conducted. Price and base meat product were most important for consumers' purchase intention, followed by healthy ingredient and salt and/or fat content. In reformulation, consumers had a preference for ham and sausages over beef burgers, and for reduced salt and/or fat over no reduction. In relation to healthy ingredients, omega 3 was preferred over none, and vitamin E was least preferred. Healthier reformulations improved the perceived healthiness of processed meats. Cluster analyses identified three consumer segments with different product preferences. Copyright © 2017 Elsevier Ltd. All rights reserved.
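Conjoint results of this kind are commonly summarized by attribute importance: the range of an attribute's part-worth utilities divided by the sum of all ranges. A sketch with made-up part-worths (not the paper's estimates) that mirror the reported ordering:

```python
def attribute_importance(part_worths):
    """Relative importance of each attribute from conjoint part-worths:
    range of the attribute's utilities over the sum of all ranges."""
    ranges = {attr: max(u.values()) - min(u.values())
              for attr, u in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

# Hypothetical part-worth utilities per attribute level.
utilities = {
    "price": {"average": 0.6, "higher": -0.6},
    "base product": {"ham": 0.4, "sausage": 0.3, "burger": -0.7},
    "healthy ingredient": {"omega 3": 0.3, "none": 0.1, "vitamin E": -0.4},
    "salt/fat": {"reduced": 0.2, "not reduced": -0.2},
}
importance = attribute_importance(utilities)
```

With these invented numbers, price dominates and salt/fat matters least, the same rank ordering the survey reports.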
Reformulated gasolines: The experience of Mexico City Metropolitan Zone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, H.B.; Jardon, R.T.; Echeverria, R.S.
1997-12-31
The introduction of several reformulated gasolines into the Mexico City Metropolitan Zone (MCMZ) starting in mid-1986 is an example of using fuel composition to improve, in theory, the air quality. However, although these changes have resulted in an important reduction of airborne lead concentrations, a worse situation has been created. Ozone levels in the MCMZ atmosphere have risen sharply since the introduction of the first reformulated gasoline, reaching in the 1990s an annual average of 1,700 exceedances of the Mexican Ozone Air Quality Standard (0.11 ppm, not to be exceeded 1 hr a day, one day a year). The authors examine the trend in ozone air quality in the MCMZ in relation to the changes in gasoline composition since 1986. The authors also discuss the importance of performing an air quality impact analysis before the introduction of reformulated gasolines in countries where the local economy does not allow replacing the old car fleet not fitted with exhaust treatment devices.
Process for conversion of lignin to reformulated, partially oxygenated gasoline
Shabtai, Joseph S.; Zmierczak, Wlodzimierz W.; Chornet, Esteban
2001-01-09
A high-yield process for converting lignin into reformulated, partially oxygenated gasoline compositions of high quality is provided. The process is a two-stage catalytic reaction process that produces a reformulated, partially oxygenated gasoline product with a controlled amount of aromatics. In the first stage of the process, a lignin feed material is subjected to a base-catalyzed depolymerization reaction, followed by a selective hydrocracking reaction which utilizes a superacid catalyst to produce a high oxygen-content depolymerized lignin product mainly composed of alkylated phenols, alkylated alkoxyphenols, and alkylbenzenes. In the second stage of the process, the depolymerized lignin product is subjected to an exhaustive etherification reaction, optionally followed by a partial ring hydrogenation reaction, to produce a reformulated, partially oxygenated/etherified gasoline product, which includes a mixture of substituted phenyl/methyl ethers, cycloalkyl methyl ethers, C7-C10 alkylbenzenes, C6-C10 branched and multibranched paraffins, and alkylated and polyalkylated cycloalkanes.
Pearson-Stuttard, Jonathan; Kypridemos, Chris; Collins, Brendan; Mozaffarian, Dariush; Huang, Yue; Bandosz, Piotr; Capewell, Simon; Whitsel, Laurie; Wilde, Parke; O'Flaherty, Martin; Micha, Renata
2018-04-01
Sodium consumption is a modifiable risk factor for higher blood pressure (BP) and cardiovascular disease (CVD). The US Food and Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. We aimed to quantify the potential health and economic impact of this policy. We used a microsimulation approach of a close-to-reality synthetic population (US IMPACT Food Policy Model) to estimate CVD deaths and cases prevented or postponed, quality-adjusted life years (QALYs), and cost-effectiveness from 2017 to 2036 of 3 scenarios: (1) optimal, 100% compliance with 10-year reformulation targets; (2) modest, 50% compliance with 10-year reformulation targets; and (3) pessimistic, 100% compliance with 2-year reformulation targets, but with no further progress. We used the National Health and Nutrition Examination Survey and high-quality meta-analyses to inform model inputs. Costs included government costs to administer and monitor the policy, industry reformulation costs, and CVD-related healthcare, productivity, and informal care costs. Between 2017 and 2036, the optimal reformulation scenario achieving the FDA sodium reduction targets could prevent approximately 450,000 CVD cases (95% uncertainty interval: 240,000 to 740,000), gain approximately 2.1 million discounted QALYs (1.7 million to 2.4 million), and produce discounted cost savings (health savings minus policy costs) of approximately $41 billion ($14 billion to $81 billion). In the modest and pessimistic scenarios, health gains would be 1.1 million and 0.7 million QALYS, with savings of $19 billion and $12 billion, respectively. All the scenarios were estimated with more than 80% probability to be cost-effective (incremental cost/QALY < $100,000) by 2021 and to become cost-saving by 2031. 
Limitations include evaluating only diseases mediated through BP, while decreasing sodium consumption could also have beneficial effects on other health burdens such as gastric cancer. Further, the effect estimates in the model are based on interventional and prospective observational studies; they are therefore subject to biases and confounding that may also have influenced our model estimates. Implementing and achieving the FDA sodium reformulation targets could generate substantial health gains and net cost savings.
Reformulations of Yang–Mills theories with space–time tensor fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhi-Qiang, E-mail: gzhqedu@gmail.com
2016-01-15
We provide reformulations of Yang–Mills theories in terms of gauge-invariant metric-like variables in three and four dimensions. The reformulations are used to analyze the dimension-two gluon condensate and give gauge-invariant descriptions of gluon polarization. In three dimensions, we obtain a non-zero dimension-two gluon condensate by a one-loop computation, whose value is similar to the square of the photon mass in the Schwinger model. In four dimensions, we obtain a Lagrangian with the dual property, which shares a similar but distinct property with the dual superconductor scenario. We also discuss the effectiveness of the one-loop approximation.
Reformulation of the relativistic conversion between coordinate time and atomic time
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1975-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar-system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring 'earth-bound' proper time, or atomic time). After an interpretation in terms of relatively well-known concepts, this simplified formulation, which has a rate accuracy of about one part in 10^15, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques: portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
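The conversion in question is, to first order, the standard proper-time/coordinate-time rate relation (the textbook form, not necessarily Thomas's exact reformulation):

```latex
\frac{d\tau}{dt} \;\approx\; 1 \;-\; \frac{U}{c^{2}} \;-\; \frac{v^{2}}{2c^{2}}
```

where \(\tau\) is the proper (atomic) time of an earth-bound clock, \(t\) the barycentric coordinate time, \(U\) the Newtonian gravitational potential at the clock, and \(v\) its barycentric velocity. For Earth, \(U/c^{2} + v^{2}/2c^{2} \approx 1.5\times10^{-8}\), and the neglected higher-order terms are roughly the square of that, consistent with a rate accuracy near one part in \(10^{15}\).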
Acquisition of Expert/Non-Expert Vocabulary from Reformulations.
Antoine, Edwige; Grabar, Natalia
2017-01-01
Technical medical terms are difficult for non-experts to understand correctly. A vocabulary associating technical terms with layman expressions can help increase the readability and understanding of technical texts. The purpose of our work is to build this kind of vocabulary. We propose to exploit the notion of reformulation using two methods: extraction of abbreviations and extraction of reformulations signalled by specific markers. The segments associated by these methods are aligned with medical terminologies. Our results cover over 9,000 medical terms, with extraction precision between 0.24 and 0.98. The results are analyzed and compared with existing work.
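The abbreviation-extraction step can be illustrated with a minimal sketch. The pattern and heuristic below (one preceding word per abbreviation letter, with a first-letter check) are assumptions for illustration only, not the authors' actual extractor; production systems use more careful alignment.

```python
import re

def extract_abbreviations(text):
    """Extract (long form, abbreviation) pairs of the form 'long form (ABBR)'.

    Naive heuristic: take as many preceding words as the abbreviation has
    letters, then check that the first letters agree. A hypothetical sketch,
    not the method from the paper.
    """
    pairs = []
    for match in re.finditer(r'\(([A-Z]{2,10})\)', text):
        abbr = match.group(1)
        words = text[:match.start()].split()
        candidate = words[-len(abbr):]  # one word per letter, naively
        if candidate and candidate[0][0].lower() == abbr[0].lower():
            pairs.append((' '.join(candidate), abbr))
    return pairs

print(extract_abbreviations(
    "Patients with chronic obstructive pulmonary disease (COPD) were enrolled."))
```

The extracted long forms would then be aligned against medical terminologies, as the abstract describes.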
40 CFR 80.75 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... reformulated gasoline or RBOB produced or imported during the following time periods: (i) The first quarterly... to gasoline produced or imported during 1994 shall be included in the first quarterly report in 1995...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.75 Reporting requirements. Any refiner or...
Uncritical Educational Theory in the Guise of Progressive Social Science?
ERIC Educational Resources Information Center
Lingelbach, Karl Christoph
1988-01-01
Examines H. E. Tenorth's critique of research on the Nazi educational system and states that Herman Nohl's historio-educational concept is being reformulated within Tenorth's argumentational framework. Discusses pedagogical and sociological deficits in these efforts of "reformulation" as well as difficulties which arise when applying…
Creativity, Problem Solving, and Solution Set Sightedness: Radically Reformulating BVSR
ERIC Educational Resources Information Center
Simonton, Dean Keith
2012-01-01
Too often, psychological debates become polarized into dichotomous positions. Such polarization may have occurred with respect to Campbell's (1960) blind variation and selective retention (BVSR) theory of creativity. To resolve this unnecessary controversy, BVSR was radically reformulated with respect to creative problem solving. The reformulation…
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.
Combet, Emilie; Vlassopoulos, Antonis; Mölenberg, Famke; Gressier, Mathilde; Privet, Lisa; Wratten, Craig; Sharif, Sahar; Vieux, Florent; Lehmann, Undine; Masset, Gabriel
2017-01-01
Nutrient profiling ranks foods based on their nutrient composition, with applications in multiple aspects of food policy. We tested the capacity of a category-specific model developed for product reformulation to improve the average nutrient content of foods, using five national food composition datasets (UK, US, China, Brazil, France). Products (n = 7183) were split into 35 categories based on the Nestlé Nutritional Profiling Systems (NNPS) and were then classified as NNPS ‘Pass’ if all nutrient targets were met (energy (E), total fat (TF), saturated fat (SFA), sodium (Na), added sugars (AS), protein, calcium). In a modelling scenario, all NNPS Fail products were ‘reformulated’ to meet NNPS standards. Overall, a third (36%) of all products achieved the NNPS standard/pass (inter-country and inter-category range: 32%–40%; 5%–72%, respectively), with most products requiring reformulation in two or more nutrients. The most common nutrients to require reformulation were SFA (22%–44%) and TF (23%–42%). Modelled compliance with NNPS standards could reduce the average content of SFA, Na and AS (10%, 8% and 6%, respectively) at the food supply level. Despite the good potential to stimulate reformulation across the five countries, the study highlights the need for better data quality and granularity of food composition databases. PMID:28430118
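The pass/fail classification described above can be sketched as follows. The threshold values are hypothetical placeholders, not the real NNPS targets (which are category-specific, and which treat protein and calcium as minimums rather than maximums, a detail omitted here).

```python
def nnps_pass(product, targets):
    """A product 'passes' only if every nutrient meets its category target.

    `targets` maps nutrient name -> maximum allowed amount per serving.
    Hypothetical sketch of a category-specific nutrient profiling check.
    """
    return all(product.get(n, 0) <= limit for n, limit in targets.items())

# Hypothetical targets for one food category (not the real NNPS values).
targets = {"energy_kcal": 200, "sat_fat_g": 3.0, "sodium_mg": 300, "added_sugar_g": 10}

snack = {"energy_kcal": 180, "sat_fat_g": 4.2, "sodium_mg": 250, "added_sugar_g": 8}
print(nnps_pass(snack, targets))  # fails on saturated fat
```

In the modelling scenario of the abstract, each failing product would be "reformulated" by lowering the offending nutrients to their targets before recomputing supply-level averages.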
Performance Evaluation of Existing Wedgewater and Vacuum-Assisted Bed Dewatering Systems
1992-01-01
prior to dewatering by the wedgewater method. Of the 20 satisfied users, 11 preferred aerobic digestion, two employed anaerobic digestion, and seven...did not further process their sludge. Of the seven dissatisfied users, four used aerobic digestion and three employed anaerobic digestion. ...queried, 11 employed aerobic digestion, two employed anaerobic digestion, and three did not process their sludge. Eight dissatisfied users employed
Comparing User-Assisted and Automatic Query Translation
2005-01-01
do their strategies differ from those used in monolingual applications? How do individual differences in subject knowledge, language skills, search...Translation Ideally, we would prefer to provide the searcher with English definitions for each German translation alternative. Dictionaries with these...keeping with the common usage in monolingual contexts [1], we call this approach "key word in context" or "KWIC." For each German translation of an
Demand, Supply, and Price Outlook for Reformulated Motor Gasoline 1995
1994-01-01
Provisions of the Clean Air Act Amendments of 1990 designed to reduce ground-level ozone will increase the demand for reformulated motor gasoline in a number of U.S. metropolitan areas. This article discusses the effects of the new regulations on the motor gasoline market and the refining industry.
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
An Approach to Revision and Evaluation of Student Writing.
ERIC Educational Resources Information Center
Duke, Charles R.
An approach to evaluating student writing that emphasizes reformulation and deemphasizes grades teaches students that reworking their writing is a necessary and acceptable part of the writing process. Reformulation is divided into rewriting, revising, and editing. The instructor diagnoses student papers to determine significant problems on a…
USDA-ARS?s Scientific Manuscript database
Reformulation of calcium chloride cover brine for cucumber fermentation was explored as a means to minimize the incidence of bloater defect. This study particularly focused on cover brine supplementation with calcium hydroxide, sodium chloride (NaCl), and acids to enhance buffer capacity, inhibit the...
Increasing Scalability of Researcher Network Extraction from the Web
NASA Astrophysics Data System (ADS)
Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru
Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to obtain the co-occurrences of the names of two researchers and calculates the strength of the relation between them. It then labels the relation by analyzing the Web pages in which the two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, previous approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3,000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network compared to former methods.
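The hit-count step can be sketched minimally. The overlap (Simpson) coefficient below is one common choice for relation strength in this line of work, and the pruning rule in `worth_querying` is a hypothetical stand-in for the paper's query filter, whose exact form the abstract does not specify.

```python
def relation_strength(hits_a, hits_b, hits_ab):
    """Overlap (Simpson) coefficient from search-engine hit counts:
    co-occurrence count divided by the smaller individual count."""
    denom = min(hits_a, hits_b)
    return hits_ab / denom if denom else 0.0

def worth_querying(hits_a, hits_b, threshold=10):
    """Hypothetical query filter: skip the expensive 'A AND B' query when
    either name is too rare for a reliable strength estimate."""
    return min(hits_a, hits_b) >= threshold

print(relation_strength(12000, 800, 240))  # 0.3
```

Filtering pairs before issuing the conjunction query is what keeps the total number of search-engine requests manageable for a ~3,000-node network.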
Recommending images of user interests from the biomedical literature
NASA Astrophysics Data System (ADS)
Clukey, Steven; Xu, Songhua
2013-03-01
Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of their interests. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time and energy consuming. A better system would be one that can automatically deliver relevant content to a researcher without having the end user manually manifest one's search intent and interests via search queries. Such a computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to the person's interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast amount of the biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.
40 CFR 80.74 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...; (8) In the case of butane blended into reformulated gasoline or RBOB under § 80.82, documentation of: (i) The volume of butane added; (ii) The volume of reformulated gasoline or RBOB both prior to and subsequent to the butane blending; (iii) The purity and properties of the butane specified in § 80.82(c) and...
Talking It through: Two French Immersion Learners' Response to Reformulation
ERIC Educational Resources Information Center
Swain, Merrill; Lapkin, Sharon
2002-01-01
This article documents the importance of collaborative dialogue as part of the process of second language learning. The stimulus for the dialogue we discuss in this article was a reformulation of a story written collaboratively in French by Nina and Dara, two adolescent French immersion students. A sociocultural theoretical perspective informs the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... operations to reformulate their products until October 21, 2012. SUPPLEMENTARY INFORMATION: The Organic Foods... processors are currently using amidated, non-organic pectin in their products. The industry indicated that these processors would need time to reformulate these products using either non-amidated, non-organic...
40 CFR 80.76 - Registration of refiners, importers or oxygenate blenders.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.76... required for any refiner and importer that produces or imports any reformulated gasoline or RBOB, and any... November 1, 1994, or not later than three months in advance of the first date that such person will produce...
40 CFR 80.76 - Registration of refiners, importers or oxygenate blenders.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.76... required for any refiner and importer that produces or imports any reformulated gasoline or RBOB, and any... November 1, 1994, or not later than three months in advance of the first date that such person will produce...
Reformulating Testing to Measure Thinking and Learning. Technical Report No. 6898.
ERIC Educational Resources Information Center
Collins, Allan
This paper discusses systemic problems with testing and outlines two scenarios for reformulating testing based on intelligent tutoring systems. Five desiderata are provided to underpin the type of testing proposed: (1) tests should emphasize learning and thinking; (2) tests should require generation as well as selection; (3) tests should be…
Life Event Types and Attributional Styles as Predictors of Depression in the Elderly.
ERIC Educational Resources Information Center
Patrick, Linda F.; Moore, Janet S.
The reformulated learned helplessness model for the prediction of depression has been investigated extensively in young adults. Results have linked attributions made to undesirable, controllable events to depression in this age group. This reformulated model was investigated in 97 elderly women and was contrasted to the original learned…
Stereotyping in the Representation of Narrative Texts through Visual Reformulation.
ERIC Educational Resources Information Center
Porto, Melina
2003-01-01
Investigated the process of stereotyping in the representation of the content of narrative texts through visual reformulations. Subjects were Argentine college students enrolled in an English course at a university in Argentina. Reveals students' inability to transcend their cultural biases and points to an urgent need to address stereotypes in the…
Improving e-book access via a library-developed full-text search tool.
Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N
2007-01-01
This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.
Pigat, S; Connolly, A; Cushen, M; Cullen, M; O'Mahony, C
2018-02-19
This project quantified the impact that voluntary reformulation efforts of the food industry had on the Irish population's nutrient intake. Nutrient composition data on reformulated products were collected from 14 major food companies for two years, 2005 and 2012. Probabilistic intake assessments were performed using the Irish national food consumption surveys as dietary intake data. The nutrient data were weighted by market shares, replacing existing food composition data for these products. The reformulation efforts assessed significantly reduced mean energy intakes by up to 12 kcal/d (adults), 15 kcal/d (teens), 19 kcal/d (children) and 9 kcal/d (pre-schoolers). Mean daily fat intakes were reduced by up to 1.3 g/d, 1.3 g/d, 0.9 g/d and 0.6 g/d, saturated fat intakes by up to 1.7 g/d, 2.3 g/d, 1.8 g/d and 1 g/d, sugar intakes by up to 1 g/d, 2 g/d, 3.5 g/d and 1 g/d and sodium intakes by up to 0.6 g/d, 0.5 g/d, 0.2 g/d, 0.3 g/d for adults, teenagers, children and pre-school children, respectively. This model enables assessment of the impact of industry reformulation on Irish consumers' nutrient intakes, using consumption, food composition and market share data.
Huang, Yue; Bandosz, Piotr; Capewell, Simon; Wilde, Parke
2018-01-01
Background Sodium consumption is a modifiable risk factor for higher blood pressure (BP) and cardiovascular disease (CVD). The US Food and Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. We aimed to quantify the potential health and economic impact of this policy. Methods and findings We used a microsimulation approach of a close-to-reality synthetic population (US IMPACT Food Policy Model) to estimate CVD deaths and cases prevented or postponed, quality-adjusted life years (QALYs), and cost-effectiveness from 2017 to 2036 of 3 scenarios: (1) optimal, 100% compliance with 10-year reformulation targets; (2) modest, 50% compliance with 10-year reformulation targets; and (3) pessimistic, 100% compliance with 2-year reformulation targets, but with no further progress. We used the National Health and Nutrition Examination Survey and high-quality meta-analyses to inform model inputs. Costs included government costs to administer and monitor the policy, industry reformulation costs, and CVD-related healthcare, productivity, and informal care costs. Between 2017 and 2036, the optimal reformulation scenario achieving the FDA sodium reduction targets could prevent approximately 450,000 CVD cases (95% uncertainty interval: 240,000 to 740,000), gain approximately 2.1 million discounted QALYs (1.7 million to 2.4 million), and produce discounted cost savings (health savings minus policy costs) of approximately $41 billion ($14 billion to $81 billion). In the modest and pessimistic scenarios, health gains would be 1.1 million and 0.7 million QALYS, with savings of $19 billion and $12 billion, respectively. All the scenarios were estimated with more than 80% probability to be cost-effective (incremental cost/QALY < $100,000) by 2021 and to become cost-saving by 2031. 
Limitations include evaluating only diseases mediated through BP, while decreasing sodium consumption could have beneficial effects upon other health burdens such as gastric cancer. Further, the effect estimates in the model are based on interventional and prospective observational studies. They are therefore subject to biases and confounding that may have influenced also our model estimates. Conclusions Implementing and achieving the FDA sodium reformulation targets could generate substantial health gains and net cost savings. PMID:29634725
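The cost-effectiveness arithmetic behind the abstract's conclusion can be shown with a toy calculation using its headline figures (a $41 billion net saving and 2.1 million QALYs gained in the optimal scenario); this is a back-of-envelope sketch, not the microsimulation itself.

```python
def icer(net_cost, qalys_gained):
    """Incremental cost-effectiveness ratio: net policy cost per QALY gained.
    A negative value means the policy is cost-saving (dominant)."""
    return net_cost / qalys_gained

# Optimal scenario from the abstract: $41B net savings, 2.1M discounted QALYs.
ratio = icer(-41e9, 2.1e6)
print(ratio)  # negative -> cost-saving, far below the $100,000/QALY threshold
```

A policy with a negative ICER is trivially "cost-effective" at any willingness-to-pay threshold, which is why all three scenarios in the abstract clear the $100,000/QALY bar.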
Generic entry, reformulations and promotion of SSRIs in the US.
Huskamp, Haiden A; Donohue, Julie M; Koss, Catherine; Berndt, Ernst R; Frank, Richard G
2008-01-01
Previous research has shown that a manufacturer's promotional strategy for a brand name drug is typically affected by generic entry. However, little is known about how newer strategies to extend patent life, including introducing product reformulations or obtaining approval to market a drug for additional clinical indications, influence promotion. To examine the relationships among promotional expenditures, generic entry, reformulation entry and new indication approval. We used quarterly data on national product-level promotional spending (including expenditures for physician detailing and direct-to-consumer advertising [DTCA], and the retail value of free samples distributed in physician offices) for selective serotonin reuptake inhibitors (SSRIs) over the period 1997-2004. We estimated econometric models of detailing, DTCA and total quarterly promotional expenditures as a function of the timing of generic entry, entry of new product formulations and US FDA approval of new clinical indications for existing medications in the SSRI class. Expenditures by pharmaceutical manufacturers for promotion of antidepressant medications were the main outcome measure. Over the period 1997-2004, there was considerable variation in the composition of promotional expenditures across the SSRIs. Promotional expenditures for the original brand molecule decreased dramatically when a reformulation of the molecule was introduced. Promotional spending (both total and detailing alone) for a specific molecule was generally lower after generic entry than before, although the effect of generic entry on promotional spending appears to be closely linked with the choice of product reformulation strategy pursued by the manufacturer. Detailing expenditures for Paxil increased after the manufacturer received FDA approval to market the drug for generalized anxiety disorder (GAD), while the likelihood of DTCA outlays for the drug was not changed.
In contrast, FDA approval to market Paxil and Zoloft for social anxiety disorder (SAD) did not affect the manufacturers' detailing expenditures but did result in a greater likelihood of DTCA outlays. The introduction of new product formulations appears to be a common strategy for attempting to extend market exclusivity for medications facing impending generic entry. Manufacturers who introduced a reformulation before generic entry shifted most promotion dollars from the original brand to the reformulation long before generic entry, and in some cases manufacturers appeared to target a particular promotion type for a given indication. Given the significant impact that pharmaceutical promotion has on demand for prescription drugs in the US, these findings have important implications for prescription drug spending and public health.
Semantics Enabled Queries in EuroGEOSS: a Discovery Augmentation Approach
NASA Astrophysics Data System (ADS)
Santoro, M.; Mazzetti, P.; Fugazza, C.; Nativi, S.; Craglia, M.
2010-12-01
One of the main challenges in Earth Science Informatics is to build interoperability frameworks which allow users to discover, evaluate, and use information from different scientific domains. This needs to address multidisciplinary interoperability challenges concerning both technological and scientific aspects. From the technological point of view, it is necessary to provide a set of special interoperability arrangements in order to develop flexible frameworks that allow a variety of loosely-coupled services to interact with each other. From a scientific point of view, it is necessary to document clearly the theoretical and methodological assumptions underpinning applications in different scientific domains, and develop cross-domain ontologies to facilitate interdisciplinary dialogue and understanding. In this presentation we discuss a brokering approach that extends the traditional Service Oriented Architecture (SOA) adopted by most Spatial Data Infrastructures (SDIs) to provide the necessary special interoperability arrangements. In the EC-funded EuroGEOSS (A European approach to GEOSS) project, we distinguish among three possible functional brokering components: discovery, access and semantics brokers. This presentation focuses on the semantics broker, the Discovery Augmentation Component (DAC), which was specifically developed to address the three thematic areas covered by the EuroGEOSS project: biodiversity, forestry and drought. The EuroGEOSS DAC federates both semantic services (e.g. SKOS repositories) and ISO-compliant geospatial catalog services. The DAC can be queried using common geospatial constraints (i.e. what, where, when, etc.). Two different augmented discovery styles are supported: (a) automatic query expansion; (b) user-assisted query expansion. In the first case, the main discovery steps are: (i) the query keywords (the "what" constraint) are expanded with related concepts/terms retrieved from the set of federated semantic services; by default, the expansion includes multilingual equivalents; (ii) the resulting queries are submitted to the federated catalog services; (iii) the DAC performs a "smart" aggregation of the query results and returns them to the client. In the second case, the main discovery steps are: (i) the user browses the federated semantic repositories and selects the concepts/terms of interest; (ii) the DAC creates the set of geospatial queries based on the selected concepts/terms and submits them to the federated catalog services; (iii) the DAC performs a "smart" aggregation of the query results and returns them to the client. A Graphical User Interface (GUI) was also developed for testing and interacting with the DAC. The entire brokering framework is deployed in the context of the EuroGEOSS infrastructure and is used in two GEOSS AIP-3 use scenarios: the "e-Habitat Use Scenario" for the Biodiversity and Climate Change topic, and the "Comprehensive Drought Index Use Scenario" for the Water/Drought topic.
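The automatic query-expansion step (each "what" keyword OR-ed with concepts from semantic services, including multilingual equivalents) can be sketched minimally. The `related` dictionary stands in for a live SKOS lookup and the query syntax is illustrative, not the DAC's actual protocol.

```python
def expand_query(keywords, related):
    """Automatic query expansion: OR each keyword with related concepts
    (here a plain dict stands in for federated SKOS repositories),
    then AND the expanded clauses together."""
    clauses = []
    for kw in keywords:
        terms = [kw] + related.get(kw, [])
        clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

# Hypothetical SKOS-style related terms, including a multilingual label.
skos = {"drought": ["aridity", "water scarcity", "sécheresse"]}
print(expand_query(["drought", "index"], skos))
```

The expanded query would then be submitted to each federated catalog service, with the broker aggregating the result sets.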
Active learning reduces annotation time for clinical concept extraction.
Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony
2017-10-01
To investigate: (1) the annotation time savings by various active learning query strategies compared to supervised learning and a random sampling baseline, and (2) the benefits of active learning-assisted pre-annotations in accelerating the manual annotation process compared to de novo annotation. There are 73 and 120 discharge summary reports provided by Beth Israel institute in the train and test sets of the concept extraction task in the i2b2/VA 2010 challenge, respectively. The 73 reports were used in user study experiments for manual annotation. First, all sequences within the 73 reports were manually annotated from scratch. Next, active learning models were built to generate pre-annotations for the sequences selected by a query strategy. The annotation/reviewing time per sequence was recorded. The 120 test reports were used to measure the effectiveness of the active learning models. When annotating from scratch, active learning reduced the annotation time up to 35% and 28% compared to a fully supervised approach and a random sampling baseline, respectively. Reviewing active learning-assisted pre-annotations resulted in 20% further reduction of the annotation time when compared to de novo annotation. The number of concepts that require manual annotation is a good indicator of the annotation time for various active learning approaches as demonstrated by high correlation between time rate and concept annotation rate. Active learning has a key role in reducing the time required to manually annotate domain concepts from clinical free text, either when annotating from scratch or reviewing active learning-assisted pre-annotations. Copyright © 2017 Elsevier B.V. All rights reserved.
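A typical active learning query strategy of the kind compared in such studies is least-confidence sampling, sketched below; the abstract does not name the study's exact strategies, so this is a generic illustration, not the authors' method.

```python
def least_confidence(probabilities):
    """Least-confidence query strategy: rank unlabelled sequences by
    (1 - max class probability) and return the most uncertain one.
    `probabilities` maps sequence id -> list of class probabilities."""
    return max(probabilities, key=lambda s: 1 - max(probabilities[s]))

pool = {
    "seq1": [0.9, 0.05, 0.05],   # model is confident
    "seq2": [0.4, 0.35, 0.25],   # model is uncertain -> annotate this next
}
print(least_confidence(pool))  # seq2
```

Routing annotator effort to the sequences the model is least sure about is what produces the time savings over random sampling reported above; the model's predictions on those sequences can also serve as pre-annotations for reviewing.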
What Friedrich Nietzsche Cannot Stand about Education: Toward a Pedagogy of Self-Reformulation.
ERIC Educational Resources Information Center
Bingham, Charles
2001-01-01
Examines Nietzsche's rejection of mass education, arguing that it was based on his desire for education to be more self-reformulative than he thought possible, and concluding that education in schools is beneficial because it can foster radical forms of selfhood. This process can begin by listening to Nietzsche's philosophy while ignoring his…
Addiction Motivation Reformulated: An Affective Processing Model of Negative Reinforcement
ERIC Educational Resources Information Center
Baker, Timothy B.; Piper, Megan E.; McCarthy, Danielle E.; Majeskie, Matthew R.; Fiore, Michael C.
2004-01-01
This article offers a reformulation of the negative reinforcement model of drug addiction and proposes that the escape and avoidance of negative affect is the prepotent motive for addictive drug use. The authors posit that negative affect is the motivational core of the withdrawal syndrome and argue that, through repeated cycles of drug use and…
Exploring the Role of Reformulations and a Model Text in EFL Students' Writing Performance
ERIC Educational Resources Information Center
Yang, Luxin; Zhang, Ling
2010-01-01
This study examined the effectiveness of reformulation and model text in a three-stage writing task (composing-comparison-revising) in an EFL writing class in a Beijing university. The study documented 10 university students' writing performance from the composing (Stage 1) and comparing (Stage 2, where students compare their own text to a…
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
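Fuzzy c-means, which the abstract names as a special case of these reformulation-based algorithms, alternates soft membership updates with membership-weighted prototype updates. A hedged one-dimensional sketch of that special case (not the authors' ordered weighted variant, and the initialization is a simplification):

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means sketch on 1-D data: alternate soft membership
    updates and membership-weighted prototype updates, i.e. a coordinate
    descent on the fuzzy clustering objective."""
    # Deterministic init: spread prototypes across the sorted data.
    protos = sorted(points)[:: max(1, len(points) // c)][:c]
    for _ in range(iters):
        u = []
        for x in points:
            d = [max(abs(x - v), 1e-12) for v in protos]
            inv = [dd ** (-2.0 / (m - 1.0)) for dd in d]  # u_ij proportional to d_ij^(-2/(m-1))
            s = sum(inv)
            u.append([w / s for w in inv])
        # Prototype update: membership-weighted mean of all points.
        protos = [
            sum(u[i][j] ** m * points[i] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for j in range(c)
        ]
    return sorted(protos)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(fuzzy_cmeans(data))  # two prototypes, near 0.1 and 5.1
```

The paper's contribution is to derive a whole family of such update rules by minimizing reformulation functions built from ordered weighted aggregation operators; the sketch only shows the fuzzy c-means endpoint of that family.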
Net Improvement of Correct Answers to Therapy Questions After PubMed Searches: Pre/Post Comparison
Keepanasseril, Arun
2013-01-01
Background: Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. Objective: To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). Methods: We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. Results: We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. Conclusions: PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.
PMID:24217329
Net improvement of correct answers to therapy questions after pubmed searches: pre/post comparison.
McKibbon, Kathleen Ann; Lokker, Cynthia; Keepanasseril, Arun; Wilczynski, Nancy L; Haynes, R Brian
2013-11-08
Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.
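The Clinical Queries "narrow therapy" filter used in this study is a published search filter that can also be applied programmatically through NCBI's E-utilities. A sketch that builds an esearch URL combining a clinical question with that filter (the question terms are illustrative, and the exact filter string should be checked against NCBI's current documentation):

```python
from urllib.parse import urlencode

# PubMed Clinical Queries "narrow therapy" filter (Haynes et al.), as
# published by NCBI; it is ANDed with the clinical question's terms.
NARROW_THERAPY = (
    "(randomized controlled trial[Publication Type] "
    "OR (randomized[Title/Abstract] AND controlled[Title/Abstract] "
    "AND trial[Title/Abstract]))"
)

def clinical_query_url(question_terms, retmax=20):
    """Build an NCBI E-utilities esearch URL that restricts results
    to likely high-quality therapy studies."""
    term = f"({question_terms}) AND {NARROW_THERAPY}"
    params = urlencode({"db": "pubmed", "term": term, "retmax": retmax})
    return "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

url = clinical_query_url("warfarin atrial fibrillation stroke prevention")
print(url)
# Fetching this URL (e.g. with urllib.request) returns matching PMIDs as XML.
```

This mirrors what the study's purpose-built search screens did behind the scenes: the participant typed the question terms, and the filter narrowed the result set by study quality.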
Information retrieval for a document writing assistance program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corral, M.L.; Simon, A.; Julien, C.
This paper presents an Information Retrieval mechanism to facilitate the writing of technical documents in the space domain. To address the need for document exchange between partners in a given project, documents are standardized. The writing of a new document requires the re-use of existing documents or parts thereof. These parts can be identified by "tagging" the logical structure of documents and restored by means of a purpose-built Information Retrieval System (I.R.S.). The I.R.S. implemented in our writing assistance tool uses natural language queries and is based on a statistical linguistic approach which is enhanced by the use of a document structure module.
Chilcoat, Howard D; Coplan, Paul M; Harikrishnan, Venkatesh; Alexander, Louis
2016-08-01
Doctor-shopping (obtaining prescriptions from multiple prescribers/pharmacies) for opioid analgesics produces a supply for diversion and abuse, and represents a major public health issue. An open cohort study assessed changes in doctor-shopping in the U.S. for a brand extended release (ER) oxycodone product (OxyContin) and comparator opioids before (July 2009 to June 2010) versus after (January 2011 to June 2013) introduction of reformulated brand ER oxycodone with abuse-deterrent properties, using IMS LRx longitudinal data covering >150 million patients and 65% of retail U.S. prescriptions. After its reformulation, the rate of doctor-shopping decreased 50% (for 2+ prescribers/3+ pharmacies) for brand ER oxycodone, but not for comparators. The largest decreases in rates occurred among young adults (73%), those paying with cash (61%) and those receiving the highest available dose (62%), with a 90% decrease when stratifying by all three characteristics. The magnitude of doctor-shopping reductions increased with increasing number of prescribers/pharmacies (e.g., 75% reduction for ≥2 prescribers/≥4 pharmacies). The rate of doctor-shopping for brand ER oxycodone decreased substantially after its reformulation, which did not occur for other prescription opioids. The largest reductions in doctor-shopping occurred with characteristics associated with higher abuse risk such as youth, cash payment and high dose, and with more specific thresholds of doctor-shopping. A higher prescriber and/or pharmacy threshold also increased the magnitude of the decrease, suggesting that it better captured the effect of the reformulation on actual doctor-shoppers. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
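The doctor-shopping measure used here (patients crossing prescriber and pharmacy thresholds within a study window) can be computed directly from longitudinal prescription records. A minimal sketch with made-up records, using the study's 2+ prescribers / 3+ pharmacies thresholds:

```python
def doctor_shopping_rate(prescriptions, min_prescribers=2, min_pharmacies=3):
    """Fraction of patients whose prescriptions involve at least
    min_prescribers distinct prescribers AND min_pharmacies distinct
    pharmacies. Records are (patient, prescriber, pharmacy) tuples."""
    seen = {}
    for patient, prescriber, pharmacy in prescriptions:
        docs, stores = seen.setdefault(patient, (set(), set()))
        docs.add(prescriber)
        stores.add(pharmacy)
    shoppers = sum(
        1 for docs, stores in seen.values()
        if len(docs) >= min_prescribers and len(stores) >= min_pharmacies
    )
    return shoppers / len(seen)

# Illustrative records: p1 sees 2 prescribers and 3 pharmacies; p2 does not.
rx = [
    ("p1", "dr_a", "ph_1"), ("p1", "dr_b", "ph_2"), ("p1", "dr_b", "ph_3"),
    ("p2", "dr_a", "ph_1"), ("p2", "dr_a", "ph_1"),
]
print(doctor_shopping_rate(rx))  # 0.5
```

Raising the thresholds (e.g. ≥2 prescribers/≥4 pharmacies, as in the study) simply makes the criterion stricter, which is why the measured reformulation effect grows with the threshold.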
Reformulating Non-Monotonic Theories for Inference and Updating
NASA Technical Reports Server (NTRS)
Grosof, Benjamin N.
1992-01-01
We aim to help build programs that do large-scale, expressive non-monotonic reasoning (NMR): especially, 'learning agents' that store, and revise, a body of conclusions while continually acquiring new, possibly defeasible, premise beliefs. Currently available procedures for forward inference and belief revision are exhaustive, and thus impractical: they compute the entire non-monotonic theory, then re-compute from scratch upon updating with new axioms. These methods are thus badly intractable. In most theories of interest, even backward reasoning is combinatoric (at least NP-hard). Here, we give theoretical results for prioritized circumscription that show how to reformulate default theories so as to make forward inference be selective, as well as concurrent; and to restrict belief revision to a part of the theory. We elaborate a detailed divide-and-conquer strategy. We develop concepts of structure in NM theories, by showing how to reformulate them in a particular fashion: to be conjunctively decomposed into a collection of smaller 'part' theories. We identify two well-behaved special cases that are easily recognized in terms of syntactic properties: disjoint appearances of predicates, and disjoint appearances of individuals (terms). As part of this, we also definitionally reformulate the global axioms, one by one, in addition to applying decomposition. We identify a broad class of prioritized default theories, generalizing default inheritance, for which our results especially bear fruit. For this asocially monadic class, decomposition permits reasoning to be localized to individuals (ground terms), and reduced to propositional. Our reformulation methods are implementable in polynomial time, and apply to several other NM formalisms beyond circumscription.
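The conjunctive decomposition by disjoint predicate appearances can be pictured as a connected-components computation: axioms that share no predicate symbols, directly or transitively, fall into independent part theories. A sketch under the simplifying assumption that each axiom is represented only by the set of predicate symbols it mentions (names illustrative):

```python
from collections import defaultdict

def decompose(axioms):
    """Group axioms into part theories with pairwise-disjoint predicate
    symbols, so inference and belief revision can be localized to the
    part a new axiom touches. Uses union-find over predicate symbols."""
    parent = {}
    def find(p):
        parent.setdefault(p, p)
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p
    def union(p, q):
        parent[find(p)] = find(q)

    # Predicates co-occurring in one axiom belong to the same component.
    for preds in axioms.values():
        preds = list(preds)
        for p in preds[1:]:
            union(preds[0], p)

    parts = defaultdict(list)
    for name, preds in axioms.items():
        parts[find(next(iter(preds)))].append(name)
    return sorted(sorted(part) for part in parts.values())

# Axiom name -> predicate symbols it mentions (illustrative default theory).
axioms = {
    "birds_fly": {"bird", "flies"},
    "penguin_bird": {"penguin", "bird"},
    "whales_swim": {"whale", "swims"},
}
print(decompose(axioms))  # [['birds_fly', 'penguin_bird'], ['whales_swim']]
```

An update touching only "swims" would then trigger revision in the second part theory alone, which is the selectivity the paper is after; the paper's actual conditions for sound decomposition are stronger than this syntactic sketch.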
Industry Approach to Nutrition-Based Product Development and Reformulation in Asia.
Vlassopoulos, Antonis; Masset, Gabriel; Leroy, Fabienne; Spieldenner, Jörg
2015-01-01
In recent years there has been a proliferation of initiatives to classify food products according to their nutritional composition (e.g., high in fat/sugar) to better guide consumer choices and regulate the food environment. This global trend, lately introduced in Asia as well, utilizes nutrient profiling (NP) to set compositional criteria for food products. Even though the use of NP to set targets for product reformulation has been proposed for years, to date only two NP systems have been specifically developed for that purpose. The majority of NP applications, especially in Asia, focus on marketing and/or health claim regulation, as well as front-of-pack labeling. Product reformulation has been identified, by the World Health Organization and other official bodies, as a key tool for the food industry to help address public health nutrition priorities and provide support towards the reduction of excessive dietary sugar, salt and fats. In the United Kingdom, the Responsibility Deal is an excellent example of a public-private collaborative initiative that successfully reduced the salt content of products available in supermarkets by 20-30%, resulting in an estimated 10% reduction in salt intake at the population level. Validation of NP systems targeted towards reformulation supports the hypothesis that, by adopting them, the industry can actively support existing policies aimed at lowering consumption of public health-sensitive nutrients. The symposium presented a discussion of the current NP landscape in Asia, the importance of reformulation for public health, and the Nestlé approach to improving the food environment in Asia through NP.
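At its core, a nutrient-profiling criterion is a set of per-nutrient thresholds applied to a product's composition. A toy sketch of that mechanism (the threshold values below are illustrative only, not any official NP system's criteria):

```python
def nutrient_flags(per_100g, thresholds):
    """Return the nutrients whose per-100 g content exceeds its
    profiling threshold, i.e. the 'high in ...' flags a product
    would receive under this (made-up) criterion set."""
    return sorted(n for n, v in per_100g.items()
                  if v > thresholds.get(n, float("inf")))

# Illustrative thresholds and product composition (grams per 100 g).
thresholds = {"sugar_g": 22.5, "salt_g": 1.5, "satfat_g": 5.0}
product = {"sugar_g": 30.0, "salt_g": 0.3, "satfat_g": 6.1}
print(nutrient_flags(product, thresholds))  # ['satfat_g', 'sugar_g']
```

A reformulation target is then simply the composition change needed to clear a flag, which is what makes threshold-based NP systems usable as engineering targets for product developers.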
The Ruby UCSC API: accessing the UCSC genome database using Ruby.
Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro
2012-09-21
The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/.
The Ruby UCSC API: accessing the UCSC genome database using Ruby
2012-01-01
Background: The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist was however not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results: The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions: Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help is provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508
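The "bin index" the abstract mentions is the UCSC binning scheme, a small calculation that assigns each genomic interval to the smallest bin fully containing it so that range queries only scan a handful of bins. A sketch of the standard 5-level version in Python (the API itself is Ruby; this is illustrative only):

```python
def bin_from_range(start, end):
    """Compute the UCSC 'bin' for a 0-based half-open genomic interval
    using the standard 5-level scheme: 128 kb bins at the finest level,
    each coarser level 8x larger, up to one 512 Mb whole-chromosome bin."""
    bin_offsets = [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]
    start_bin = start >> 17          # finest level: 128 kb bins
    end_bin = (end - 1) >> 17
    for offset in bin_offsets:
        if start_bin == end_bin:     # interval fits in one bin at this level
            return offset + start_bin
        start_bin >>= 3              # move to the next coarser level
        end_bin >>= 3
    raise ValueError("interval out of range for the standard binning scheme")

print(bin_from_range(0, 100_000))        # 585: the first 128 kb bin
print(bin_from_range(200_000, 300_000))  # 73: spans two fine bins, goes up a level
print(bin_from_range(0, 512 * 1024**2))  # 0: the whole-chromosome bin
```

A query for an interval then needs to examine only the bins of that interval and its ancestors at coarser levels, which is why the API uses the index when the table provides a `bin` column.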
Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs
NASA Astrophysics Data System (ADS)
Chen, H. R.; Tseng, Y. H.
2016-06-01
Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images with SIFT (Scale Invariant Feature Transform), allowing computer vision to handle the great quantity of images. SIFT is one of the most popular methods for image feature extraction and matching. The algorithm extracts extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equation. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps with control images. As the database is updated, more and more query images can be matched and aligned automatically. Other research on environmental change across time periods can then be investigated with these geo-referenced temporal-spatial data.
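RANSAC, the outlier-removal step used above, is easy to sketch on a toy problem. Here, line fitting in pure Python stands in for the geometric model the authors estimate between photographs (the 2-point line model and the data are illustrative; real image matching would fit a homography or similar from SIFT correspondences):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=42):
    """RANSAC sketch: repeatedly fit a line to a random 2-point sample
    and keep the model with the most inliers. A final least-squares
    refit on the inliers is omitted for brevity."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate (vertical) sample
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        inliers = sum(1 for x, y in points
                      if abs(y - (slope * x + intercept)) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (slope, intercept), inliers
    return best_model

# 20 points on y = 2x + 1 plus a few gross outliers (mismatched features).
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -30), (15, 90)]
slope, intercept = ransac_line(pts)
print(round(slope, 2), round(intercept, 2))  # close to 2.0 and 1.0
```

The key property, and the reason it suits feature matching, is that a single sample drawn entirely from correct correspondences recovers the true model, so gross outliers never contaminate the winning fit.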
ERIC Educational Resources Information Center
Dintzer, Leonard; Wortman, Camille B.
1978-01-01
The reformulated learned helplessness model of depression (Abramson, Seligman, Teasdale 1978) was examined. Argues that unless it is possible to specify the conditions under which a given attribution will be made, the model becomes circular and lacks predictive power. Discusses Abramson et al.'s suggestions for therapy and prevention. (Editor/RK)
ERIC Educational Resources Information Center
Bozlee, Brian J.
2007-01-01
The impact of raising the Gibbs energy of the enzyme-substrate complex (G_3) and the reformulation of the Michaelis-Menten equation are discussed. The maximum velocity of the reaction (v_m) and the characteristic constant for the enzyme (K_M) will increase with increasing Gibbs energy, indicating that the rate of reaction…
ERIC Educational Resources Information Center
Liu, Ming-Chi; Huang, Yueh-Min; Kinshuk; Wen, Dunwei
2013-01-01
It is critical that students learn how to retrieve useful information in hypermedia environments, a task that is often especially difficult when it comes to image retrieval, as little text feedback is given that allows them to reformulate keywords they need to use. This situation may make students feel disorientated while attempting image…
ERIC Educational Resources Information Center
Rubilar, Álvaro Sebastián Bustos; Badillo, Gonzalo Zubieta
2017-01-01
In this article, we report how a geometric task based on the ACODESA methodology (collaborative learning, scientific debate and self-reflection) promotes the reformulation of the students' validations and allows revealing the students' aims in each of the stages of the methodology. To do so, we present the case of a team and, particularly, one of…
Dunlap, Eloise; Graves, Jennifer; Benoit, Ellen
2012-01-01
In recent years, numerous weather disasters have crippled many cities and towns across the United States of America. Such disasters present a unique opportunity for analyses of the disintegration and reformulation of drug markets. Disasters present new facts which cannot be "explained" by existing theories. Recent and continuing disasters present a radically different picture from that of police crackdowns, where market disruptions are carried out on a limited basis (both use and sales). Generally, users and sellers move to other locations and business continues as usual. The Katrina disaster in 2005 offered a larger opportunity to understand the functioning and processes by which drug markets may or may not survive. This manuscript presents a paradigm which uses stages as a testable concept to scientifically examine the disintegration and reformulation of drug markets during disaster or crisis situations. It describes the specific processes, referred to as stages, which drug markets must go through in order to function and survive during and after a natural disaster. Prior to Hurricane Katrina, there had never before been a situation in which a drug market was struck by a disaster that forced its disintegration and reformulation. PMID:22728093
How Abbott’s Fenofibrate Franchise Avoided Generic Competition
Downing, Nicholas S.; Ross, Joseph S.; Jackevicius, Cynthia A.; Krumholz, Harlan M.
2013-01-01
The ongoing debate concerning the efficacy of fenofibrate has overshadowed an important aspect of the drug’s history: Abbott, the maker of branded fenofibrate, has produced several bioequivalent reformulations, which dominate the market even though generic fenofibrate has been available for almost a decade. This continued use of branded formulations, which cost twice as much as generic versions of fenofibrate, imposes an annual cost of approximately $700 million on our healthcare system. Abbott maintained its dominance of the fenofibrate market, in part, through a complex switching strategy involving the sequential launch of branded reformulations that had not been shown to be superior to the first generation product and patent litigation that delayed the approval of generic formulations. The small differences in dose of the newer branded formulations prevented substitution with generics of older generation products. As soon as direct generic competition seemed likely at the new dose level where substitution would be allowed, Abbott would launch another reformulation and the cycle would repeat. Our objective, using the fenofibrate example, is to describe how current policy can allow pharmaceutical companies to maintain market share using reformulations of branded medications without demonstrating the superiority of next generation products. PMID:22493409
Avoidance of generic competition by Abbott Laboratories' fenofibrate franchise.
Downing, Nicholas S; Ross, Joseph S; Jackevicius, Cynthia A; Krumholz, Harlan M
2012-05-14
The ongoing debate concerning the efficacy of fenofibrate has overshadowed an important aspect of the drug's history: Abbott Laboratories, the maker of branded fenofibrate, has produced several bioequivalent reformulations that dominate the market, although generic fenofibrate has been available for almost a decade. This continued use of branded formulations, which cost twice as much as generic versions of fenofibrate, imposes an annual cost of approximately $700 million on the US health care system. Abbott Laboratories maintained its dominance of the fenofibrate market in part through a complex switching strategy involving the sequential launch of branded reformulations that had not been shown to be superior to the first-generation product and patent litigation that delayed the approval of generic formulations. The small differences in dose of the newer branded formulations prevented their substitution with generics of older-generation products. As soon as direct generic competition seemed likely at the new dose level, where substitution would be allowed, Abbott would launch another reformulation, and the cycle would repeat. Based on the fenofibrate example, our objective is to describe how current policy can allow pharmaceutical companies to maintain market share using reformulations of branded medications, without demonstrating the superiority of next-generation products.
The Battlefield Commander’s Assistant Project: Research in Terrain Reasoning
1987-05-22
order dissemination. In order to restrict the survey problem to a manageable level, we made the a priori decision to focus on activities related to… models. Manages tools for: commander, tactical explanations, situation assessment, plans, plan and plan options, query/edit capabilities… from our work on the Air Land Battle Management Study (Stachnick 87), which was tasked to compare AI planning techniques with the requirements of
Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev
2010-01-01
Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. 
The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
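The distance the framework ultimately computes, a weighted Hamming distance over a learned binary code, is simple to state in code. A sketch with made-up bit codes and weights (in the paper both come out of the boosting stage; here they are illustrative):

```python
def weighted_hamming(a_bits, b_bits, weights):
    """Sum the learned weights of the bit positions where two
    binary codes disagree."""
    return sum(w for a, b, w in zip(a_bits, b_bits, weights) if a != b)

def rank_library(query_bits, library, weights):
    """Order reference images by weighted Hamming distance to the query,
    as a retrieval system would before showing results to the clinician."""
    return sorted(library,
                  key=lambda item: weighted_hamming(query_bits, item[1], weights))

# Illustrative 4-bit codes; each bit is one learned hash function, and
# its weight reflects how well it preserved the labeled similar pairs.
weights = [0.9, 0.4, 0.2, 0.6]
library = [("img_a", [1, 0, 1, 1]), ("img_b", [0, 0, 0, 1]), ("img_c", [1, 1, 1, 0])]
query = [1, 0, 1, 1]
print([name for name, _ in rank_library(query, library, weights)])
# ['img_a', 'img_c', 'img_b']
```

Because the code is binary and the weights are fixed after training, ranking a large reference library reduces to cheap bit comparisons, which is the source of the low computational cost the evaluation reports.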
Generic Entry, Reformulations, and Promotion of SSRIs
Donohue, Julie M.; Koss, Catherine; Berndt, Ernst R.; Frank, Richard G.
2009-01-01
Background Previous research has shown that a manufacturer’s promotional strategy for a brand-name drug is typically affected by generic entry. However, little is known about how newer strategies to extend patent life, including product reformulation introduction or obtaining approval to market for additional clinical indications, influence promotion. Objective To examine the relationship between promotional expenditures, generic entry, reformulation entry, and new indication approval. Study Design/Setting We used quarterly data on national product-level promotional spending (including expenditures for physician detailing and direct-to-consumer advertising (DTCA), and the retail value of free samples distributed in physician offices) for selective serotonin reuptake inhibitors (SSRIs) over the period 1997 through 2004. We estimated econometric models of detailing, DTCA, and total quarterly promotional expenditures as a function of the timing of generic entry, entry of new product formulations, and Food and Drug Administration (FDA) approval for new clinical indications for existing medications in the SSRI class. Main Outcome Measure Expenditures by pharmaceutical manufacturers for promotion of antidepressant medications. Results Over the period 1997–2004, there was considerable variation in the composition of promotional expenditures across the SSRIs. Promotional expenditures for the original brand molecule decreased dramatically when a reformulation of the molecule was introduced. Promotional spending (both total and detailing alone) for a specific molecule was generally lower after generic entry than before, although the effect of generic entry on promotional spending appears to be closely linked with the choice of product reformulation strategy pursued by the manufacturer. 
Detailing expenditures for Paxil were increased after the manufacturer received FDA approval to market the drug for generalized anxiety disorder (GAD), while the likelihood of DTCA outlays for the drug was not changed. In contrast, FDA approval to market Paxil and Zoloft for social anxiety disorder (SAD) did not affect the manufacturers’ detailing expenditures but did result in a greater likelihood of DTCA outlays. Conclusion The introduction of new product formulations appears to be a common strategy for attempting to extend market exclusivity for medications facing impending generic entry. Manufacturers that introduced a reformulation before generic entry shifted most promotion dollars from the original brand to the reformulation long before generic entry, and in some cases manufacturers appeared to target a particular promotion type for a given indication. Given the significant impact pharmaceutical promotion has on demand for prescription drugs, these findings have important implications for prescription drug spending and public health. PMID:18563951
ERIC Educational Resources Information Center
Santos, Maria; Lopez-Serrano, Sonia; Manchon, Rosa M.
2010-01-01
Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a…
Buckley, Nicholas A.; Degenhardt, Louisa; Larance, Briony; Cairns, Rose; Dobbins, Timothy A.; Pearson, Sallie-Anne
2018-01-01
BACKGROUND: Australia introduced tamper-resistant controlled-release (CR) oxycodone in April 2014. We quantified the impact of the reformulation on dispensing, switching and poisonings. METHODS: We performed interrupted time-series analyses using population-representative national dispensing data from 2012 to 2016. We measured dispensing of oxycodone CR (≥ 10 mg), discontinuation of use of strong opioids and switching to other strong opioids after the reformulation compared with a historical control period. Similarly, we compared calls about intentional opioid poisoning using data from a regional poisons information centre. RESULTS: After the reformulation, dispensing decreased for 10–30 mg (total level shift −11.1%, 95% confidence interval [CI], −17.2% to −4.6%) and 40–80 mg oxycodone CR (total level shift −31.5%, 95% CI −37.5% to −24.9%) in participants less than 65 years of age but was unchanged in people 65 years of age or older. Compared with the previous year, discontinuation of use of strong opioids did not increase (adjusted hazard ratio [HR] 0.95, 95% CI 0.91 to 1.00), but switching to oxycodone/naloxone did increase (adjusted HR 1.54, 95% CI 1.32 to 1.79). Switching to morphine varied by age (p < 0.001), and the greatest increase was in participants less than 45 years of age (adjusted HR 4.33, 95% CI 2.13 to 8.80). Participants switching after the reformulation were more likely to be dispensed a tablet strength of 40 mg or more (adjusted odds ratio [OR] 1.40, 95% CI 1.09 to 1.79). Calls for intentional poisoning that involved oxycodone taken orally increased immediately after the reformulation (incidence rate ratio (IRR) 1.31, 95% CI 1.05–1.64), but there was no change for injected oxycodone. INTERPRETATION: The reformulation had a greater impact on opioid access patterns of people less than 65 years of age who were using higher strengths of oxycodone CR. 
This group has been identified as having an increased risk of problematic opioid use and warrants closer monitoring in clinical practice. PMID:29581162
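The interrupted time-series design described in the abstract above can be sketched as a segmented regression: a level-shift term captures the immediate change at the intervention and an interaction term captures the change in trend. The following is a minimal illustration on synthetic data (all numbers, the model form, and the effect sizes are invented for illustration; they are not the study's data or code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly dispensing series: pre-period trend, then an abrupt
# level shift at the intervention month (values are illustrative only).
n_months, t0 = 60, 30              # intervention after month 30
t = np.arange(n_months)
true_level_shift = -11.0           # hypothetical immediate drop
post = (t >= t0).astype(float)
y = (100.0 + 0.2 * t + true_level_shift * post
     - 0.1 * (t - t0) * post + rng.normal(0.0, 1.0, n_months))

# Segmented (interrupted time-series) regression:
#   y = b0 + b1*t + b2*post + b3*(t - t0)*post + noise
# b2 estimates the immediate level shift, b3 the change in slope.
X = np.column_stack([np.ones(n_months), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_shift, slope_change = beta[2], beta[3]
print(f"estimated level shift: {level_shift:.1f}")
```

The study's analysis additionally compared against a historical control period and used population-representative claims data; this sketch only shows the core level-shift/trend-change decomposition.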
Murteira, Susana; Millier, Aurélie; Ghezaiel, Zied; Lamure, Michel
2014-01-01
Background Repurposing has become a mainstream strategy in drug development, but it faces multiple challenges, among them an increasingly complex and ever-changing regulatory framework. This is the second study in a three-part publication series whose ultimate goal is to understand the market access rationale and conditions attributed to drug repurposing in the United States and in Europe. The aim of the current study is to evaluate the regulatory path associated with each type of repurposing strategy according to the nomenclature proposed in the first article of this series. Methods From the cases identified, a selection process retrieved a total of 141 case studies across all countries, harmonized for data availability and common approval in the United States and in Europe. Regulatory information for each original and repurposed drug product was extracted, along with several related regulatory attributes such as designation change and filing before or after patent expiry. Descriptive analyses were conducted to determine trends and to investigate potential associations between the different regulatory paths and attributes of interest, for reformulation and repositioning cases separately. Results Within the studied European countries, most of the applications for reformulated products were filed through national applications. In contrast, for repositioned products, the centralized procedure was the most frequent regulatory pathway. Most of the repurposing cases were approved before patent expiry, and those cases followed more complex regulatory pathways in the United States and in Europe. For new molecular entities filed in the United States, a similar number of cases were developed by serendipity and by a hypothesis-driven approach. However, for the new indication's regulatory pathway in the United States, most of the cases were developed through a hypothesis-driven approach.
Conclusion Review of the regulations in the United States and in Europe for drug repositioning and reformulation confirmed that repositioning strategies were usually filed under a more complex regulatory process than reformulations. It also seems that parameters such as patent expiry and the type of repositioning approach or reformulation affect the regulatory pathway chosen in each case. PMID:27226839
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Various intentions make analysis of those digitally stored data desirable, such as quality control or scientific purposes. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the University Clinics of Vienna, General Hospital of Vienna, where two different PDMSs were used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical and one neonatal) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: Development began with a clinically oriented analysis of the data collected in a PDMS at an ICU. After defining the database structure we established a client-server based database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were done within a few days. The database structure we developed enables a powerful query concept, an 'expert question compiler' that may help answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as no sufficient artifact recognition or data validation software exists for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be further studied.
Zehrer, Cindy L; Holm, David; Solfest, Staci E; Walters, Shelley-Ann
2014-12-01
This study compared the moisture vapour transmission rate (MVTR) and wear time or fluid-handling capacities of six adhesive foam dressings to those of a reformulated control dressing. Standardised in vitro MVTR methodology and a previously published in vivo artificial wound model (AWM) were used. Mean inverted MVTR for the reformulated dressing was 12 750 g/m(2)/24 hours and was significantly higher than that of four of the six comparator dressings (P < 0·0001), which ranged from 830 to 11 360 g/m(2)/24 hours. Mean upright MVTR for the reformulated dressing was 980 g/m(2)/24 hours and differed significantly from all of the comparator dressings (P < 0·0001), which ranged from 80 to 1620 g/m(2)/24 hours (three higher/three lower). The reformulated dressing's median wear time ranged from 6·1 to >7·0 days, compared with 1·0 to 3·5 days for the comparator dressings (P = 0·0012 to P < 0·0001). The median fluid volume handled ranged from 78·0 to >87 ml, compared with 13·0 to 44·5 ml for the comparator dressings (P = 0·0007 to P < 0·001). Interestingly, inverted MVTR did not correspond well to the AWM. These results suggest that marked differences exist between the dressings in terms of both MVTR and wear time or fluid-handling capacity. Furthermore, high inverted MVTR does not necessarily predict longer wear time or fluid-handling capacities of absorbent dressings. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Energy compensation following consumption of sugar-reduced products: a randomized controlled trial.
Markey, Oonagh; Le Jeune, Julia; Lovegrove, Julie A
2016-09-01
Consumption of sugar-reformulated products (commercially available foods and beverages that have been reduced in sugar content through reformulation) is a potential strategy for lowering sugar intake at a population level. The impact of sugar-reformulated products on body weight, energy balance (EB) dynamics and cardiovascular disease risk indicators has yet to be established. The REFORMulated foods (REFORM) study examined the impact of an 8-week sugar-reformulated product exchange on body weight, EB dynamics, blood pressure, arterial stiffness, glycemia and lipemia. A randomized, controlled, double-blind, crossover dietary intervention study was performed with fifty healthy normal-weight to overweight men and women (age 32.0 ± 9.8 years, BMI 23.5 ± 3.0 kg/m(2)) who were randomly assigned to consume either regular-sugar or sugar-reduced foods and beverages for 8 weeks, separated by a 4-week washout period. Body weight, energy intake (EI), energy expenditure and vascular markers were assessed at baseline and after both interventions. We found that carbohydrate (P < 0.001), total sugars (P < 0.001) and non-milk extrinsic sugars (P < 0.001) intakes (% EI) were lower, whereas fat (P = 0.001) and protein (P = 0.038) intakes (% EI) were higher, on the sugar-reduced than the regular diet. No effects on body weight, blood pressure, arterial stiffness, fasting glycemia or lipemia were observed. Consumption of sugar-reduced products, as part of a blinded dietary exchange for an 8-week period, resulted in a significant reduction in sugar intake. Body weight did not change significantly, which we propose was due to energy compensation.
Cassidy, Theresa A; Thorley, Eileen; Black, Ryan A; DeVeaugh-Geiss, Angela; Butler, Stephen F; Coplan, Paul
To examine abuse prevalence for OxyContin and comparator opioids over a 6-year period prior to and following market entry of reformulated OxyContin, and to assess consistency in abuse across treatment settings and geographic regions. An observational study examining longitudinal changes using cross-sectional data from treatment centers for substance use disorder. A total of 874 facilities in 39 states in the United States within the National Addictions Vigilance Intervention and Prevention Program (NAVIPPRO®) surveillance system. Adults (72,060) assessed for drug problems using the Addiction Severity Index-Multimedia Version (ASI-MV®) from January 2009 through December 2015 who abused prescription opioids. Percent change in past-30-day abuse. OxyContin had significantly lower abuse 5 years after reformulation compared to levels for original OxyContin. Reductions in OxyContin abuse were consistent in magnitude across geographic regions, ranging from 41 to 52 percent, although differences in abuse reductions occurred across treatment-setting categories. Changes in geographic region and treatment settings across study years did not bias the estimate of lower OxyContin abuse through confounding. In the postmarket setting, limitations and methodologic challenges in abuse measurement exist, and it is difficult to isolate the singular impact of any one intervention given the complexity of prescription opioid abuse. Expectations for a reasonable threshold of abuse for any one ADF product, or for ADF opioids as a class, are still uncertain and undefined. A significant decline in abuse prevalence of OxyContin was observed in this treatment sample of individuals assessed for substance use disorder 5 years after its reformulation, to a level below that seen historically for the original formulation of this product.
Alander, Timo J A; Leskinen, Ari P; Raunemaa, Taisto M; Rantanen, Leena
2004-05-01
Diesel exhaust particles are a major constituent of urban carbonaceous aerosol and have been linked to a wide range of adverse environmental and health effects. In this work, the effects of fuel reformulation, an oxidation catalyst, engine type, and engine operation parameters on diesel particle emission characteristics were investigated. Particle emissions from an indirect injection (IDI) and a direct injection (DI) engine car operating under steady-state conditions with a reformulated low-sulfur, low-aromatic fuel and a standard-grade fuel were analyzed. Organic (OC) and elemental (EC) carbon fractions of the particles were quantified by a thermal-optical transmission analysis method, and particle size distributions were measured with a scanning mobility particle sizer (SMPS). The particle volatility characteristics were studied with a configuration that consisted of a thermal desorption unit and an SMPS. In addition, the volatility of size-selected particles was determined with a tandem differential mobility analyzer technique. The reformulated fuel was found to produce 10-40% less particulate carbon mass compared to the standard fuel. On the basis of the carbon analysis, the organic carbon contributed 27-61% to the carbon mass of the IDI engine particle emissions, depending on the fuel and engine operation parameters. The fuel reformulation reduced the particulate organic carbon emissions by 10-55%. In the particles of the DI engine, the organic carbon contributed only 14-26% to the total carbon emissions, the advanced engine technology and the oxidation catalyst reducing the OC/EC ratio of the particles considerably. A relatively good consistency was found between the particulate organic fraction quantified with the thermal-optical method and the volatile fraction measured with the thermal desorption unit and SMPS.
Estimating Impacts of Diesel Fuel Reformulation with Vector-based Blending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadder, G.R.
2003-01-23
The Oak Ridge National Laboratory Refinery Yield Model has been used to study the refining cost, investment, and operating impacts of specifications for reformulated diesel fuel (RFD) produced in refineries of the U.S. Midwest in summer of year 2010. The study evaluates different diesel fuel reformulation investment pathways. The study also determines whether there are refinery economic benefits for producing an emissions reduction RFD (with flexibility for individual property values) compared to a vehicle performance RFD (with inflexible recipe values for individual properties). Results show that refining costs are lower with early notice of requirements for RFD. While advanced desulfurization technologies (with low hydrogen consumption and little effect on cetane quality and aromatics content) reduce the cost of ultra low sulfur diesel fuel, these technologies contribute to the increased costs of a delayed notice investment pathway compared to an early notice investment pathway for diesel fuel reformulation. With challenging RFD specifications, there is little refining benefit from producing emissions reduction RFD compared to vehicle performance RFD. As specifications become tighter, processing becomes more difficult, blendstock choices become more limited, and refinery benefits vanish for emissions reduction relative to vehicle performance specifications. Conversely, the emissions reduction specifications show increasing refinery benefits over vehicle performance specifications as specifications are relaxed, and alternative processing routes and blendstocks become available. In sensitivity cases, the refinery model is also used to examine the impact of RFD specifications on the economics of using Canadian synthetic crude oil. There is a sizeable increase in synthetic crude demand as ultra low sulfur diesel fuel displaces low sulfur diesel fuel, but this demand increase would be reversed by requirements for diesel fuel reformulation.
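Blending studies of the kind described above treat each candidate blend as a vector of blendstock fractions and search for the cheapest on-spec recipe. The tiny sketch below illustrates the idea with two hypothetical blendstocks (all costs, properties, and specs are invented, and properties are blended linearly, which is a simplification: real cetane blending is nonlinear and uses blending indices):

```python
import numpy as np

# Two hypothetical blendstocks (all numbers invented for illustration):
#   A: hydrotreated, cost 1.10 $/gal, sulfur  8 ppm, cetane 48
#   B: straight-run, cost 0.90 $/gal, sulfur 30 ppm, cetane 42
SULFUR_MAX, CETANE_MIN = 15.0, 45.0    # illustrative ULSD-style specs

# Vector-based blending: each candidate blend is a vector of component
# fractions summing to 1; here that reduces to one free fraction x.
x = np.linspace(0.0, 1.0, 100001)      # fraction of stock A (rest is B)
sulfur = 8.0 * x + 30.0 * (1.0 - x)
cetane = 48.0 * x + 42.0 * (1.0 - x)
cost = 1.10 * x + 0.90 * (1.0 - x)

# Keep only blends meeting both specs, then take the cheapest.
feasible = (sulfur <= SULFUR_MAX) & (cetane >= CETANE_MIN)
best = int(np.argmin(np.where(feasible, cost, np.inf)))
print(f"cheapest on-spec blend: {x[best]:.3f} A / {1 - x[best]:.3f} B "
      f"at {cost[best]:.4f} $/gal")
```

With these numbers the sulfur spec is binding: the cheapest feasible blend uses just enough of the expensive low-sulfur stock (x = 15/22 ≈ 0.682). A full refinery yield model solves the same kind of trade-off over many more blendstocks, properties, and processing constraints.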
Efficient Reformulation of the Thermoelastic Higher-order Theory for Fgms
NASA Technical Reports Server (NTRS)
Bansal, Yogesh; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)
2002-01-01
Functionally graded materials (FGMs) are characterized by spatially variable microstructures which are introduced to satisfy given performance requirements. The microstructural gradation gives rise to continuously or discretely changing material properties which complicate FGM analysis. Various techniques have been developed during the past several decades for analyzing traditional composites and many of these have been adapted for the analysis of FGMs. Most of the available techniques use the so-called uncoupled approach in order to analyze graded structures. These techniques ignore the effect of microstructural gradation by employing specific spatial material property variations that are either assumed or obtained by local homogenization. The higher-order theory for functionally graded materials (HOTFGM) is a coupled approach developed by Aboudi et al. (1999) which takes the effect of microstructural gradation into consideration and does not ignore the local-global interaction of the spatially variable inclusion phase(s). Despite its demonstrated utility, however, the original formulation of the higher-order theory is computationally intensive. Herein, an efficient reformulation of the original higher-order theory for two-dimensional elastic problems is developed and validated. The local-global conductivity and local-global stiffness matrix approach is used to reduce the number of equations involved. In this approach, surface-averaged quantities are the primary variables, replacing the volume-averaged quantities employed in the original formulation. The reformulation decreases the size of the global conductivity and stiffness matrices by approximately sixty percent. Various thermal, mechanical, and combined thermomechanical problems are analyzed in order to validate the accuracy of the reformulated theory through comparison with analytical and finite-element solutions.
The presented results illustrate the efficiency of the reformulation and its advantages in analyzing functionally graded materials.
Degenhardt, Louisa; Bruno, Raimondo; Ali, Robert; Lintzeris, Nicholas; Farrell, Michael; Larance, Briony
2015-06-01
There is increasing concern about tampering of pharmaceutical opioids. We describe early findings from an Australian study examining the potential impact of the April 2014 introduction of an abuse-deterrent sustained-release oxycodone formulation (Reformulated OxyContin(®)). Data on pharmaceutical opioid sales; drug use by people who inject drugs regularly (PWID); client visits to the Sydney Medically Supervised Injecting Centre (MSIC); and last drug injected by clients of inner-Sydney needle-syringe programmes (NSPs) were obtained, 2009-2014. A cohort of n=606 people tampering with pharmaceutical opioids was formed pre-April 2014, and followed up May-August 2014. There were declines in pharmacy sales of 80mg OxyContin(®) post-introduction of the reformulated product, the dose most commonly diverted and injected by PWID. Reformulated OxyContin(®) was among the least commonly used and injected drugs among PWID. This was supported by Sydney NSP data. There was a dramatic reduction in MSIC visits for injection of OxyContin(®) post-introduction of the new formulation (from 62% of monthly visits pre-introduction to 5% of visits, August 2014). The NOMAD cohort confirmed a reduction in OxyContin(®) use/injection post-introduction. Reformulated OxyContin(®) was cheaper and less attractive for tampering than Original OxyContin(®). These data suggest that, in the short term, introduction of an abuse-deterrent formulation of OxyContin(®) in Australia was associated with a reduction in injection of OxyContin(®), with no clear switch to other drugs. Reformulated OxyContin(®), in this short follow-up, does not appear to be considered as attractive for tampering. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Adding intelligence to scientific data management
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas M., Jr.; Treinish, Lloyd A.
1989-01-01
NASA's plans to solve some of the problems of handling large-scale scientific databases by turning to artificial intelligence (AI) are discussed. The growth of the information glut and the ways that AI can help alleviate the resulting problems are reviewed. The employment of the Intelligent User Interface prototype, in which the user generates his own natural-language query with the assistance of the system, is examined. Spatial data management, scientific data visualization, and data fusion are discussed.
Rotenberg, Ken J; Costa, Paula; Trueman, Mark; Lattimore, Paul
2012-08-01
The study tested the Reformulated Helplessness model, according to which individuals who make internal, highly stable and highly global attributions for negative life events are prone to depression. Thirty-six women (mean age 29 years 8 months) receiving clinical treatment for eating disorders completed the Attribution Style Questionnaire, the Beck Depression Inventory, and the Stirling Eating Disorder Scales. A hierarchical regression analysis yielded a three-way interaction among the attributional dimensions on depressive symptoms. Plotting of the slopes showed that the attribution of negative life events to the combination of an internal locus of control, high stability, and high globality was associated with the optimal level of depressive symptoms. The findings supported Reformulated Helplessness as a model of depression. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Role of Reformulation in the Automatic Design of Satisfiability Procedures
NASA Technical Reports Server (NTRS)
VanBaalen, Jeffrey
1992-01-01
Recently there has been increasing interest in the problem of knowledge compilation (Selman & Kautz 1991). This is the problem of identifying tractable techniques for determining the consequences of a knowledge base. We have developed and implemented a technique, called DRAT, that, given a theory, i.e., a collection of first-order clauses, can often produce a type of decision procedure for that theory that can be used in place of a general-purpose first-order theorem prover for determining many of the consequences of that theory. Hence, DRAT performs a type of knowledge compilation. Central to the DRAT technique is a type of reformulation in which a problem's clauses are restated in terms of different nonlogical symbols. The reformulation is isomorphic in the sense that it does not change the semantics of the problem.
Newtonian Gravity Reformulated
NASA Astrophysics Data System (ADS)
Dehnen, H.
2018-01-01
With reference to MOND we propose a reformulation of Newton's theory of gravity in the style of static electrodynamics, introducing a "material" quantity in analogy to the dielectric "constant". We propose that this quantity is induced by vacuum polarizations generated by the gravitational field itself. Herewith the flat rotation curves of the spiral galaxies can be explained, and the observed high velocities near the centers of galaxies should be reconsidered in this light.
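The dielectric analogy sketched in this abstract can be made concrete with the generic MOND field equation of Bekenstein-Milgrom form (shown here as a standard illustration, not necessarily Dehnen's exact formulation):

```latex
% Static electrodynamics in a medium:
%   \nabla \cdot (\epsilon \, \nabla \phi) = -\rho / \epsilon_0
% Gravitational analogue with a field-dependent "permittivity" \mu:
\nabla \cdot \left[ \mu\!\left( \tfrac{|\nabla \Phi|}{a_0} \right) \nabla \Phi \right]
  = 4\pi G \rho ,
\qquad
\mu(y) \to
\begin{cases}
  1, & y \gg 1 \quad \text{(Newtonian regime)} \\
  y, & y \ll 1 \quad \text{(deep-MOND regime)}
\end{cases}
% For a point mass M in the deep-MOND regime this gives
%   |\nabla \Phi| = \sqrt{G M a_0}\,/\,r ,
% so the circular speed v^2 = r\,|\nabla\Phi| tends to the constant
%   v^4 = G M a_0 ,
% i.e. a flat rotation curve.
```

In the deep-MOND limit the effective "permittivity" scales with the field strength itself, which is what yields asymptotically flat rotation curves for spiral galaxies.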
De Keukeleire, Steven; Desmet, Stefanie; Lagrou, Katrien; Oosterlynck, Julie; Verhulst, Manon; Van Besien, Jessica; Saegeman, Veroniek; Reynders, Marijke
2017-03-01
The performance of Elecsys Syphilis was compared to Architect Syphilis TP and Reformulated Architect Syphilis TP. The overall sensitivity and specificity were 98.4% and 99.5%, 97.7% and 97.1%, and 99.2% and 99.7% respectively. The assays are comparable and considered adequate for syphilis screening. Copyright © 2016 Elsevier Inc. All rights reserved.
Reformulated gasoline deal with Venezuela draws heat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begley, R.
A fight is brewing in Congress over a deal to let Venezuela off the hook in complying with the Clean Air Act reformulated gasoline rule. When Venezuela threatened to call for a GATT panel to challenge the rule as a trade barrier, the Clinton Administration negotiated to alter the rule, a deal that members of Congress are characterizing as "secret" and "back door."
Reformulation of Stmerin(®) D CFC formulation using HFA propellants.
Murata, Saburo; Izumi, Takashi; Ito, Hideki
2013-01-01
Stmerin(®) D was reformulated using hydrofluoroalkanes (HFA-134a and HFA-227) as alternative propellants instead of chlorofluorocarbons (CFCs), where the active ingredients were suspended in mixed CFCs (CFC-11/CFC-12/CFC-114). Here, we report the suspension stability and spray performance of the original CFC formulation and a reformulation using HFAs. We prepared metered dose inhalers (MDI) using HFAs with different surfactants and co-solvents, and investigated the effect on suspension stability by visual testing. We found that the drug suspension stability was poor in both HFAs, but was improved, particularly for HFA-227, by adding medium-chain fatty acid triglycerides (MCT) to the formulation. However, the vapor pressure of HFA-227 is higher than that of the CFC mixture, and this increased the fine particle dose (FPD). Spray performance was adjusted by altering the actuator configuration, and the performance of different actuators was tested by cascade impaction. We found the spray performance could be controlled by the configuration of the actuator. A spray performance comparable to the original formulation was obtained with a 0.8 mm orifice diameter and a 90° cone angle. These results demonstrate that the reformulation of Stmerin(®) D using HFA-227 is feasible, by using MCT as a suspending agent and modifying the actuator configuration.
Product reformulation in the food system to improve food safety. Evaluation of policy interventions.
Marotta, Giuseppe; Simeone, Mariarosaria; Nazzaro, Concetta
2014-03-01
The objective of this study is to understand the level of attention that the consumer pays to a balanced diet and to product ingredients, with a twofold purpose: to understand whether food product reformulation can generate a competitive advantage for companies that practice it, and to evaluate the most appropriate policy interventions to promote a healthy diet. In the absence of binding rules, a reformulation strategy could be driven by consumers. Results from qualitative research and from empirical analysis have shown that the question of health is a latent demand influenced by two main factors: a general lack of information, and the marketing strategies adopted by companies, which increase the information asymmetry between producers and consumers. In the absence of binding rules, it is therefore necessary that the government implement information campaigns (food education) aimed at increasing knowledge regarding the effects of unhealthy ingredients, in order to inform and improve consumer choice. It is only by means of widespread information campaigns that food product reformulation can become a strategic variable and allow companies to gain a competitive advantage. This may lead to virtuous results in terms of reducing the social costs related to an unhealthy diet. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cole, Curtis L; Kanter, Andrew S; Cummens, Michael; Vostinar, Sean; Naeymi-Rad, Frank
2004-01-01
To design and implement a real-world application using a terminology server to assist patients and physicians who use common-language search terms to find specialist physicians with a particular clinical expertise. Terminology servers have been developed to help users encode information using complicated structured vocabularies during data entry tasks, such as recording clinical information. We describe a methodology using Personal Health Terminology™ and a SNOMED CT-based hierarchical concept server, and the construction of a pilot mediated-search engine to assist users who query, in vernacular speech, data that is more technical than vernacular. This approach, which combines theoretical and practical requirements, provides a useful example of concept-based searching for physician referrals.
Assessment of Utilization of Food Variety on the International Space Station
NASA Technical Reports Server (NTRS)
Cooper, M. R.; Paradis, R.; Zwart, S. R.; Smith, S. M.; Kloeris, V. L.; Douglas, G. L.
2018-01-01
Long duration missions will require astronauts to subsist on a closed food system for at least three years. Resupply will not be an option, and the food supply will be older at the time of consumption and more static in variety than previous missions. The space food variety requirements that will both supply nutrition and support continued interest in adequate consumption for a mission of this duration is unknown. Limited food variety of past space programs (Gemini, Apollo, International Space Station) as well as in military operations resulted in monotony, food aversion, and weight loss despite relatively short mission durations of a few days up to several months. In this study, food consumption data from 10 crew members on 3-6-month International Space Station missions was assessed to determine what percentage of the existing food variety was used by crew members, if the food choices correlated to the amount of time in orbit, and whether commonalities in food selections existed across crew members. Complete mission diet logs were recorded on ISS flights from 2008 - 2014, a period in which space food menu variety was consistent, but the food system underwent an extensive reformulation to reduce sodium content. Food consumption data was correlated to the Food on Orbit by Week logs, archived Data Usage Charts, and a food list categorization table using Trifacta software and queries in a SQL Server 2012 database.
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions and references contained in the health literature, to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, better explanation of its original functionalities, and adaptability to different audiences. Thanks to the evaluation outcomes, WRAPIN is now in exploitation on the HON web site (http://www.healthonnet.org), free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to check automatically the doubtful content of a Web page.
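Query reformulation against a controlled vocabulary such as MeSH can be sketched as phrase substitution from lay terms to preferred terms. The mapping below is a tiny hypothetical excerpt for illustration only, not WRAPIN's actual implementation or the full MeSH thesaurus:

```python
# Hypothetical lay-term-to-MeSH mapping (illustrative excerpt only).
LAY_TO_MESH = {
    "heart attack": "Myocardial Infarction",
    "high blood pressure": "Hypertension",
    "stroke": "Stroke",
    "sugar diabetes": "Diabetes Mellitus",
}

def reformulate(query: str) -> str:
    """Replace lay phrases with their MeSH preferred terms.

    Longer phrases are substituted first so that, e.g., "high blood
    pressure" is matched as a whole rather than partially.
    """
    out = query.lower()
    for lay in sorted(LAY_TO_MESH, key=len, reverse=True):
        if lay in out:
            out = out.replace(lay, LAY_TO_MESH[lay])
    return out

print(reformulate("treatment for heart attack and high blood pressure"))
# → "treatment for Myocardial Infarction and Hypertension"
```

A production system would resolve terms through the thesaurus itself (including synonyms and the concept hierarchy) rather than a flat dictionary, but the reformulation step has this basic shape.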
Projection, introjection, and projective identification: a reformulation.
Malancharuvil, Joseph M
2004-12-01
In this essay, the author recommends a reformulation of the psychoanalytic concept of projection. The author proposes that projective processes are not merely defensive maneuvers that interfere with perception, but rather an essential means by which human perception is rendered possible. It is the manner in which human beings test and evaluate reality in terms of their experiential structure and their needs for survival and nourishment. Projection is the early phase of introjection.
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
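The Poisson-process reformulation described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a birth-death process is driven by one independent unit-rate Poisson process per reaction channel (in the style of Anderson's modified next reaction method); all function and parameter names are hypothetical.

```python
import random

def birth_death_mnrm(x0, kb, kd, t_end, seed=0):
    """Simulate a birth-death process (birth rate kb, death rate kd*X)
    with each reaction channel driven by its own independent unit-rate
    Poisson process, so channel-wise sources of stochasticity stay
    identifiable."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    T = [0.0, 0.0]                                     # internal times of the two unit Poissons
    P = [rng.expovariate(1.0), rng.expovariate(1.0)]   # their next jump times
    nu = [+1, -1]                                      # stoichiometry: birth, death
    while True:
        a = [kb, kd * x]                               # channel propensities
        dt = [(P[k] - T[k]) / a[k] if a[k] > 0 else float('inf')
              for k in range(2)]                       # real-time wait per channel
        k = 0 if dt[0] <= dt[1] else 1                 # next channel to fire
        if t + dt[k] > t_end:                          # no further firing before t_end
            break
        for j in range(2):                             # advance every internal clock
            T[j] += a[j] * dt[k]
        t += dt[k]
        x += nu[k]                                     # fire channel k
        P[k] += rng.expovariate(1.0)                   # draw its next unit-Poisson jump
    return x
```

Repeating such runs while freezing the unit-Poisson stream of one channel is what enables the Sobol-Hoeffding variance decomposition described in the abstract.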
A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service
Yin, Fan; Tang, Xiaohu
2017-01-01
Location-based services (LBS), as one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain fine-grained search results satisfying not only the given spatial range but also the search content. Detailed privacy analysis shows that our proposed scheme indeed achieves privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, outperforming existing state-of-the-art schemes. Hence, our proposed scheme is better suited for real-time LBS searching. PMID:28696395
Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A
2014-01-01
Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
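The query auto-generation step of such a tool can be illustrated with a small sketch. This is not the authors' PHP code; the table and column names (orders, order_name, order_time) are hypothetical, and the function merely assembles the kind of parameterized SQL string that could be executed at recurring intervals.

```python
def build_order_query(order_terms, days_back=1):
    """Assemble a parameterized SQL query over a hypothetical
    orders(patient_id, order_name, order_time) table, in the spirit of
    the order-entry-based 'pick off' tool's auto-generated queries."""
    # one LIKE clause per user-supplied order term
    where = " OR ".join(["order_name LIKE %s"] * len(order_terms))
    sql = (
        "SELECT DISTINCT patient_id FROM orders "
        f"WHERE ({where}) "
        "AND order_time >= NOW() - INTERVAL %s DAY"
    )
    params = [f"%{t}%" for t in order_terms] + [days_back]
    return sql, params
```

Keeping the values as bound parameters (rather than interpolating them) is what makes a web-facing query builder like this safe against SQL injection.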
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
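As one concrete instance of the entropy-based similarity measures compared in such a study, mutual information between the gray-level distributions of a query region and a stored region can be estimated from a joint histogram. A minimal sketch (illustrative only, not the authors' code):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Entropy-based similarity between two equally sized image regions:
    I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a joint gray-level
    histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of region 1
    py = pxy.sum(axis=0)               # marginal of region 2

    def entropy(p):
        p = p[p > 0]                   # ignore empty bins (0 log 0 = 0)
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())
```

A query region scores highest against knowledge-base cases whose gray-level structure predicts its own; against a constant region the measure drops to zero.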
An Active RBSE Framework to Generate Optimal Stimulus Sequences in a BCI for Spelling
NASA Astrophysics Data System (ADS)
Moghadamfalahi, Mohammad; Akcakaya, Murat; Nezamfar, Hooman; Sourati, Jamshid; Erdogmus, Deniz
2017-10-01
A class of brain computer interfaces (BCIs) employs noninvasive recordings of electroencephalography (EEG) signals to enable users with severe speech and motor impairments to interact with their environment and social network. For example, EEG-based BCIs for typing popularly utilize event-related potentials (ERPs) for inference. Presentation paradigms in current ERP-based letter-by-letter typing BCIs typically query the user with an arbitrary subset of characters. However, typing accuracy and typing speed can potentially be enhanced with more informed subset selection and flash assignment. In this manuscript, we introduce the active recursive Bayesian state estimation (active-RBSE) framework for inference and sequence optimization. Prior to presentation in each iteration, rather than showing a subset of randomly selected characters, the developed framework optimally selects a subset based on a query function. Selected queries are adaptively specialized for users during each intent detection. Through a simulation-based study, we assess the effect of active-RBSE on the performance of a language-model-assisted typing BCI in terms of typing speed and accuracy. To provide a baseline for comparison, we also utilize standard presentation paradigms, namely the row-and-column matrix presentation paradigm and random rapid serial visual presentation paradigms. The results show that utilization of active-RBSE can enhance the online performance of the system, both in terms of typing accuracy and speed.
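The core loop of such a framework — recursive Bayesian updating over candidate characters plus an informed choice of the next flash subset — can be sketched as follows. This is a simplified illustration with a toy ERP-detection likelihood, not the paper's actual active-RBSE query function:

```python
def update_posterior(prior, queried, erp_detected, p_hit=0.8, p_false=0.2):
    """One recursive-Bayesian step: an ERP is detected with probability
    p_hit if the intended character was in the flashed subset, and with
    probability p_false otherwise (toy likelihood model)."""
    post = {}
    for c, p in prior.items():
        in_subset = c in queried
        if erp_detected:
            like = p_hit if in_subset else p_false
        else:
            like = (1 - p_hit) if in_subset else (1 - p_false)
        post[c] = p * like
    z = sum(post.values())                     # renormalize
    return {c: v / z for c, v in post.items()}

def select_query(posterior, k=3):
    """Informed subset selection: flash the k characters that currently
    carry the most posterior mass, rather than a random subset (a
    stand-in for the paper's optimal query function)."""
    return sorted(posterior, key=posterior.get, reverse=True)[:k]
```

Starting from a uniform (or language-model) prior, each flash-and-detect cycle concentrates posterior mass on the intended character, and the next query is specialized accordingly.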
Filipovic, M; Lukic, M; Djordjevic, S; Krstonosic, V; Pantelic, I; Vuleta, G; Savic, S
2017-10-01
Consumers' demand for improved product performance, along with the obligation of meeting safety and efficacy goals, presents a key reason for reformulation, as well as a challenging task for formulators. Any change to the formulation, whether wanted - in order to innovate the product (new actives and raw materials) - or necessary - due to, for example, legislative changes (restriction of ingredients), market unavailability of ingredients, or new manufacturing equipment - may have a number of consequences, desired or otherwise. The aim of the study was to evaluate the influence of multiple factors - variations of the composition, manufacturing conditions and their interactions - on emulsion textural and rheological characteristics, applying a general factorial experimental design, and subsequently to establish an approach that could replace, to some extent, certain expensive and time-consuming tests (e.g. certain sensory analyses) often required, partly or completely, after reformulation. An experimental design strategy was utilized to reveal the influence of reformulation factors (addition of new actives, change of preparation method) on the textural and rheological properties of cosmetic emulsions, especially those linked to certain sensorial attributes, and on droplet size. The general factorial experimental design revealed a significant direct effect of each factor, as well as their interaction effects, on certain characteristics of the system and provided valuable information for fine-tuning reformulation conditions. Upon addition of STEM-liposomes, consistency, index of viscosity, firmness and cohesiveness decreased, along with certain rheology parameters (elastic and viscous modulus), whereas maximal and minimal apparent viscosities and droplet size increased. The presence of an emollient (squalene) affected all the investigated parameters in a concentration-dependent manner.
Modification of the preparation method (using an Ultra-Turrax instead of a propeller stirrer) produced emulsions with higher firmness and maximal apparent viscosity, but led to a decrease in minimal apparent viscosity, hysteresis loop area, all monitored parameters of oscillatory rheology, and droplet size. The study showed that the established approach, which combines a general experimental design with instrumental rheological and textural measurements, could be an appropriate, more objective, repeatable, and time- and money-saving step toward developing cosmetic emulsions with satisfying (improved or unchanged) consumer-acceptable performance during reformulation. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Context-Aware Intelligent Assistant Approach to Improving Pilot's Situational Awareness
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Lodha, Suresh K.
2004-01-01
Faulty decision making due to inaccurate or incomplete awareness of the situation tends to be the prevailing cause of fatal general aviation accidents. Of these accidents, loss of weather situational awareness accounts for the largest number of fatalities. We describe a method for improving weather situational awareness through the support of a context-aware, domain- and task-knowledgeable, personalized and adaptive assistant. The assistant automatically monitors weather reports for the pilot's route of flight and warns her of detected anomalies. When and how warnings are issued is determined by the phase of flight, the pilot's definition of acceptable weather conditions, and the pilot's preferences for automatic notification. In addition to automatic warnings, the pilot is able to verbally query for weather and airport information. By noting the requests she makes during the approach phase of flight, our system learns to provide the information without explicit requests on subsequent flights with similar conditions. We show that our weather assistant decreases the effort required to maintain situational awareness by more than 5.5 times compared to the conventional method of in-flight weather briefings.
Addressing the Challenges of Multi-Domain Data Integration with the SemantEco Framework
NASA Astrophysics Data System (ADS)
Patton, E. W.; Seyed, P.; McGuinness, D. L.
2013-12-01
Data integration across multiple domains will continue to be a challenge with the proliferation of big data in the sciences. Data origination issues and how data are manipulated are critical to enable scientists to understand and consume disparate datasets as research becomes more multidisciplinary. We present the SemantEco framework as an exemplar for designing an integrative portal for data discovery, exploration, and interpretation that uses best practice W3C Recommendations. We use the Resource Description Framework (RDF) with extensible ontologies described in the Web Ontology Language (OWL) to provide graph-based data representation. Furthermore, SemantEco ingests data via the software package csv2rdf4lod, which generates data provenance using the W3C provenance recommendation (PROV). Our presentation will discuss benefits and challenges of semantic integration, their effect on runtime performance, and how the SemantEco framework assisted in identifying performance issues and improved query performance across multiple domains by an order of magnitude. SemantEco benefits from a semantic approach that provides an 'open world', which allows data to incrementally change just as it does in the real world. SemantEco modules may load new ontologies and data using the W3C's SPARQL Protocol and RDF Query Language via HTTP. Modules may also provide user interface elements for applications and query capabilities to support new use cases. Modules can associate with domains, which are first-class objects in SemantEco. This enables SemantEco to perform integration and reasoning both within and across domains on module-provided data. The SemantEco framework has been used to construct a web portal for environmental and ecological data. The portal includes water and air quality data from the U.S. 
Geological Survey (USGS) and Environmental Protection Agency (EPA) and species observation counts for birds and fish from the Avian Knowledge Network and the Santa Barbara Long Term Ecological Research, respectively. We provide regulation ontologies using OWL2 datatype facets to detect out-of-range measurements for environmental standards set by the EPA, among others. Users adjust queries using module-defined facets and a map presents the resulting measurement sites. Custom icons identify sites that violate regulations, making them easy to locate. Selecting a site gives the option of charting spatially proximate data from different domains over time. Our portal currently provides 1.6 billion triples of scientific data in RDF. We segment data by ZIP code; reasoning over 2157 measurements with our EPA regulation ontology, which contains 131 regulations, takes 2.5 seconds on a 2.4 GHz Intel Core 2 Quad with 8 GB of RAM. SemantEco's modular design and reasoning capabilities make it an exemplar for building multidisciplinary data integration tools that provide data access to scientists and the general population alike. Its provenance tracking provides accountability and its reasoning services can assist users in interpreting data. Future work includes support for geographical queries using the Open Geospatial Consortium's GeoSPARQL standard.
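The facet-to-query step described above can be illustrated with a small SPARQL query builder. The names below (graph URI, predicates, function) are illustrative only, not SemantEco's actual module API:

```python
def build_facet_query(domain_graph, facets, limit=10):
    """Assemble a SPARQL SELECT over a named graph from module-defined
    facet filters, given as (predicate URI, object term) pairs."""
    # one triple pattern per facet the user has selected
    patterns = "\n    ".join(
        f"?site <{prop}> {value} ." for prop, value in facets
    )
    return (
        "SELECT ?site WHERE {\n"
        f"  GRAPH <{domain_graph}> {{\n"
        f"    {patterns}\n"
        "  }\n"
        f"}} LIMIT {limit}"
    )
```

The resulting query text would be posted to a SPARQL endpoint over HTTP, as the abstract describes; cross-domain integration amounts to adding patterns from more than one module to the same graph pattern.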
Jaenke, Rachael; Barzi, Federica; McMahon, Emma; Webster, Jacqui; Brimblecombe, Julie
2017-11-02
Food product reformulation is promoted as an effective strategy to reduce population salt intake and address the associated burden of chronic disease. Salt has a number of functions in food processing, including its impact on physical and sensory properties. Manufacturers must ensure that reformulation of foods to reduce salt does not compromise consumer acceptability. The aim of this systematic review is to determine to what extent foods can be reduced in salt without detrimental effect on consumer acceptability. Fifty studies reported on salt reduction, replacement or compensation in processed meats, breads, cheeses, soups, and miscellaneous products. For each product category, levels of salt reduction were collapsed into four groups: <40%, 40-59%, 60-79% and ≥80%. Random effects meta-analyses conducted on salt-reduced products showed that salt could be reduced by approximately 40% in breads [mean change in acceptability for reduction <40% (-0.27, 95% confidence interval (CI) -0.62, 0.08; p = 0.13)] and approximately 70% in processed meats [mean change in acceptability for reductions 60-79% (-0.18, 95% CI -0.44, 0.07; p = 0.15)] without significantly impacting consumer acceptability. Results varied for other products. These results will support manufacturers in making greater reductions in salt when reformulating food products, which in turn will contribute to a healthier food supply.
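The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator; the sketch below pools per-study mean changes in acceptability with their standard errors (illustrative only, not the authors' analysis code):

```python
import math

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooling of study effect sizes;
    returns the pooled estimate and its 95% confidence interval."""
    w = [1 / se**2 for se in ses]                        # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))  # heterogeneity Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0      # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

A pooled mean change whose 95% CI spans zero, as in the bread and processed-meat groups above, is what "no significant impact on acceptability" means operationally.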
The Effects of Preference for Information on Consumers’ Online Health Information Search Behavior
2013-01-01
Background: Preference for information is a personality trait that affects people's tendency to seek information in health-related situations. Prior studies have focused primarily on investigating its impact on patient-provider communication and on the implications for designing information interventions that prepare patients for medical procedures. Few studies have examined its impact on general consumers' interactions with Web-based search engines for health information or the implications for designing more effective health information search systems. Objective: This study intends to fill this gap by investigating the impact of preference for information on the search behavior of general consumers seeking health information, their perceptions of search tasks (representing information needs), and user experience with search systems. Methods: Forty general consumers who had previously searched for health information online participated in the study in our usability lab. Preference for information was measured using Miller's Monitor-Blunter Style Scale (MBSS) and the Krantz Health Opinion Survey-Information Scale (KHOS-I). Each participant completed four simulated health information search tasks: two look-up (fact-finding) and two exploratory. Their behaviors while interacting with the search systems were automatically logged and ratings of their perceptions of tasks and user experience with the systems were collected using Likert-scale questionnaires. Results: The MBSS showed low reliability with the participants (Monitoring subscale: Cronbach alpha=.53; Blunting subscale: Cronbach alpha=.35). Thus, no further analyses were performed based on the scale. KHOS-I had sufficient reliability (Cronbach alpha=.77). Participants were classified into low- and high-preference groups based on their KHOS-I scores. The high-preference group submitted significantly shorter queries when completing the look-up tasks (P=.02).
The high-preference group made a significantly higher percentage of parallel movements in query reformulation than did the low-preference group (P=.04), whereas the low-preference group made a significantly higher percentage of new concept movements than the high-preference group when completing the exploratory tasks (P=.01). The high-preference group found the exploratory tasks to be significantly more difficult (P=.05) and the systems to be less useful (P=.04) than did the low-preference group. Conclusions: Preference for information has an impact on the search behavior of general consumers seeking health information. Those with a high preference were more likely to use more general queries when searching for specific factual information and to develop more complex mental representations of health concerns of an exploratory nature and try different combinations of concepts to explore these concerns. High-preference users were also more demanding on the system. Health information search systems should be tailored to fit individuals' information preferences. PMID:24284061
The effects of preference for information on consumers' online health information search behavior.
Zhang, Yan
2013-11-26
Gressier, Mathilde; Privet, Lisa; Mathias, Kevin Clark; Vlassopoulos, Antonis; Vieux, Florent; Masset, Gabriel
2017-07-01
Background: Food reformulation has been identified as a strategy to improve nutritional intakes; however, little is known about the potential impact of industry-wide reformulations. Objective: The aim of the study was to model the dietary impact of food and beverage reformulation following the Nestlé Nutritional Profiling System (NNPS) standards for children, adolescents, and adults in the United States and France. Design: Dietary intakes of individuals aged ≥4 y were retrieved from nationally representative surveys: the US NHANES 2011-2012 (n = 7456) and the French Individual and National Survey on Food Consumption (n = 3330). The composition of all foods and beverages consumed was compared with the NNPS standards for energy, total and saturated fats, sodium, added sugars, protein, fiber, and calcium. Two scenarios were modeled. In the first, the nutrient content of foods and beverages was adjusted to the NNPS standards if they were not met. In the second, products not meeting the standards were replaced by the most nutritionally similar alternative meeting the standards from the same category. Dietary intakes were assessed against local nutrient recommendations, and analyses were stratified by body mass index and socioeconomic status. Results: Scenarios 1 and 2 showed reductions in US adults' mean daily energy (-88 and -225 kcal, respectively), saturated fats (-4.2, -6.9 g), sodium (-406, -324 mg), and added sugars (-29.4, -35.8 g). Similar trends were observed for US youth and in France. The effects on fiber and calcium were limited. In the United States, the social gradient of added sugars intake was attenuated in both scenarios compared with the baseline values. Conclusions: Potential industry-wide reformulation of the food supply could lead to higher compliance with recommendations in both the United States and France, and across all socioeconomic groups. NNPS standards seemed to be especially effective for nutrients consumed in excess.
© 2017 American Society for Nutrition.
Rippin, H L; Hutchinson, J; Ocke, M; Jewell, J; Breda, J J; Cade, J E
2017-01-01
Trans fatty acids (TFA) increase the risk of mortality and chronic diseases. TFA intakes have fallen since reformulation, but may still be high in certain vulnerable groups. This paper investigates the socio-economic and food consumption characteristics of high TFA consumers after voluntary reformulation in the Netherlands and the UK. Post-reformulation data for adults aged 19-64 were analysed from two national surveys: the Dutch National Food Consumption Survey (DNFCS), collected 2007-2010 using 2 × 24-h recalls (N = 1933), and the UK National Diet and Nutrition Survey (NDNS) years 3 & 4, collected 2010/11 and 2011/12 using 4-day food diaries (N = 848). The socio-economic and food consumption characteristics of the top 10% and remaining 90% of TFA consumers were compared. Means of continuous data were compared using t-tests and categorical data using chi-squared tests. Multivariate logistic regression models indicated which socio-demographic variables were associated with high TFA consumption. In the Dutch analyses, women and those born outside the Netherlands were more likely to be top 10% TFA consumers than men and the Dutch-born. In the UK unadjusted analyses there was no significant trend in socio-economic characteristics between high and lower TFA consumers, but there were regional differences in the multivariate logistic regression analyses. In the Netherlands, high TFA consumers were more likely than the remaining 90% to consume cakes, buns and pastries; cream; and fried potato. In the UK, high TFA consumers were more likely to consume lamb, cheese, and dairy desserts, and less likely to consume crisps and savoury snacks. Some socio-demographic differences between high and lower TFA consumers were evident post-reformulation. High TFA consumers in the Dutch 2007-10 survey appeared more likely to obtain TFA from artificial sources than those in the UK survey. Further analyses using more up-to-date food composition databases may be needed.
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and a graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database in either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private-practice outpatient ambulatory setting. Data integrity was assessed after 6 months of disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and the enhancement of surgical skills.
Minimization of Roll Firings for Optimal Propellant Maneuvers
NASA Astrophysics Data System (ADS)
Leach, Parker C.
Attitude control of the International Space Station (ISS) is critical for operations, impacting power, communications, and thermal systems. The station uses gyroscopes and thrusters for attitude control, and reorientations are normally assisted by thrusters on docked vehicles. When the docked vehicles are unavailable, the reduction in control authority in the roll axis results in frequent jet firings and substantial fuel consumption. To improve this situation, new guidance and control schemes are desired that provide control with fewer roll firings. Optimal control software was used to solve for candidate maneuvers that satisfied the desired conditions with the goal of minimizing total propellant. An ISS simulation tool was then used to test these solutions for feasibility. After several problem reformulations, multiple candidate solutions minimizing or completely eliminating roll firings were found. Flight implementation would not only save substantial amounts of fuel, and thus money, but would also reduce ISS wear and tear, thereby extending its lifetime.
A second look at the second law
NASA Astrophysics Data System (ADS)
Bejan, Adrian
1988-05-01
An account is given of Bejan's (1988) reformulation of the axioms of engineering thermodynamics in terms of heat transfer, rather than mechanics. Attention is given to graphic constructions that can be used to illustrate the properties in question, such as the 'stability star' diagram summarizing various extrema reached by certain thermodynamic properties when a closed system settles into stable (unconstrained) equilibrium. Also noted are the exergy analysis and refrigeration applications to which the present reformulation of the second law of thermodynamics can be put.
Abrão, A C; de Gutiérrez, M R; Marin, H F
1997-04-01
The present study aimed at describing the reformulated instrument used in the puerperal woman nursing consultation based on the identified diagnoses classification according to the Taxonomy-I reviewed by NANDA, and the identification of the most frequent nursing diagnoses concerning maternal breastfeeding, based on the reformulated instrument. The diagnoses found as being over 50% were: knowledge deficit (100%); sleep pattern disturbance (75%), altered sexuality patterns (75%), ineffective breastfeeding (66.6%) and impaired physical mobility (66.6%).
Daily Average Consumption of 2 Long-Acting Opioids: An Interrupted Time Series Analysis
Puenpatom, R. Amy; Szeinbach, Sheryl L.; Ma, Larry; Ben-Joseph, Rami H.; Summers, Kent H.
2012-01-01
Background Oxycodone controlled release (CR) and oxymorphone extended release (ER) are frequently prescribed long-acting opioids, which are approved for twice-daily dosing. The US Food and Drug Administration approved a reformulated crush-resistant version of oxycodone CR in April 2010. Objective To compare the daily average consumption (DACON) for oxycodone CR and for oxymorphone ER before and after the introduction of the reformulated, crush-resistant version of oxycodone CR. Methods This was a retrospective claims database analysis using pharmacy claims from the MarketScan database for the period from January 2010 through March 2011. The interrupted time series analysis was used to evaluate the impact of the introduction of reformulated oxycodone CR on the DACON of the 2 drugs—oxycodone CR and oxymorphone ER. The source of the databases included private-sector health data from more than 150 medium and large employers. All prescription claims containing oxycodone CR and oxymorphone ER dispensed to members from January 1, 2010, to March 31, 2011, were included in the analysis. Prescription claims containing duplicate National Drug Codes, missing member identification, invalid quantities or inaccurate days supply of either drug, and DACON values of <1 and >500 were removed. Results The database yielded 483,063 prescription claims for oxycodone CR and oxymorphone ER from January 1, 2010, to March 31, 2011. The final sample consisted of 411,404 oxycodone CR prescriptions (traditional and reformulated) dispensed to 85,150 members and 62,656 oxymorphone ER prescriptions dispensed to 11,931 members. Before the introduction of reformulated oxycodone CR, DACON values for the highest strength available for each of the 2 drugs were 0.51 tablets higher for oxycodone CR than for oxymorphone ER, with mean DACON values of 3.5 for oxycodone CR and 3.0 for oxymorphone ER (P <.001). 
The differences of mean DACON between the 2 drugs for all lower strengths were 0.46 tablets, with mean DACON values of 2.7 for oxycodone CR and 2.3 for oxymorphone ER (P <.001). After the introduction of the new formulation, the difference in mean DACON between the 2 drugs was slightly lower: 0.45 tablets for the highest-strength and 0.40 tablets for the lower-strength pairs. Regression analyses showed that the immediate and overall impact of the reformulation of oxycodone CR on the DACON of oxycodone CR was minimal, whereas no changes were seen in the DACON of oxymorphone ER. The estimated DACON for oxycodone CR decreased by 0.1 tablets, or 3.7% (P <.001), 6 months after the new formulation was introduced. Conclusion The mean DACON was 0.4 tablets per day higher for oxycodone CR compared with oxymorphone ER for all dosage strengths for the entire study period. After the introduction of the reformulated oxycodone CR, the DACON for this drug was slightly mitigated; however, there was a minimal impact on the mean differences between oxycodone CR and oxymorphone ER. PMID:24991311
Variance decomposition in stochastic simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity, and of the corresponding sensitivities, in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
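The reformulation lends itself to a simple numerical illustration. The sketch below is our own, not the authors' implementation: it drives a birth-death model with one independent random stream per reaction channel (a modified next-reaction construction) and applies a pick-freeze estimator to the first-order, variance-based sensitivity of each channel. Model parameters, seeds, and sample sizes are arbitrary.

```python
import math
import random
import statistics

def birth_death(seed_birth, seed_death, x0=10, b=5.0, d=0.5, t_end=4.0):
    """Birth-death model via the modified next-reaction method: each
    reaction channel is driven by its own unit-rate Poisson stream."""
    rngs = [random.Random(seed_birth), random.Random(seed_death)]
    T = [0.0, 0.0]                             # integrated propensity per channel
    P = [-math.log(r.random()) for r in rngs]  # next unit-Poisson arrival per channel
    t, x = 0.0, x0
    while True:
        a = [b, d * x]                         # propensities: birth, death
        waits = [(P[k] - T[k]) / a[k] if a[k] > 0 else math.inf
                 for k in range(2)]
        k = 0 if waits[0] <= waits[1] else 1
        if t + waits[k] > t_end:
            return x                           # population at t_end
        t += waits[k]
        for j in range(2):
            if a[j] > 0:
                T[j] += a[j] * waits[k]
        P[k] += -math.log(rngs[k].random())    # schedule channel k's next firing
        x += 1 if k == 0 else -1

def first_order_index(channel, n_outer=30, n_inner=30):
    """Pick-freeze estimate of Var(E[X | stream_channel]) / Var(X):
    freeze the chosen channel's stream, resample the other."""
    cond_means, all_vals = [], []
    for i in range(n_outer):
        vals = []
        for j in range(n_inner):
            seeds = [1000 + i, 2000 + i]                  # frozen stream varies with i
            seeds[1 - channel] = 5000 + i * n_inner + j   # resampled stream
            vals.append(birth_death(*seeds))
        cond_means.append(statistics.mean(vals))
        all_vals.extend(vals)
    return statistics.variance(cond_means) / statistics.variance(all_vals)
```

The ratio returned by `first_order_index` is the Sobol first-order index of the corresponding reaction channel; channel interactions would show up as the gap between the sum of the first-order indices and one.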
Chilcoat, HD; Butler, SF; Sellers, EM; Kadakia, A; Harikrishnan, V; Haddox, JD; Dart, RC
2016-01-01
An extended‐release opioid analgesic (OxyContin, OC) was reformulated with abuse‐deterrent properties. This report examines changes in abuse through oral and nonoral routes, doctor‐shopping, and fatalities in 10 studies 3.5 years after reformulation. Changes in OC abuse from 1 year before to 3 years after OC reformulation were calculated, adjusted for prescription changes. Abuse of OC decreased 48% in national poison center surveillance systems, decreased 32% in a national drug treatment system, and decreased 27% among individuals prescribed OC in claims databases. Doctor‐shopping for OC decreased 50%. Overdose fatalities reported to the manufacturer decreased 65%. Abuse of other opioids without abuse‐deterrent properties decreased 2 years later than OC and with less magnitude, suggesting OC decreases were not due to broader opioid interventions. Consistent with the formulation, decreases were larger for nonoral than oral abuse. Abuse‐deterrent opioids may mitigate abuse and overdose risks among chronic pain patients. PMID:27170195
Laboratory studies of sweets re-formulated to improve their dental properties.
Grenby, T H; Mistry, M
1996-03-01
To evaluate the potential dental effects of ten new types of sugar-free sweets formulated with Lycasin or isomalt as bulk sweeteners instead of sugars. Examination of the sweets for their acidity, fermentability by oral microorganisms, influence on the demineralisation of dental enamel, and their influence on human interdental plaque pH, compared with conventional sugar-containing sweets. The importance of reducing the levels of flavouring acids in the sweets was demonstrated. It was not straightforward to evaluate chocolate products in this system, but the potential benefits of re-formulating fruit gums, lollipops, chew-bars, toffee and fudge with Lycasin or isomalt in place of sugars were shown by determining their reduced acidogenicity and fermentability compared with conventional confectionery. The extent of demineralisation of dental enamel was related to both the acidity and the fermentability of the sweets. Re-formulating sweets with reduced acidity levels and bulk sweeteners not fermentable by dental plaque microorganisms can provide a basis for improving their potential dental effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan
This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies from two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
Maurer, S A; Kussmann, J; Ochsenfeld, C
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.
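The key step behind the O(N⁵) → O(N³) reduction is the standard Laplace-transform treatment of the orbital-energy denominator; in conventional notation (a sketch, not the authors' exact working equations):

```latex
% Opposite-spin MP2 energy; i, j occupied, a, b virtual orbitals:
E_{\mathrm{OS}} \;=\; -\,c_{\mathrm{OS}} \sum_{iajb}
    \frac{(ia|jb)^{2}}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}

% Laplace transform of the denominator (\Delta > 0), approximated by a
% short numerical quadrature with points t_\alpha and weights w_\alpha:
\frac{1}{\Delta} \;=\; \int_{0}^{\infty} e^{-\Delta t}\,\mathrm{d}t
    \;\approx\; \sum_{\alpha=1}^{n_\tau} w_\alpha\, e^{-\Delta t_\alpha}

% The exponential factorizes over the orbital indices,
%   e^{-\Delta t_\alpha} = e^{\varepsilon_i t_\alpha}\, e^{\varepsilon_j t_\alpha}\,
%                          e^{-\varepsilon_a t_\alpha}\, e^{-\varepsilon_b t_\alpha},
% which decouples the quadruple sum and permits the AO-basis, RI-factorized
% evaluation with sparse intermediates.
```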
Kovac, Boris; Knific, Maja
2017-03-01
The purpose of this study was to identify the possibility of an unnoticed reduction in the salt content of bread as a basic food in the diet of preschool children. The response of children to less salty bread and the role of teachers and teacher assistants in the introduction of novelties into children's nutrition were studied. Using hedonic sensory evaluation of bread, the perception of salty taste and the responses of preschool children to salt reduction were observed. Quantitative and qualitative data obtained from the case study group, composed of 22 preschool children and 66 teachers and teacher assistants, were analysed in combination. The results show that a 30% salt reduction was not registered by the children, while a 50% reduction of the salt content, compared to the original recipe, though noted, was not disruptive. The perception of taste and the development of good eating habits at an early age could be influenced by teachers and teacher assistants' verbal and non-verbal communication. Salt reduction does not significantly affect the rating of satisfaction with the tested product. Educational personnel must be aware of their decisive influence on children's perception of new and less salty products. Such an approach could represent a basis for creating children's eating habits, which will be of particular importance later in their lives. The findings may possibly result in an update of the national nutrition policy.
Query Auto-Completion Based on Word2vec Semantic Similarity
NASA Astrophysics Data System (ADS)
Shao, Taihua; Chen, Honghui; Chen, Wanyu
2018-04-01
Query auto-completion (QAC) is the first step of information retrieval: it helps users formulate the entire query after they have typed only a few prefix characters. Traditional QAC models ignore the contribution of semantic relevance between queries, yet similar queries often express extremely similar search intentions. In this paper, we propose a hybrid model, FS-QAC, based on query semantic similarity as well as query frequency. We choose the word2vec method to measure the semantic similarity between intended queries and previously submitted queries. By combining both features, our experiments show that the FS-QAC model improves performance when predicting the user's query intention and helping formulate the right query. Our experimental results show that the optimal hybrid model contributes a 7.54% improvement in terms of MRR against a state-of-the-art baseline using the public AOL query logs.
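A minimal sketch of such a hybrid ranking, combining normalized query-log frequency with embedding cosine similarity. The toy 2-d vectors stand in for trained word2vec embeddings, and the interpolation weight `lam` is a hypothetical parameter, not the value tuned in the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_completions(prefix, query_log, embed, intent_vec, lam=0.5):
    """Score candidates by lam * normalized frequency
    + (1 - lam) * semantic similarity to the user's intent vector."""
    cands = {q: f for q, f in query_log.items() if q.startswith(prefix)}
    if not cands:
        return []
    max_f = max(cands.values())
    scored = [(lam * f / max_f + (1 - lam) * cosine(embed[q], intent_vec), q)
              for q, f in cands.items()]
    return [q for _, q in sorted(scored, reverse=True)]

# Toy data: frequencies from a log; 2-d "embeddings" stand in for word2vec.
log = {"apple pie": 90, "apple stock": 100, "apple store": 40}
emb = {"apple pie": [0.9, 0.1], "apple stock": [0.1, 0.9], "apple store": [0.2, 0.8]}
baking_intent = [1.0, 0.0]  # prior queries suggest a cooking intent
print(rank_completions("apple", log, emb, baking_intent))
# → ['apple pie', 'apple stock', 'apple store']
```

A pure-frequency ranker would put "apple stock" first; the semantic term promotes "apple pie" because it is closer to the inferred intent.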
EquiX-A Search and Query Language for XML.
ERIC Educational Resources Information Center
Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander
2002-01-01
Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)
Spatial and symbolic queries for 3D image data
NASA Astrophysics Data System (ADS)
Benson, Daniel C.; Zick, Gregory L.
1992-04-01
We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.
GenoQuery: a new querying module for functional annotation in a genomic warehouse
Lemoine, Frédéric; Labedan, Bernard; Froidevaux, Christine
2008-01-01
Motivation: We have to cope with both a deluge of new genome sequences and a huge amount of data produced by high-throughput approaches used to exploit these genomic features. Crossing and comparing such heterogeneous and disparate data will help improve functional annotation of genomes. This requires designing elaborate integration systems such as warehouses for storing and querying these data. Results: We have designed a relational genomic warehouse with an original multi-layer architecture made of a databases layer and an entities layer. We describe a new querying module, GenoQuery, which is based on this architecture. We use the entities layer to define mixed queries. These mixed queries allow searching for instances of biological entities and their properties in the different databases, without specifying in which database they should be found. Accordingly, we further introduce the central notion of alternative queries. Such queries have the same meaning as the original mixed queries, while exploiting complementarities yielded by the various integrated databases of the warehouse. We explain how GenoQuery computes all the alternative queries of a given mixed query. We illustrate how useful this querying module is by means of a thorough example. Availability: http://www.lri.fr/~lemoine/GenoQuery/ Contact: chris@lri.fr, lemoine@lri.fr PMID:18586731
Making Space for Specialized Astronomy Resources
NASA Astrophysics Data System (ADS)
MacMillan, D.
2007-10-01
With the growth of both free and subscription-based resources, articles on astronomy have never been easier to find. Locating the best and most current materials for any given search, however, now requires multiple tools and strategies dependent on the query. An analysis of the tools currently available shows that while astronomy is well-served by Google Scholar, Scopus and Inspec, its literature is best accessed through specialized resources such as ADS (Astrophysics Data System). While no surprise to astronomers, this has major implications for those of us who teach information literacy skills to astronomy students and work in academic settings where astronomy is just one of many subjects for which our non-specialist colleagues at the reference desk provide assistance. This paper will examine some of the implications of this analysis for library instruction, reference assistance and training, and library webpage development.
Kossover, Rachel A; Chi, Carolyn J; Wise, Matthew E; Tran, Alvin H; Chande, Neha D; Perz, Joseph F
2014-01-01
Assisted living facilities (ALFs) provide housing and care to persons unable to live independently, and who often have increasing medical needs. Disease outbreaks illustrate challenges of maintaining adequate resident protections in these facilities. Describe current state laws on assisted living admissions criteria, medical oversight, medication administration, vaccination requirements, and standards for infection control training. We abstracted laws and regulations governing assisted living facilities for the 50 states using a structured abstraction tool. Selected characteristics were compared according to the time period in which the regulation took effect. Selected state health departments were queried regarding outbreaks identified in assisted living facilities. Of the 50 states, 84% specify health-based admissions criteria to assisted living facilities; 60% require licensed health care professionals to oversee medical care; 88% specifically allow subcontracting with outside entities to provide routine medical services onsite; 64% address medication administration by assisted living facility staff; 54% specify requirements for some form of initial infection control training for all staff; 50% require reporting of disease outbreaks to the health department; 18% specify requirements to offer or require vaccines to staff; 30% specify requirements to offer or require vaccines to residents. Twelve states identified approximately 1600 outbreaks from 2010 to 2013, with influenza or norovirus infections predominating. There is wide variation in how assisted living facilities are regulated in the United States. States may wish to consider regulatory changes that ensure safe health care delivery, and minimize risks of infections, outbreaks of disease, and other forms of harm among assisted living residents. Published by Elsevier Inc.
SPARK: Adapting Keyword Query to Semantic Search
NASA Astrophysics Data System (ADS)
Zhou, Qi; Wang, Chong; Xiong, Miao; Wang, Haofen; Yu, Yong
Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named 'SPARK' has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved encouraging translation results.
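The pipeline can be illustrated with a toy term-mapping stage that emits a single basic graph pattern. The mapping table and `univ:` vocabulary below are invented, and SPARK's actual query-graph construction and probabilistic ranking are omitted; this is only a sketch of the keyword-to-SPARQL idea.

```python
def keywords_to_sparql(keywords, term_map):
    """Toy term mapping + graph construction: each class keyword becomes
    a type triple, each property keyword links ?x to a fresh variable.
    Ranking over multiple candidate queries is not modeled here."""
    triples, var_id = [], 0
    for kw in keywords:
        kind, uri = term_map[kw]
        if kind == "class":
            triples.append(f"?x a {uri} .")
        else:  # property
            var_id += 1
            triples.append(f"?x {uri} ?v{var_id} .")
    return "SELECT * WHERE { " + " ".join(triples) + " }"

# Hypothetical mapping from keywords to ontology terms.
tm = {"professor": ("class", "univ:Professor"),
      "teaches":   ("property", "univ:teaches")}
print(keywords_to_sparql(["professor", "teaches"], tm))
# → SELECT * WHERE { ?x a univ:Professor . ?x univ:teaches ?v1 . }
```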
1981-04-01
east of Arcola Creek. The Interim Report gave a favorable recommendation for the harbor project and the results were published in House Document No. 91... Draft Reformulation Phase I GDM Report (Draft Stage 3 Report). The purpose of this Draft Stage 3 Report is to present the results of the Stage 3... requirements for a small-boat harbor at Geneva State Park. Results of the bathymetric survey and sediment sampling program are presented in Appendix A. (3
Griffon, N; Schuers, M; Dhombres, F; Merabti, T; Kerdelhué, G; Rollin, L; Darmoni, S J
2016-08-02
Despite international initiatives like Orphanet, it remains difficult to find up-to-date information about rare diseases. The aim of this study is to propose an exhaustive set of queries for PubMed based on terminological knowledge and to evaluate it against queries based on expertise provided by the most frequently used resource in Europe: Orphanet. Four rare disease terminologies (MeSH, OMIM, HPO and HRDO) were manually mapped to each other, permitting the automatic creation of expanded terminological queries for rare diseases. For 30 rare diseases, 30 citations retrieved by the Orphanet expert query and/or the query based on terminological knowledge were assessed for relevance by two independent reviewers unaware of the query's origin. An adjudication procedure was used to resolve any discrepancy. Precision, relative recall and F-measure were all computed. For each Orphanet rare disease (n = 8982), there was a corresponding terminological query, in contrast with only 2284 queries provided by Orphanet. Only 553 citations were evaluated due to queries with 0 or only a few hits. There were no significant differences between the Orpha query and the terminological query in terms of precision, respectively 0.61 vs 0.52 (p = 0.13). Nevertheless, terminological queries retrieved more citations more often than Orpha queries (0.57 vs. 0.33; p = 0.01). Interestingly, Orpha queries seemed to retrieve older citations than terminological queries (p < 0.0001). The terminological queries proposed in this study are now available for all rare diseases. They may be a useful tool for both precision- and recall-oriented literature searches.
Driver head pose tracking with thermal camera
NASA Astrophysics Data System (ADS)
Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.
2016-09-01
Head pose can be seen as a coarse estimation of gaze direction. In the automotive industry, knowledge about gaze direction could optimize Human-Machine Interfaces (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often based on cameras when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14μm) for its intrinsic night vision capabilities and for its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with thermal texture, allows us to synthesize a base of 2D projected models, differently oriented and labeled in yaw and pitch. The first method is based on keypoints. Keypoints of the models are matched with those of the query image. These sets of matchings, aided by the 3D shape of the model, allow us to estimate 3D pose. The second method is a global appearance approach. Among all the 2D models in the base, the algorithm searches for the one which is closest to the query image using a weighted least-squares difference.
Rapid Exploitation and Analysis of Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttler, D J; Andrzejewski, D; Stevens, K D
Analysts are overwhelmed with information. They have large archives of historical data, both structured and unstructured, and continuous streams of relevant messages and documents that they need to match to current tasks, digest, and incorporate into their analysis. The purpose of the READ project is to develop technologies to make it easier to catalog, classify, and locate relevant information. We approached this task from multiple angles. First, we tackle the issue of processing large quantities of information in reasonable time. Second, we provide mechanisms that allow users to customize their queries based on latent topics exposed from corpus statistics. Third, we assist users in organizing query results, adding localized expert structure over results. Fourth, we use word sense disambiguation techniques to increase the precision of matching user-generated keyword lists with terms and concepts in the corpus. Fifth, we enhance co-occurrence statistics with latent topic attribution, to aid entity relationship discovery. Finally, we quantitatively analyze the quality of three popular latent modeling techniques to examine under which circumstances each is useful.
An advanced web query interface for biological databases
Latendresse, Mario; Karp, Peter D.
2010-01-01
Although most web-based biological databases (DBs) offer some type of web-based form to allow users to author DB queries, these query forms are quite restricted in the complexity of DB queries that they can formulate. They can typically query only one DB, and can query only a single type of object at a time (e.g. genes) with no possible interaction between the objects—that is, in SQL parlance, no joins are allowed between DB objects. Writing precise queries against biological DBs is usually left to a programmer skillful enough in complex DB query languages like SQL. We present a web interface for building precise queries for biological DBs that can construct much more precise queries than most web-based query forms, yet that is user friendly enough to be used by biologists. It supports queries containing multiple conditions, and connecting multiple object types without using the join concept, which is unintuitive to biologists. This interactive web interface is called the Structured Advanced Query Page (SAQP). Users interactively build up a wide range of query constructs. Interactive documentation within the SAQP describes the schema of the queried DBs. The SAQP is based on BioVelo, a query language based on list comprehension. The SAQP is part of the Pathway Tools software and is available as part of several bioinformatics web sites powered by Pathway Tools, including the BioCyc.org site that contains more than 500 Pathway/Genome DBs. PMID:20624715
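The comprehension style that BioVelo is built on can be mimicked in plain Python: a query connecting two object types is a nested comprehension with a filter, with no explicit join. The records and field names below are invented toy data, not the Pathway Tools schema.

```python
# Toy records standing in for DB objects; field names are hypothetical.
genes = [{"id": "g1", "product": "p1"}, {"id": "g2", "product": "p2"}]
pathways = [{"name": "glycolysis", "enzymes": ["p1"]},
            {"name": "TCA cycle", "enzymes": ["p3"]}]

# "Genes whose product participates in some pathway": the connection
# between object types is expressed by the comprehension itself,
# without invoking the SQL join concept.
hits = [(g["id"], pw["name"])
        for g in genes
        for pw in pathways
        if g["product"] in pw["enzymes"]]
print(hits)  # → [('g1', 'glycolysis')]
```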
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee
2007-08-15
We have previously presented a knowledge-based computer-assisted detection (KB-CADe) system for the detection of mammographic masses. The system is designed to compare a query mammographic region with mammographic templates of known ground truth. The templates are stored in an adaptive knowledge database. Image similarity is assessed with information theoretic measures (e.g., mutual information) derived directly from the image histograms. A previous study suggested that the diagnostic performance of the system steadily improves as the knowledge database is initially enriched with more templates. However, as the database increases in size, an exhaustive comparison of the query case with each stored template becomes computationally burdensome. Furthermore, blind storing of new templates may result in redundancies that do not necessarily improve diagnostic performance. To address these concerns we investigated an entropy-based indexing scheme for improving the speed of analysis and for satisfying database storage restrictions without compromising the overall diagnostic performance of our KB-CADe system. The indexing scheme was evaluated on two different datasets as (i) a search mechanism to sort through the knowledge database, and (ii) a selection mechanism to build a smaller, concise knowledge database that is easier to maintain but still effective. There were two important findings in the study. First, entropy-based indexing is an effective strategy to quickly identify a subset of templates that are most relevant to a given query. Only this subset need be analyzed in more detail using mutual information for optimized decision making regarding the query. Second, a selective entropy-based deposit strategy may be preferable, in which only high-entropy cases are maintained in the knowledge database. Overall, the proposed entropy-based indexing scheme was shown to reduce the computational cost of our KB-CADe system by 55% to 80% while maintaining the system's diagnostic performance.
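A minimal sketch of the two-stage idea, under the assumption that both stages operate on gray-level histograms: shortlist templates by closeness in histogram entropy, then rank only the shortlist by mutual information. Images are flat lists of gray levels; everything here is illustrative, not the authors' code.

```python
import math
from collections import Counter

def entropy(img):
    """Shannon entropy of an image's gray-level histogram (bits)."""
    n = len(img)
    return -sum((c / n) * math.log2(c / n) for c in Counter(img).values())

def mutual_information(a, b):
    """Mutual information between two equally sized images, computed
    from their joint and marginal gray-level histograms."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

def retrieve(query, templates, keep=2):
    """Entropy-based indexing: shortlist the templates whose histogram
    entropy is closest to the query's, then rank only that shortlist by
    the costlier mutual-information measure."""
    h_q = entropy(query)
    shortlist = sorted(templates, key=lambda t: abs(entropy(t) - h_q))[:keep]
    return max(shortlist, key=lambda t: mutual_information(query, t))
```

The entropy pass needs only one histogram per template and can be precomputed at deposit time, which is what makes the shortlist cheap relative to pairwise mutual information.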
SPARQL Query Re-writing Using Partonomy Based Transformation Rules
NASA Astrophysics Data System (ADS)
Jain, Prateek; Yeh, Peter Z.; Verma, Kunal; Henson, Cory A.; Sheth, Amit P.
Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. For querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach to help users intuitively write SPARQL queries to query spatial data, rather than relying on knowledge of the ontology structure. Our framework re-writes queries using transformation rules that exploit part-whole relations between geographical entities to address the mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries. Evaluations were performed on the Geonames dataset using questions from the National Geographic Bee serialized into SPARQL, and on the British Administrative Geography Ontology using questions from a popular trivia website. These experiments demonstrate high precision in retrieval of results and ease in writing queries.
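The part-whole expansion can be sketched as a simple rule: a location constraint on a whole region is rewritten into a UNION over the region and its transitive parts. The partonomy table and the `geo:locatedIn` property below are invented stand-ins, not the Geonames vocabulary.

```python
# Toy partonomy: part-whole relations between geographic entities.
PART_OF = {"England": ["Greater London", "Kent"],
           "Greater London": ["Camden", "Westminster"]}

def parts_closure(region):
    """All direct and transitive parts of a region."""
    out = []
    for p in PART_OF.get(region, []):
        out.append(p)
        out.extend(parts_closure(p))
    return out

def rewrite(pattern, region):
    """Re-write a basic graph pattern so a constraint on a whole region
    also matches entities located in any of its parts (UNION expansion)."""
    blocks = [pattern.replace("REGION", f'"{r}"')
              for r in [region] + parts_closure(region)]
    return " UNION ".join("{ " + b + " }" for b in blocks)

bgp = "?city geo:locatedIn REGION"
print(rewrite(bgp, "Greater London"))
# → { ?city geo:locatedIn "Greater London" } UNION { ?city geo:locatedIn "Camden" } UNION { ?city geo:locatedIn "Westminster" }
```

A user can thus write the query at whatever granularity is natural ("located in Greater London") and still retrieve results asserted only at the borough level.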
2006-06-01
SPARQL: SPARQL Protocol and RDF Query Language; SQL: Structured Query Language; SUMO: Suggested Upper Merged Ontology; SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure that querying a knowledge base is efficient. These algorithms... memory as a tree-like structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML
Implementation of Quantum Private Queries Using Nuclear Magnetic Resonance
NASA Astrophysics Data System (ADS)
Wang, Chuan; Hao, Liang; Zhao, Lian-Jie
2011-08-01
We present a modified protocol for the realization of a quantum private query process on a classical database. Using a one-qubit query and a CNOT operation, the query process can be realized in a two-mode database. In the query process, data privacy is preserved: the sender does not reveal any information about the database besides her query information, and the database provider cannot retain any information about the query. We implement the quantum private query protocol in a nuclear magnetic resonance system. The density matrices of the memory registers are constructed.
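For reference, the CNOT gate at the heart of such protocols flips the target qubit exactly when the control qubit is |1>. A toy state-vector sketch (this illustrates only the gate's action, not the NMR implementation or the privacy analysis):

```python
def cnot(state):
    """Apply CNOT (control = first qubit) to a two-qubit state.

    `state` is a list of four amplitudes over the basis
    |00>, |01>, |10>, |11>; CNOT swaps the |10> and |11> amplitudes
    and leaves the control-=|0> amplitudes untouched.
    """
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]
```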
A study of medical and health queries to web search engines.
Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk
2004-03-01
This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) analyzing medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.
Monitoring Moving Queries inside a Safe Region
Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan
2014-01-01
With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region, the set of objects of interest to the query does not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
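The linear-prediction idea can be sketched in a few lines: given the query's last observed position and an estimated velocity, solve for when its straight-line path crosses the safe-region boundary. The circular region, 2-D geometry, and function name below are assumptions for illustration, not the paper's algorithm:

```python
import math

def time_to_exit(pos, vel, center, radius):
    """Predict when a linearly moving query leaves a circular safe region.

    Solves |pos + t*vel - center| = radius for the smallest t >= 0.
    Returns None if the query is stationary (it never exits).
    """
    px, py = pos[0] - center[0], pos[1] - center[1]
    vx, vy = vel
    a = vx * vx + vy * vy
    if a == 0:
        return None
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4 * a * c          # positive whenever pos is inside
    t = (-b + math.sqrt(disc)) / (2 * a)   # larger root = exit time
    return max(t, 0.0)
```

The client would only contact the server shortly before the predicted exit time, rather than on every position update.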
Mhurchu, Cliona Ni; Eyles, Helen; Choi, Yeun-Hyang
2017-01-01
Interpretive, front-of-pack (FOP) nutrition labels may encourage reformulation of packaged foods. We aimed to evaluate the effects of the Health Star Rating (HSR), a new voluntary interpretive FOP labelling system, on food reformulation in New Zealand. Annual surveys of packaged food and beverage labelling and composition were undertaken in supermarkets before and after adoption of HSR i.e., 2014 to 2016. Outcomes assessed were HSR uptake by food group star ratings of products displaying a HSR label; nutritional composition of products displaying HSR compared with non-HSR products; and the composition of products displaying HSR labels in 2016 compared with their composition prior to introduction of HSR. In 2016, two years after adoption of the voluntary system, 5.3% of packaged food and beverage products surveyed (n = 807/15,357) displayed HSR labels. The highest rates of uptake were for cereals, convenience foods, packaged fruit and vegetables, sauces and spreads, and ‘Other’ products (predominantly breakfast beverages). Products displaying HSR labels had higher energy density but had significantly lower mean saturated fat, total sugar and sodium, and higher fibre, contents than non-HSR products (all p-values < 0.001). Small but statistically significant changes were observed in mean energy density (−29 KJ/100 g, p = 0.002), sodium (−49 mg/100 g, p = 0.03) and fibre (+0.5 g/100 g, p = 0.001) contents of HSR-labelled products compared with their composition prior to adoption of HSR. Reformulation of HSR-labelled products was greater than that of non-HSR-labelled products over the same period, e.g., energy reduction in HSR products was greater than in non-HSR products (−1.5% versus −0.4%), and sodium content of HSR products decreased by 4.6% while that of non-HSR products increased by 3.1%. We conclude that roll-out of the voluntary HSR labelling system is driving healthier reformulation of some products. 
Greater uptake across the full food supply should improve population diets. PMID:28829380
RDF-GL: A SPARQL-Based Graphical Query Language for RDF
NASA Astrophysics Data System (ADS)
Hogenboom, Frederik; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
This chapter presents RDF-GL, a graphical query language (GQL) for RDF. The GQL is based on the textual query language SPARQL and mainly focuses on SPARQL SELECT queries. The advantage of a GQL over textual query languages is that complexity is hidden through the use of graphical symbols. RDF-GL is supported by a Java-based editor, SPARQLinG, which is presented as well. The editor not only allows for RDF-GL query creation, but also converts RDF-GL queries to SPARQL queries and is able to subsequently execute them. Experiments show that using the GQL in combination with the editor makes RDF querying more accessible for end users.
Lee, Nam-Ju; Cho, Eunhee; Bakken, Suzanne
2010-03-01
The purposes of this study were to develop a taxonomy for detection of errors related to hypertension management and to apply the taxonomy to retrospectively analyze the documentation of nurses in Advanced Practice Nurse (APN) training. We developed the Hypertension Diagnosis and Management Error Taxonomy and applied it in a sample of adult patient encounters (N = 15,862) that were documented in a personal digital assistant-based clinical log by registered nurses in APN training. We used Structured Query Language (SQL) queries to retrieve hypertension-related data from the central database. The data were summarized using descriptive statistics. Blood pressure was documented in 77.5% (n = 12,297) of encounters; 21% had high blood pressure values. Missed diagnosis, incomplete diagnosis and misdiagnosis rates were 63.7%, 6.8% and 7.5% respectively. In terms of treatment, the omission rates were 17.9% for essential medications and 69.9% for essential patient teaching. Contraindicated anti-hypertensive medications were documented in 12% of encounters with co-occurring diagnoses of hypertension and asthma. The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management by nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies. Copyright © 2010 Korean Society of Nursing Science. All rights reserved.
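An illustrative sketch of the kind of SQL query that flags a "missed diagnosis": encounters with elevated blood pressure but no documented hypertension diagnosis. The schema, column names, and thresholds below are invented for illustration; the study's actual database layout is not described in that detail:

```python
import sqlite3

# Hypothetical clinical-log schema (not the study's real tables).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE encounter (id INTEGER PRIMARY KEY, systolic INT, diastolic INT);
CREATE TABLE diagnosis (encounter_id INT, code TEXT);
INSERT INTO encounter VALUES (1, 160, 100), (2, 118, 76), (3, 150, 95);
INSERT INTO diagnosis VALUES (3, 'hypertension');
""")

# High blood pressure with no hypertension diagnosis documented:
# the 'missed diagnosis' error category.
missed = con.execute("""
    SELECT e.id FROM encounter e
    WHERE (e.systolic >= 140 OR e.diastolic >= 90)
      AND e.id NOT IN (SELECT encounter_id FROM diagnosis
                       WHERE code = 'hypertension')
""").fetchall()
```

Here encounter 1 is flagged (elevated reading, no diagnosis) while encounter 3 is not (elevated but diagnosed).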
A nutrient profiling system for the (re)formulation of a global food and beverage portfolio.
Vlassopoulos, Antonis; Masset, Gabriel; Charles, Veronique Rheiner; Hoover, Cassandra; Chesneau-Guillemont, Caroline; Leroy, Fabienne; Lehmann, Undine; Spieldenner, Jörg; Tee, E-Siong; Gibney, Mike; Drewnowski, Adam
2017-04-01
To describe the Nestlé Nutritional Profiling System (NNPS) developed to guide the reformulation of Nestlé products, and the results of its application in the USA and France. The NNPS is a category-specific system that calculates nutrient targets per serving as consumed, based on age-adjusted dietary guidelines. Products are aggregated into 32 food categories. The NNPS ensures that excessive amounts of nutrients to limit cannot be compensated for by adding nutrients to encourage. A study was conducted to measure changes in nutrient profiles of the most widely purchased Nestlé products from eight food categories (n = 99) in the USA and France. A comparison was made between the 2009-2010 and 2014-2015 products. The application of the NNPS between 2009-2010 and 2014-2015 was associated with an overall downwards trend for all nutrients to limit. Sodium and total sugars contents were reduced by up to 22 and 31 %, respectively. Saturated fatty acid and total fat reductions were less homogeneous across categories, with children's products having larger reductions. Energy per serving was reduced by <10 % in most categories, while serving sizes remained unchanged. The NNPS sets feasible and yet challenging targets for public health-oriented reformulation of a varied product portfolio; its application was associated with improved nutrient density in eight major food categories in the USA and France. Confirmatory analyses are needed in other countries and food categories; the impact of such a large-scale reformulation on dietary intake and health remains to be investigated.
Cumulative query method for influenza surveillance using search engine data.
Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il
2014-12-16
Internet search queries have become an important data source in syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development set 2 and 2011/12 for validation set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, where n represents the number of cumulative combined queries in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but only 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but only 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than combined queries in the development and validation sets.
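The construction above can be sketched directly: rank queries by their correlation with ILI, then correlate the cumulative sums of the top 1, top 2, ... queries against ILI. This is a simplified reading of the method (function names and the plain summation are assumptions; the .7 screening step is omitted):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cumulative_methods(query_series, ili):
    """Rank query time series by correlation with ILI, then correlate the
    cumulative sum of the top-n series with ILI for each n."""
    ranked = sorted(query_series, key=lambda s: pearson(s, ili), reverse=True)
    results = []
    for n in range(1, len(ranked) + 1):
        combined = [sum(col) for col in zip(*ranked[:n])]
        results.append(pearson(combined, ili))
    return results
```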
A Query Integrator and Manager for the Query Web
Brinkley, James F.; Detwiler, Landon T.
2012-01-01
We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
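The chaining condition, that any saved query in any language qualifies as long as its results come back as XML, can be illustrated in a few lines. QI's real interfaces are web services; the callable below is a stand-in, and the `<id>` element name is an assumption:

```python
import xml.etree.ElementTree as ET

def chain(first_result_xml, second_query):
    """Chain two queries QI-style: the first query's XML result feeds
    the second query, regardless of what language either is written in.

    `second_query` is any callable taking a value and returning XML.
    """
    ids = [e.text for e in ET.fromstring(first_result_xml).iter("id")]
    return [second_query(i) for i in ids]
```

Because the only contract is "results are XML", a SPARQL query can feed an XQuery, which can feed another saved query, forming the interconnected Query Web the authors describe.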
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, S. A.; Kussmann, J.; Ochsenfeld, C., E-mail: Christian.Ochsenfeld@cup.uni-muenchen.de
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N^5) to O(N^3) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.
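The scaling reduction rests on the standard Laplace-transform identity for the orbital-energy denominator, which decouples the occupied (i, j) and virtual (a, b) indices so the expression can be evaluated in the atomic orbital basis. A generic textbook form is shown below; the paper's specific quadrature weights w_alpha and points t_alpha are not reproduced here:

```latex
\frac{1}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}
  = \int_0^\infty e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t}\,dt
  \approx \sum_{\alpha=1}^{\tau} w_\alpha\,
    e^{-(\varepsilon_a + \varepsilon_b)\,t_\alpha}\,
    e^{(\varepsilon_i + \varepsilon_j)\,t_\alpha},
```

valid because the denominator is positive for occupied orbitals i, j and virtual orbitals a, b. Each quadrature point then yields a separable contraction amenable to sparse linear algebra.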
Specification Reformulation During Specification Validation
NASA Technical Reports Server (NTRS)
Benner, Kevin M.
1992-01-01
The goal of the ARIES Simulation Component (ASC) is to uncover behavioral errors by 'running' a specification at the earliest possible points during the specification development process. The problems to be overcome are the obvious ones: the specification may be large, incomplete, underconstrained, and/or uncompilable. This paper describes how specification reformulation is used to mitigate these problems. ASC begins by decomposing validation into specific validation questions. Next, the specification is reformulated to abstract out all those features unrelated to the identified validation question, thus creating a new specialized specification. ASC relies on a precise statement of the validation question and a careful application of transformations so as to preserve the essential specification semantics in the resulting specialized specification. This technique is a win if the resulting specialized specification is small enough that the user may easily handle any remaining obstacles to execution. This paper will: (1) describe what a validation question is; (2) outline analysis techniques for identifying what concepts are and are not relevant to a validation question; and (3) identify and apply transformations which remove these less relevant concepts while preserving those which are relevant.
Using Generalized Annotated Programs to Solve Social Network Diffusion Optimization Problems
2013-01-01
as follows: —Let kall be the k value for the SNDOP-ALL query and for each SNDOP query i, let ki be the k for that query. For each query i, set ki... kall − 1. —Number each element of vi ∈ V such that gI(vi) and V C(vi) are true. For the ith SNDOP query, let vi be the corresponding element of V —Let...vertices of S. PROOF. We set up |V | SNDOP-queries as follows: —Let kall be the k value for the SNDOP-ALL query and for each SNDOP-query i, let ki be
A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.
Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei
2013-10-08
Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. 
The accuracy of the three queries (ie, "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting," and "treatments for liver cancer") was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks.
A relativistic analysis of clock synchronization
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1974-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar-system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring earth-bound proper time or atomic time). After an interpretation of terms, this simplified formulation, which has a rate accuracy of about 10 to the minus 15th power, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques: portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
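At the quoted level of rate accuracy, the conversion between an earth-bound clock's proper time and barycentric coordinate time reduces to a standard low-order form (shown here as a generic textbook expression, not necessarily the paper's exact grouping of terms):

```latex
\frac{d\tau}{dt} \;\approx\; 1 \;-\; \frac{U}{c^{2}} \;-\; \frac{v^{2}}{2c^{2}},
```

where tau is the clock's proper (atomic) time, t the solar-system barycentric coordinate time, U the Newtonian gravitational potential at the clock, and v its barycentric speed. Both correction terms are of order 10^-8 in rate, so dropping higher-order terms is consistent with the stated 10^-15 accuracy.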
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
Jadhav, Ashutosh; Andrews, Donna; Fiksdal, Alexander; Kumbamu, Ashok; McCormick, Jennifer B; Misitano, Andrew; Nelsen, Laurie; Ryu, Euijung; Sheth, Amit; Wu, Stephen; Pathak, Jyotishman
2014-01-01
Background The number of people using the Internet and mobile/smart devices for health information seeking is increasing rapidly. Although the user experience for online health information seeking varies with the device used, for example, smart devices (SDs) like smartphones/tablets versus personal computers (PCs) like desktops/laptops, very few studies have investigated how online health information seeking behavior (OHISB) may differ by device. Objective The objective of this study is to examine differences in OHISB between PCs and SDs through a comparative analysis of large-scale health search queries submitted through Web search engines from both types of devices. Methods Using the Web analytics tool, IBM NetInsight OnDemand, and based on the type of devices used (PCs or SDs), we obtained the most frequent health search queries between June 2011 and May 2013 that were submitted on Web search engines and directed users to the Mayo Clinic’s consumer health information website. We performed analyses on “Queries with considering repetition counts (QwR)” and “Queries without considering repetition counts (QwoR)”. The dataset contains (1) 2.74 million and 3.94 million QwoR, respectively for PCs and SDs, and (2) more than 100 million QwR for both PCs and SDs. We analyzed structural properties of the queries (length of the search queries, usage of query operators and special characters in health queries), types of search queries (keyword-based, wh-questions, yes/no questions), categorization of the queries based on health categories and information mentioned in the queries (gender, age-groups, temporal references), misspellings in the health queries, and the linguistic structure of the health queries. Results Query strings used for health information searching via PCs and SDs differ by almost 50%. The most searched health categories are “Symptoms” (1 in 3 search queries), “Causes”, and “Treatments & Drugs”. 
The distribution of search queries for different health categories differs with the device used for the search. Health queries tend to be longer and more specific than general search queries. Health queries from SDs are longer and have slightly fewer spelling mistakes than those from PCs. Users specify words related to women and children more often than that of men and any other age group. Most of the health queries are formulated using keywords; the second-most common are wh- and yes/no questions. Users ask more health questions using SDs than PCs. Almost all health queries have at least one noun and health queries from SDs are more descriptive than those from PCs. Conclusions This study is a large-scale comparative analysis of health search queries to understand the effects of device type (PCs vs SDs) used on OHISB. The study indicates that the device used for online health information search plays an important role in shaping how health information searches by consumers and patients are executed. PMID:25000537
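The keyword / wh-question / yes-no breakdown described above can be sketched as a simple rule-based classifier. The regexes below are illustrative assumptions; the study's actual categorization rules are not specified in the abstract:

```python
import re

WH = re.compile(r"^(what|how|why|when|where|which|who)\b", re.I)
YESNO = re.compile(r"^(is|are|can|do|does|should|will|could)\b", re.I)

def classify(query):
    """Bucket a health query as wh-question, yes/no question, or
    plain keyword query (the most common type in such logs)."""
    q = query.strip()
    if WH.match(q):
        return "wh-question"
    if YESNO.match(q):
        return "yes/no"
    return "keyword"
```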
Prediction of user preference over shared-control paradigms for a robotic wheelchair.
Erdogan, Ahmetcan; Argall, Brenna D
2017-07-01
The design of intelligent powered wheelchairs has traditionally focused heavily on providing effective and efficient navigation assistance. Significantly less attention has been given to the end-user's preference between different assistance paradigms. It is possible to include these subjective evaluations in the design process, for example by soliciting feedback in post-experiment questionnaires. However, constantly querying the user for feedback during real-world operation is not practical. In this paper, we present a model that correlates objective performance metrics and subjective evaluations of autonomous wheelchair control paradigms. Using off-the-shelf machine learning techniques, we show that it is possible to build a model that can predict the most preferred shared-control method from task execution metrics such as effort, safety, performance and utilization. We further characterize the relative contributions of each of these metrics to the individual choice of most preferred assistance paradigm. Our evaluation includes Spinal Cord Injured (SCI) and uninjured subject groups. The results show that our proposed correlation model enables the continuous tracking of user preference and offers the possibility of autonomy that is customized to each user.
SkyQuery - A Prototype Distributed Query and Cross-Matching Web Service for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Thakar, A. R.; Budavari, T.; Malik, T.; Szalay, A. S.; Fekete, G.; Nieto-Santisteban, M.; Haridas, V.; Gray, J.
2002-12-01
We have developed a prototype distributed query and cross-matching service for the VO community, called SkyQuery, which is implemented with hierarchical Web Services. SkyQuery enables astronomers to run combined queries on existing distributed heterogeneous astronomy archives. SkyQuery provides a simple, user-friendly interface to run distributed queries over the federation of registered astronomical archives in the VO. The SkyQuery client connects to the portal Web Service, which farms the query out to the individual archives, which are also Web Services called SkyNodes. The cross-matching algorithm is run recursively on each SkyNode. Each archive is a relational DBMS with an HTM index for fast spatial lookups. The results of the distributed query are returned as an XML DataSet that is automatically rendered by the client. SkyQuery also returns the image cutout corresponding to the query result. SkyQuery finds not only matches between the various catalogs, but also dropouts - objects that exist in some of the catalogs but not in others. This is often as important as finding matches. We demonstrate the utility of SkyQuery with a brown-dwarf search between SDSS and 2MASS, and a search for radio-quiet quasars in SDSS, 2MASS and FIRST. The importance of a service like SkyQuery for the worldwide astronomical community cannot be overstated: data on the same objects in various archives is mapped in different wavelength ranges and looks very different due to different errors, instrument sensitivities and other peculiarities of each archive. Our cross-matching algorithm performs a fuzzy spatial join across multiple catalogs. This type of cross-matching is currently often done by eye, one object at a time. A static cross-identification table for a set of archives would become obsolete by the time it was built - the exponential growth of astronomical data means that a dynamic cross-identification mechanism like SkyQuery is the only viable option.
SkyQuery was funded by a grant from the NASA AISR program.
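The fuzzy spatial join described above can be sketched minimally in Python: pair each object in one catalog with its nearest neighbour in another if they lie within an angular tolerance, and flag the rest as dropouts. The catalog rows and tolerance below are illustrative, and a real service would use an HTM index rather than this brute-force scan.

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (haversine on the celestial sphere)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def fuzzy_cross_match(cat_a, cat_b, tol_deg=1.0 / 3600):
    """Fuzzy spatial join: pair each object in cat_a with its nearest
    neighbour in cat_b if within tol_deg; otherwise flag it as a dropout."""
    matches, dropouts = [], []
    for oid_a, ra_a, dec_a in cat_a:
        best = min(cat_b, key=lambda o: ang_sep_deg(ra_a, dec_a, o[1], o[2]))
        if ang_sep_deg(ra_a, dec_a, best[1], best[2]) <= tol_deg:
            matches.append((oid_a, best[0]))
        else:
            dropouts.append(oid_a)
    return matches, dropouts

# Illustrative rows: (object id, RA in degrees, Dec in degrees)
sdss_like = [("s1", 10.00000, 20.00000), ("s2", 30.0, -5.0)]
tmass_like = [("t1", 10.00005, 20.00005), ("t2", 90.0, 0.0)]
m, d = fuzzy_cross_match(sdss_like, tmass_like)
print(m, d)
```

With a one-arcsecond tolerance, "s1" pairs with "t1" and "s2" is reported as a dropout, mirroring SkyQuery's match-and-dropout output.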
Layton, Natasha
2015-05-01
Substantial evidence supports assistive technology and environmental adaptations as key enablers to participation. In order to realise the potential of these interventions, they need to be both recognised in policy, and resourced in practice. This paper uses political theory to understand the complexities of assistive technology (AT) policy reform in Australia. AT research will not be influential in improving AT policy without consideration of political drivers. Theories of policy formation are considered, with Kingdon's (2003) theory of multiple streams identified as a useful lens through which to understand government actions. This theory is applied to the case of current AT policy reformulation in Australia. The convergence model of problem identification, policy formulation and political will is found to be an applicable construct with which to evaluate contemporary policy changes. This paper illustrates the cogency of this theory for the field of AT, in the case of Australia's recent disability and aged care reforms. Political theory provides a way of conceptualising the difficulties consumers and AT practitioners experience in getting therapeutically valid solutions into public policy, and then getting policies prioritised and funded. AT practitioners generally lack political awareness or an understanding of the drivers of policy, and their effectiveness at a systemic level will remain limited without consideration of those drivers. It is therefore suggested that AT practitioners must comprehend and consider political factors in working towards effective policies to support their practice.
NASA Taxonomy 2.0 Project Overview
NASA Technical Reports Server (NTRS)
Dutra, Jayne; Busch, Joseph
2004-01-01
This viewgraph presentation reviews the project to develop a taxonomy for NASA. The benefits of this project are to make it easy for various audiences to find relevant information from NASA programs quickly, specifically: (1) provide easy access to NASA Web resources; (2) support information integration for unified queries and management reporting, search results targeted to user interests, and the ability to move content through the enterprise to where it is needed most; (3) facilitate records management and retention requirements. In addition, the project will assist NASA in complying with the E-Government Act of 2002 and prepare NASA to participate in federal projects.
Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.
Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S
2014-11-01
Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve the retrieval performance. Since radiological labels show large interuser variability, are often unstructured, and require user interaction, their use as lesion-characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework are evaluated on a database of 44 hepatic lesions comprising five pathological types. A bull's-eye percentage score above 85% is achieved for three out of the five lesion pathologies, and for 98% of query lesions at least one lesion of the same type is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than that of other already published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.
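The retrieval step itself reduces to nearest-neighbour ranking over feature vectors. A minimal sketch, with invented pathology labels and 3-component vectors standing in for the paper's spatiotemporal features:

```python
def retrieve(query_vec, database, k=2):
    """Rank database lesions by Euclidean distance between feature vectors
    and return the top k -- the ranking behind a bull's-eye-style score."""
    dist = lambda v: sum((a - b) ** 2 for a, b in zip(query_vec, v)) ** 0.5
    return sorted(database, key=lambda item: dist(item[1]))[:k]

# (pathology label, illustrative feature vector) -- not real CT features
db = [("cyst", [0.10, 0.90, 0.20]),
      ("hemangioma", [0.80, 0.20, 0.70]),
      ("cyst", [0.15, 0.85, 0.25])]
top2 = retrieve([0.12, 0.88, 0.22], db)
print([label for label, _ in top2])
```

For this query both of the top-two results share the query's (hypothetical) pathology, which is the "at least one same type in the top two" criterion the abstract reports.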
A bibliometric analysis of Australian general practice publications from 1980 to 2007 using PubMed.
Mendis, Kumara; Kidd, Michael R; Schattner, Peter; Canalese, Joseph
2010-01-01
We analysed Australian general practice (GP) publications in PubMed from 1980 to 2007 to determine journals, authors, publication types and National Health Priority Areas (NHPA), and compared the results with those from three specialties (public health, cardiology and medical informatics) and two countries (the UK and New Zealand). Australian GP publications were downloaded in MEDLINE format using PubMed queries and were written to a Microsoft Access database using a software application. Structured Query Language and online PubMed queries were used for further analysis. There were 4777 publications from 1980 to 2007. Australian Family Physician (38.1%) and the Medical Journal of Australia (17.6%) contributed 55.7% of publications. Reviews (12.7%), letters (6.6%), clinical trials (6.5%) and systematic reviews (5%) were the main PubMed publication types. Thirty-five percent of publications addressed National Health Priority Areas, with material on mental health (13.7%), neoplasms (6.5%) and cardiovascular conditions (5.9%). The comparable numbers of publications for the three specialties were: public health - 80 911, cardiology - 15 130 and medical informatics - 3338; total country GP comparisons were: UK - 14 658 and New Zealand - 1111. Australian GP publications have shown impressive growth from 1980 to 2007, with a 15-fold increase. This increase may be due in part to the actions of the Australian government over the past decade to financially support research in primary care, as well as the maturing of academic general practice. This analysis can assist governments, researchers, policy makers and others to target resources so that further developments can be encouraged, supported and monitored.
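The study's tally of publication types comes from the `PT` field of MEDLINE-format records (the authors used an Access database and SQL; this is only an illustrative Python equivalent, with invented sample records):

```python
from collections import Counter

def count_publication_types(medline_text):
    """Tally 'PT  -' (publication type) fields across MEDLINE-format records."""
    counts = Counter()
    for line in medline_text.splitlines():
        if line.startswith("PT  - "):
            counts[line[6:].strip()] += 1
    return counts

sample = """PMID- 1
PT  - Review
PT  - Journal Article

PMID- 2
PT  - Letter
"""
counts = count_publication_types(sample)
print(counts)
```

Running the same tally over a full MEDLINE export yields the per-type percentages reported in the abstract.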
Teng, Rui; Leibnitz, Kenji; Miura, Ryu
2013-01-01
An essential application of wireless sensor networks is to successfully respond to user queries. Query packet losses occur during query dissemination due to wireless communication problems such as interference, multipath fading and packet collisions. The loss of query messages at sensor nodes results in those nodes failing to report the requested data. Hence, the reliable and successful dissemination of query messages to sensor nodes is a non-trivial problem. The target of this paper is to enable highly successful query delivery to sensor nodes by localized and energy-efficient discovery and recovery of query losses. We adopt local and collective cooperation among sensor nodes to increase the success rate of distributed discoveries and recoveries. To enable scalability in the discovery and recovery operations, we employ a distributed name resolution mechanism at each sensor node that allows sensor nodes to self-detect correlated queries and query losses, and then respond to the query losses locally and efficiently. We prove that the collective discovery of query losses has a high impact on the success of query dissemination and reveal that scalability can be achieved by using the proposed approach. We further study the novel features of cooperation and competition in collective recovery at the PHY and MAC layers, and show that an appropriate number of detectors can achieve an optimal recovery success rate. We evaluate the proposed approach with both mathematical analyses and computer simulations. The proposed approach enables a high rate of successful delivery of query messages and results in short route lengths for recovering from query losses. The proposed approach is scalable and operates in a fully distributed manner. PMID:23748172
NASA Astrophysics Data System (ADS)
Li, C.; Zhu, X.; Guo, W.; Liu, Y.; Huang, H.
2015-05-01
According to the characteristics of indoor space, a method suitable for complex indoor semantic queries that considers the computation of indoor spatial relations is provided. This paper designs an ontology model describing the space-related information of humans, events and indoor space objects (e.g. storey and room) as well as their relations, to meet the needs of indoor semantic query. The ontology concepts are used in IndoorSPARQL, a query language which extends SPARQL syntax for representing and querying indoor space. Four specific primitives for indoor query, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyse the query language. Finally, this paper adopts this method to realize indoor semantic query on the study area by constructing the ontology model for the study building. The experimental results show that the proposed method can effectively support complex indoor-space semantic queries.
VISAGE: Interactive Visual Graph Querying.
Pienta, Robert; Navathe, Shamkant; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng
2016-06-01
Extracting useful patterns from large network datasets has become a fundamental challenge in many domains. We present VISAGE, an interactive visual graph querying approach that empowers users to construct expressive queries, without writing complex code (e.g., finding money laundering rings of bankers and business owners). Our contributions are as follows: (1) we introduce graph autocomplete , an interactive approach that guides users to construct and refine queries, preventing over-specification; (2) VISAGE guides the construction of graph queries using a data-driven approach, enabling users to specify queries with varying levels of specificity, from concrete and detailed (e.g., query by example), to abstract (e.g., with "wildcard" nodes of any types), to purely structural matching; (3) a twelve-participant, within-subject user study demonstrates VISAGE's ease of use and the ability to construct graph queries significantly faster than using a conventional query language; (4) VISAGE works on real graphs with over 468K edges, achieving sub-second response times for common queries.
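VISAGE's spectrum from concrete to wildcard matching can be illustrated with a brute-force labeled subgraph matcher; the graph, node types and pattern below are invented, and a real system would use indexed matching rather than enumerating permutations.

```python
from itertools import permutations

def match_pattern(graph, pattern):
    """Find all mappings of pattern nodes onto graph nodes such that node
    types agree (a pattern type of None is a 'wildcard') and every pattern
    edge exists in the graph.
    graph/pattern: (node_types: dict, edges: iterable of node pairs)."""
    g_types, g_edges = graph
    p_types, p_edges = pattern
    p_nodes = list(p_types)
    results = []
    for combo in permutations(g_types, len(p_nodes)):
        mapping = dict(zip(p_nodes, combo))
        if all(p_types[p] in (None, g_types[mapping[p]]) for p in p_nodes) and \
           all(frozenset((mapping[a], mapping[b])) in g_edges for a, b in p_edges):
            results.append(mapping)
    return results

# Toy graph: a banker linked to two business owners.
graph = ({"n1": "banker", "n2": "owner", "n3": "owner"},
         {frozenset(("n1", "n2")), frozenset(("n1", "n3"))})
# Pattern: a banker connected to any node (wildcard type).
pattern = ({"p1": "banker", "p2": None}, [("p1", "p2")])
matches = match_pattern(graph, pattern)
print(matches)
```

The wildcard node lets one query range from fully typed (query by example) to purely structural matching, which is the specificity spectrum the abstract describes.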
VISAGE: Interactive Visual Graph Querying
Pienta, Robert; Navathe, Shamkant; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng
2017-01-01
Extracting useful patterns from large network datasets has become a fundamental challenge in many domains. We present VISAGE, an interactive visual graph querying approach that empowers users to construct expressive queries, without writing complex code (e.g., finding money laundering rings of bankers and business owners). Our contributions are as follows: (1) we introduce graph autocomplete, an interactive approach that guides users to construct and refine queries, preventing over-specification; (2) VISAGE guides the construction of graph queries using a data-driven approach, enabling users to specify queries with varying levels of specificity, from concrete and detailed (e.g., query by example), to abstract (e.g., with “wildcard” nodes of any types), to purely structural matching; (3) a twelve-participant, within-subject user study demonstrates VISAGE’s ease of use and the ability to construct graph queries significantly faster than using a conventional query language; (4) VISAGE works on real graphs with over 468K edges, achieving sub-second response times for common queries. PMID:28553670
A Visual Interface for Querying Heterogeneous Phylogenetic Databases.
Jamil, Hasan M
2017-01-01
Despite the recent growth in the number of phylogenetic databases, access to this wealth of resources remains largely driven by tools or form-based interfaces. It is our thesis that the flexibility afforded by declarative query languages may offer the opportunity to access these repositories in a better way, and that such a language can be used to pose truly powerful queries in unprecedented ways. In this paper, we propose a substantially enhanced closed visual query language, called PhyQL, that can be used to query phylogenetic databases represented in a canonical form. The canonical representation presented helps capture most phylogenetic tree formats in a convenient way, and is used as the storage model for our PhyloBase database, for which PhyQL serves as the query language. We have implemented a visual interface for end users to pose PhyQL queries using visual icons and drag-and-drop operations defined over them. Once a query is posed, the interface translates the visual query into a Datalog query for execution over the canonical database. Responses are returned as hyperlinks to phylogenies that can be viewed in several formats using the tree viewers supported by PhyloBase. Results cached in the PhyQL buffer allow secondary querying on the computed results, making it a truly powerful querying architecture.
Which factors predict the time spent answering queries to a drug information centre?
Reppe, Linda A.; Spigset, Olav
2010-01-01
Objective: To develop a model based upon factors able to predict the time spent answering drug-related queries to Norwegian drug information centres (DICs). Setting and method: Drug-related queries received at 5 DICs in Norway from March to May 2007 were randomly assigned to 20 employees until each of them had answered a minimum of five queries. The employees reported the number of drugs involved, the type of literature search performed, and whether the queries were considered judgmental or not, using a specifically developed scoring system. Main outcome measures: The scores of these three factors were added together to define a workload score for each query. Workload and its individual factors were subsequently related to the measured time spent answering the queries by simple or multiple linear regression analyses. Results: Ninety-six query/answer pairs were analyzed. Workload significantly predicted the time spent answering the queries (adjusted R2 = 0.22, P < 0.001). Literature search was the individual factor that best predicted the time spent answering the queries (adjusted R2 = 0.17, P < 0.001), and this variable also contributed the most in the multiple regression analyses. Conclusion: The most important workload factor predicting the time spent handling the queries in this study was the type of literature search that had to be performed. The categorisation of queries as judgmental or not also affected the time spent answering the queries. The number of drugs involved did not significantly influence the time spent answering drug information queries. PMID:20922480
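The workload model is additive (the three factor scores are summed) and then related to answering time by linear regression. A minimal sketch with invented query records and hand-rolled ordinary least squares, not the study's data:

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical records: (drugs score, search score, judgmental score, minutes)
records = [(1, 1, 0, 30), (2, 2, 1, 75), (1, 3, 1, 90), (3, 2, 0, 70)]
workload = [dr + se + ju for dr, se, ju, _ in records]  # additive workload
minutes = [m for *_, m in records]
a, b, r2 = fit_simple_regression(workload, minutes)
print(round(b, 1), round(r2, 2))
```

The slope b estimates minutes per unit of workload, and r2 plays the role of the adjusted R2 the study reports (unadjusted here for brevity).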
Personalized query suggestion based on user behavior
NASA Astrophysics Data System (ADS)
Chen, Wanyu; Hao, Zepeng; Shao, Taihua; Chen, Honghui
Query suggestions help users refine their queries after they input an initial query. Previous work mainly concentrated on similarity-based and context-based query suggestion approaches. However, models that focus on adapting to a specific user (personalization) can help to improve the probability of the user being satisfied. In this paper, we propose a personalized query suggestion model based on users’ search behavior (UB model), where we inject relevance between queries and users’ search behavior into a basic probabilistic model. For the relevance between queries, we consider their semantic similarity and co-occurrence, which reflects behavioral information from other users in web search. Regarding the current user’s preference for a query, we combine the user’s short-term and long-term search behavior in a linear fashion and deal with the data sparsity problem using Bayesian probabilistic matrix factorization (BPMF). In particular, we also investigate the impact of different personalization strategies (the combination of the user’s short-term and long-term search behavior) on the performance of query suggestion reranking. We quantify the improvement of our proposed UB model against a state-of-the-art baseline using the public AOL query logs and show that it beats the baseline in terms of the metrics used in query suggestion reranking. The experimental results show that: (i) for personalized ranking, users’ behavioral information helps to improve query suggestion effectiveness; and (ii) given a query, merging information inferred from the short-term and long-term search behavior of a particular user can result in better performance than either plain approach.
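The linear short-term/long-term combination at the core of the UB model can be sketched as below; the candidate queries, scores and mixing weight are invented, and the BPMF treatment of sparsity is omitted.

```python
def personalized_score(base_prob, short_term, long_term, lam=0.6):
    """Rerank score: relevance from a basic probabilistic model, weighted
    by a linear mix of short-term and long-term user preference (lam is
    the personalization strategy knob the paper varies)."""
    preference = lam * short_term + (1 - lam) * long_term
    return base_prob * preference

# (base model probability, short-term pref, long-term pref) per candidate
candidates = {"flu symptoms": (0.30, 0.9, 0.2),
              "flu shot near me": (0.25, 0.1, 0.8)}
ranked = sorted(candidates,
                key=lambda q: personalized_score(*candidates[q]),
                reverse=True)
print(ranked)
```

Shifting lam towards 1 weights recent session behavior more heavily, which is exactly the strategy comparison the abstract describes.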
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-08-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
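The partition-then-local-join pattern behind such systems can be sketched as follows; a uniform grid stands in for Hadoop-GIS's global partition index, the points are invented, and the boundary-object amendment step the abstract mentions is omitted.

```python
from collections import defaultdict

def partition(points, cell=10.0):
    """Assign each point to a grid tile -- a stand-in for global partition
    indexing; each tile would be processed by one MapReduce task."""
    tiles = defaultdict(list)
    for pid, x, y in points:
        tiles[(int(x // cell), int(y // cell))].append((pid, x, y))
    return tiles

def local_join(tile_a, tile_b, radius=1.0):
    """Within one tile, pair points from two datasets that lie close."""
    return [(a[0], b[0]) for a in tile_a for b in tile_b
            if (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2 <= radius ** 2]

dataset_a = [("a1", 1.0, 1.0), ("a2", 25.0, 3.0)]
dataset_b = [("b1", 1.5, 1.2), ("b2", 11.0, 2.0)]
ta, tb = partition(dataset_a), partition(dataset_b)
pairs = [p for tile in set(ta) & set(tb)
         for p in local_join(ta[tile], tb[tile])]
print(pairs)
```

Because joins run independently per tile, the work parallelizes naturally; handling pairs that straddle tile boundaries is the extra complication the real system addresses.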
Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce
Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel
2013-01-01
Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650
Woo, Hyekyung; Cho, Youngtae; Shim, Eunyoung; Lee, Jong-Koo; Lee, Chang-Gun; Kim, Seong Hwan
2016-07-04
As suggested as early as 2006, logs of queries submitted to search engines seeking information could be a source for detection of emerging influenza epidemics if changes in the volume of search queries are monitored (infodemiology). However, selecting queries that are most likely to be associated with influenza epidemics is a particular challenge when it comes to generating better predictions. In this study, we describe a methodological extension for detecting influenza outbreaks using search query data; we provide a new approach for query selection through the exploration of contextual information gleaned from social media data. Additionally, we evaluate whether it is possible to use these queries for monitoring and predicting influenza epidemics in South Korea. Our study was based on freely available weekly influenza incidence data and query data originating from the search engine on the Korean website Daum between April 3, 2011 and April 5, 2014. To select queries related to influenza epidemics, several approaches were applied: (1) exploring influenza-related words in social media data, (2) identifying the chief concerns related to influenza, and (3) using Web query recommendations. Optimal feature selection by least absolute shrinkage and selection operator (Lasso) and support vector machine for regression (SVR) were used to construct a model predicting influenza epidemics. In total, 146 queries related to influenza were generated through our initial query selection approach. A considerable proportion of optimal features for final models were derived from queries with reference to the social media data. The SVR model performed well: the prediction values were highly correlated with the recent observed influenza-like illness (r=.956; P<.001) and virological incidence rate (r=.963; P<.001). These results demonstrate the feasibility of using search queries to enhance influenza surveillance in South Korea.
In addition, an approach for query selection using social media data seems ideal for supporting influenza surveillance based on search query data.
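The select-then-predict pipeline can be sketched minimally as below. Plain Pearson correlation stands in for Lasso-based feature selection, the SVR stage is omitted, and the weekly series are invented, so this is an illustration of the shape of the method rather than the study's model.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Invented weekly volumes for candidate queries vs. observed incidence
incidence = [1.0, 2.0, 4.0, 8.0, 5.0, 2.0]
queries = {"flu fever": [10, 21, 39, 82, 48, 19],
           "holiday recipes": [50, 48, 52, 49, 51, 50]}
# Keep only queries whose volume tracks the incidence curve
selected = [q for q, v in queries.items() if abs(pearson(v, incidence)) > 0.8]
print(selected)
```

Queries surviving this filter would then feed a regression model (SVR in the study) to nowcast the incidence rate.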
Woo, Hyekyung; Shim, Eunyoung; Lee, Jong-Koo; Lee, Chang-Gun; Kim, Seong Hwan
2016-01-01
Background: As suggested as early as 2006, logs of queries submitted to search engines seeking information could be a source for detection of emerging influenza epidemics if changes in the volume of search queries are monitored (infodemiology). However, selecting queries that are most likely to be associated with influenza epidemics is a particular challenge when it comes to generating better predictions. Objective: In this study, we describe a methodological extension for detecting influenza outbreaks using search query data; we provide a new approach for query selection through the exploration of contextual information gleaned from social media data. Additionally, we evaluate whether it is possible to use these queries for monitoring and predicting influenza epidemics in South Korea. Methods: Our study was based on freely available weekly influenza incidence data and query data originating from the search engine on the Korean website Daum between April 3, 2011 and April 5, 2014. To select queries related to influenza epidemics, several approaches were applied: (1) exploring influenza-related words in social media data, (2) identifying the chief concerns related to influenza, and (3) using Web query recommendations. Optimal feature selection by least absolute shrinkage and selection operator (Lasso) and support vector machine for regression (SVR) were used to construct a model predicting influenza epidemics. Results: In total, 146 queries related to influenza were generated through our initial query selection approach. A considerable proportion of optimal features for final models were derived from queries with reference to the social media data. The SVR model performed well: the prediction values were highly correlated with the recent observed influenza-like illness (r=.956; P<.001) and virological incidence rate (r=.963; P<.001). Conclusions: These results demonstrate the feasibility of using search queries to enhance influenza surveillance in South Korea.
In addition, an approach for query selection using social media data seems ideal for supporting influenza surveillance based on search query data. PMID:27377323
Schuers, Matthieu; Joulakian, Mher; Kerdelhué, Gaetan; Segas, Léa; Grosjean, Julien; Darmoni, Stéfan J; Griffon, Nicolas
2017-07-03
MEDLINE is the most widely used medical bibliographic database in the world. Most of its citations are in English, which can be an obstacle for some researchers to access the information the database contains. We created a multilingual query builder to facilitate access to the PubMed subset using a language other than English. The aim of our study was to assess the impact of this multilingual query builder on the quality of PubMed queries for non-native English speaking physicians and medical researchers. A randomised controlled study was conducted among French-speaking general practice residents. We designed a multilingual query builder to facilitate information retrieval, based on available MeSH translations and providing users with both an interface and a controlled vocabulary in their own language. Participating residents were randomly allocated either the French or the English version of the query builder. They were asked to translate 12 short medical questions into MeSH queries. The main outcome was the quality of the query. Two librarians, blinded to the allocated arm, independently evaluated each query, using a modified published classification that differentiated eight types of errors. Twenty residents used the French version of the query builder and 22 used the English version. 492 queries were analysed. There were significantly more perfect queries in the French group than in the English group (37.9% vs. 17.9%; p < 0.01). Members of the English group took significantly longer than members of the French group to build each query (194 s vs. 128 s; p < 0.01). This multilingual query builder is an effective tool for improving the quality of PubMed queries, in particular for researchers whose first language is not English.
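The core idea of such a builder is to map native-language terms to their MeSH headings and assemble a tagged PubMed query. A minimal sketch, with a tiny invented French-to-MeSH table standing in for the real translation data; `[MeSH Terms]` is standard PubMed field-tag syntax.

```python
# Tiny French -> MeSH lookup (illustrative entries only)
FR_TO_MESH = {"grippe": "Influenza, Human",
              "vaccination": "Vaccination",
              "sujet âgé": "Aged"}

def build_pubmed_query(terms_fr):
    """Translate native-language terms to MeSH headings and AND them."""
    translated = [FR_TO_MESH[t] for t in terms_fr if t in FR_TO_MESH]
    return " AND ".join(f'"{t}"[MeSH Terms]' for t in translated)

print(build_pubmed_query(["grippe", "sujet âgé"]))
```

The user works entirely in French while the emitted query uses the controlled English vocabulary PubMed indexes on, which is what lets non-native speakers produce well-formed queries.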
A Web-Based Data-Querying Tool Based on Ontology-Driven Methodology and Flowchart-Based Model
Ping, Xiao-Ou; Chung, Yufang; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei
2013-01-01
Background: Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. Objective: The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. Methods: The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. Results: In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients.
The accuracy of the three queries (ie, “degree of liver damage,” “degree of liver damage when applying a mutually exclusive setting,” and “treatments for liver cancer”) was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution times. Conclusions: The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks. PMID:25600078
Reducing salt in food; setting product-specific criteria aiming at a salt intake of 5 g per day.
Dötsch-Klerk, M; Goossens, W P M M; Meijer, G W; van het Hof, K H
2015-07-01
There is an increasing public health concern regarding high salt intake, which is generally between 9 and 12 g per day, and much higher than the 5 g recommended by World Health Organization. Several relevant sectors of the food industry are engaged in salt reduction, but it is a challenge to reduce salt in products without compromising on taste, shelf-life or expense for consumers. The objective was to develop globally applicable salt reduction criteria as guidance for product reformulation. Two sets of product group-specific sodium criteria were developed to reduce salt levels in foods to help consumers reduce their intake towards an interim intake goal of 6 g/day, and—on the longer term—5 g/day. Data modelling using survey data from the United States, United Kingdom and Netherlands was performed to assess the potential impact on population salt intake of cross-industry food product reformulation towards these criteria. Modelling with 6 and 5 g/day criteria resulted in estimated reductions in population salt intake of 25 and 30% for the three countries, respectively, the latter representing an absolute decrease in the median salt intake of 1.8-2.2 g/day. The sodium criteria described in this paper can serve as guidance for salt reduction in foods. However, to enable achieving an intake of 5 g/day, salt reduction should not be limited to product reformulation. A multi-stakeholder approach is needed to make consumers aware of the need to reduce their salt intake. Nevertheless, dietary impact modelling shows that product reformulation by food industry has the potential to contribute substantially to salt-intake reduction.
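The quoted figures can be cross-checked with simple arithmetic: a 30% relative reduction that corresponds to an absolute drop of 1.8-2.2 g/day implies a baseline median intake, which should fall in the 9-12 g/day range only after noting the survey medians differ by country. A small sketch of that back-calculation (the function name is ours, not the paper's):

```python
def implied_baseline(absolute_drop_g, relative_drop):
    """Back out the baseline median intake implied by a relative reduction
    and its corresponding absolute reduction in g/day."""
    return absolute_drop_g / relative_drop

# 30% relative reduction corresponding to an absolute drop of 1.8-2.2 g/day
low, high = implied_baseline(1.8, 0.30), implied_baseline(2.2, 0.30)
print(round(low, 1), round(high, 1))
```

This yields implied baseline medians of roughly 6.0-7.3 g/day for the modelled surveys, consistent with median intakes sitting below the 9-12 g/day range quoted for general intake.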
Reducing salt in food; setting product-specific criteria aiming at a salt intake of 5 g per day
Dötsch-Klerk, M; PMM Goossens, W; Meijer, G W; van het Hof, K H
2015-01-01
Background/Objectives: There is an increasing public health concern regarding high salt intake, which is generally between 9 and 12 g per day, and much higher than the 5 g recommended by World Health Organization. Several relevant sectors of the food industry are engaged in salt reduction, but it is a challenge to reduce salt in products without compromising on taste, shelf-life or expense for consumers. The objective was to develop globally applicable salt reduction criteria as guidance for product reformulation. Subjects/Methods: Two sets of product group-specific sodium criteria were developed to reduce salt levels in foods to help consumers reduce their intake towards an interim intake goal of 6 g/day, and—on the longer term—5 g/day. Data modelling using survey data from the United States, United Kingdom and Netherlands was performed to assess the potential impact on population salt intake of cross-industry food product reformulation towards these criteria. Results: Modelling with 6 and 5 g/day criteria resulted in estimated reductions in population salt intake of 25 and 30% for the three countries, respectively, the latter representing an absolute decrease in the median salt intake of 1.8–2.2 g/day. Conclusions: The sodium criteria described in this paper can serve as guidance for salt reduction in foods. However, to enable achieving an intake of 5 g/day, salt reduction should not be limited to product reformulation. A multi-stakeholder approach is needed to make consumers aware of the need to reduce their salt intake. Nevertheless, dietary impact modelling shows that product reformulation by food industry has the potential to contribute substantially to salt-intake reduction. PMID:25690867
Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...
2015-03-09
Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if a good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous electrode pseudo-three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.
Mantilla Herrera, Ana Maria; Crino, Michelle; Erskine, Holly E; Sacks, Gary; Ananthapavan, Jaithri; Mhurchu, Cliona Ni; Lee, Yong Yi
2018-05-14
The Health Star Rating (HSR) system is a voluntary front-of-pack labelling (FoPL) initiative endorsed by the Australian government in 2014. This study examines the impact of the HSR system on pre-packaged food reformulation measured by changes in energy density between products with and without HSR. The cost-effectiveness of the HSR system was modelled using a proportional multi-state life table Markov model for the 2010 Australian population. We evaluated scenarios in which the HSR system was implemented on a voluntary and mandatory basis (i.e., HSR uptake across 6.7% and 100% of applicable products, respectively). The main outcomes were health-adjusted life years (HALYs), net costs, and incremental cost-effectiveness ratios (ICERs). These were calculated with accompanying 95% uncertainty intervals (95% UI). The model predicted that HSR-attributable reformulation leads to small reductions in mean population energy intake (voluntary: 0.98 kJ/day [95% UI: -1.08 to 2.86]; mandatory: 11.81 kJ/day [95% UI: -11.24 to 36.13]). These are likely to result in reductions in mean body weight (voluntary: 0.01 kg [95% UI: -0.01 to 0.03]; mandatory: 0.11 kg [95% UI: -0.12 to 0.32]) and gains in HALYs (voluntary: 4207 HALYs [95% UI: 2438 to 6081]; mandatory: 49,949 HALYs [95% UI: 29,291 to 72,153]). The HSR system evaluated via changes in reformulation could be considered cost-effective relative to a willingness-to-pay threshold of A$50,000 per HALY (voluntary: A$1728 per HALY [95% UI: dominant to 10,445]; mandatory: A$4752 per HALY [95% UI: dominant to 16,236]).
Impact of abuse-deterrent OxyContin on prescription opioid utilization.
Hwang, Catherine S; Chang, Hsien-Yen; Alexander, G Caleb
2015-02-01
We quantified the degree to which the August 2010 reformulation of abuse-deterrent OxyContin affected its use, as well as the use of alternative extended-release and immediate-release opioids. We used the IMS Health National Prescription Audit, a nationally representative source of prescription activity in the USA, to conduct a segmented time-series analysis of the use of OxyContin and other prescription opioids. Our primary time period of interest was 12 months prior to and following August 2010. We performed model checks and sensitivity analyses, such as adjusting for marketing and promotion, using alternative lag periods, and adding extra observation points. OxyContin sales were similar before and after the August 2010 reformulation, with approximately 550 000 monthly prescriptions. After adjusting for declines in the generic extended-release oxycodone market, the formulation change was associated with a reduction of approximately 18 000 OxyContin prescription sales per month (p = 0.02). This decline corresponded to a change in the annual growth rate of OxyContin use, from 4.9% prior to the reformulation to -23.8% during the year after the reformulation. There were no statistically significant changes associated with the sales of alternative extended-release (p = 0.42) or immediate-release (p = 0.70) opioids. Multiple sensitivity analyses supported these findings and their substantive interpretation. The market debut of abuse-deterrent OxyContin was associated with declines in its use after accounting for the simultaneous contraction of the generic extended-release oxycodone market. Further scrutiny into the effect of abuse-deterrent formulations on medication use and health outcomes is vital given their popularity in opioid drug development. Copyright © 2014 John Wiley & Sons, Ltd.
Mining Longitudinal Web Queries: Trends and Patterns.
ERIC Educational Resources Information Center
Wang, Peiling; Berry, Michael W.; Yang, Yiheng
2003-01-01
Analyzed user queries submitted to an academic Web site during a four-year period, using a relational database, to examine users' query behavior, to identify problems they encounter, and to develop techniques for optimizing query analysis and mining. Linguistic analyses focus on query structures, lexicon, and word associations using statistical…
Optimizing a Query by Transformation and Expansion.
Glocker, Katrin; Knurr, Alexander; Dieter, Julia; Dominick, Friederike; Forche, Melanie; Koch, Christian; Pascoe Pérez, Analie; Roth, Benjamin; Ückert, Frank
2017-01-01
In the biomedical sector, not only is the amount of information produced and uploaded to the web enormous, but so is the number of sources where these data can be found. Clinicians and researchers spend huge amounts of time trying to access this information and to filter the most important answers to a given question. As the formulation of these queries is crucial, automated query expansion is an effective tool to optimize a query and receive the best possible results. In this paper we introduce the concept of a workflow for optimizing queries in the medical and biological sectors by using a series of tools for expansion and transformation of the query. After the user defines attributes, the query string is compared to previous queries in order to add semantically co-occurring terms to the query. Additionally, the query is enlarged by the inclusion of synonyms. The translation into database-specific ontologies ensures the optimal query formulation for the chosen database(s). As this process can be performed on various databases at once, the results are ranked and normalized in order to achieve a comparable list of answers to a question.
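The synonym-inclusion step of such a workflow can be sketched in a few lines. This is a minimal illustration only: the synonym table below is an invented stand-in for a real thesaurus, not part of the paper's system.

```python
# Minimal sketch of synonym-based query expansion. The SYNONYMS table is a
# made-up placeholder; a real system would draw on a medical thesaurus.
SYNONYMS = {
    "cancer": ["neoplasm", "tumor"],
    "heart attack": ["myocardial infarction"],
}

def expand_query(query):
    """Return the query with known synonyms OR-ed into a parenthesized group."""
    expanded = query
    for term, alts in SYNONYMS.items():
        if term in expanded.lower():
            group = " OR ".join([term] + alts)
            expanded = expanded.lower().replace(term, f"({group})")
    return expanded
```

For example, `expand_query("lung cancer")` broadens the query to match any of the synonymous terms while leaving unrecognized terms untouched.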
WATCHMAN: A Data Warehouse Intelligent Cache Manager
NASA Technical Reports Server (NTRS)
Scheuermann, Peter; Shim, Junho; Vingralek, Radek
1996-01-01
Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of WATCHMAN, an intelligent cache manager for sets retrieved by queries, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and cache admission. WATCHMAN aims to minimize query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
2013 AAHA/AAFP fluid therapy guidelines for dogs and cats.
Davis, Harold; Jensen, Tracey; Johnson, Anthony; Knowles, Pamela; Meyer, Robert; Rucinsky, Renee; Shafford, Heidi
2013-01-01
Fluid therapy is important for many medical conditions in veterinary patients. The assessment of patient history, chief complaint, physical exam findings, and indicated additional testing will determine the need for fluid therapy. Fluid selection is dictated by the patient's needs, including volume, rate, fluid composition required, and location the fluid is needed (e.g., interstitial versus intravascular). Therapy must be individualized, tailored to each patient, and constantly re-evaluated and reformulated according to changes in status. Needs may vary according to the existence of either acute or chronic conditions, patient pathology (e.g., acid-base, oncotic, electrolyte abnormalities), and comorbid conditions. All patients should be assessed for three types of fluid disturbances: changes in volume, changes in content, and/or changes in distribution. The goals of these guidelines are to assist the clinician in prioritizing goals, selecting appropriate fluids and rates of administration, and assessing patient response to therapy. These guidelines provide recommendations for fluid administration for anesthetized patients and patients with fluid disturbances.
PAQ: Persistent Adaptive Query Middleware for Dynamic Environments
NASA Astrophysics Data System (ADS)
Rajamani, Vasanth; Julien, Christine; Payton, Jamie; Roman, Gruia-Catalin
Pervasive computing applications often entail continuous monitoring tasks, issuing persistent queries that return continuously updated views of the operational environment. We present PAQ, a middleware that supports applications' needs by approximating a persistent query as a sequence of one-time queries. PAQ introduces an integration strategy abstraction that allows composition of one-time query responses into streams representing sophisticated spatio-temporal phenomena of interest. A distinguishing feature of our middleware is the realization that the suitability of a persistent query's result is a function of the application's tolerance for accuracy weighed against the associated overhead costs. In PAQ, programmers can specify an inquiry strategy that dictates how information is gathered. Since network dynamics impact the suitability of a particular inquiry strategy, PAQ associates an introspection strategy with a persistent query that evaluates the quality of the query's results. The result of introspection can trigger application-defined adaptation strategies that alter the nature of the query. PAQ's simple API makes developing adaptive querying systems straightforward. We present the key abstractions, describe their implementations, and demonstrate the middleware's usefulness through application examples and evaluation.
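The core idea, approximating a persistent query by repeatedly issuing a one-time query and folding the responses with an integration strategy, can be sketched as follows. The function and strategy names are illustrative assumptions, not PAQ's actual API.

```python
# Sketch of PAQ's core idea: a persistent query as a sequence of one-time
# queries, composed by a pluggable integration strategy.
def persistent_query(one_time_query, integrate, rounds):
    """Run `one_time_query` `rounds` times, folding results with `integrate`,
    yielding the integrated view after each round."""
    view = None
    for _ in range(rounds):
        snapshot = one_time_query()
        view = integrate(view, snapshot)
        yield view

# Example integration strategy: keep the union of everything ever observed.
def union_integrate(view, snapshot):
    return (view or set()) | set(snapshot)
```

An application could plug in a different `integrate` (e.g., keep only the latest snapshot, or a time-weighted merge) without changing the querying loop, which mirrors the abstraction the abstract describes.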
NASA Astrophysics Data System (ADS)
Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee
2010-04-01
The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
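The join-discovery step can be sketched as a breadth-first search over a schema graph whose edges carry join conditions. The three-table schema below is an invented miniature for illustration, not the real DBS schema.

```python
# Sketch of join-condition discovery over a schema graph: find the chain of
# foreign-key joins linking two tables. The schema here is a toy example.
from collections import deque

SCHEMA = {  # table -> {neighbor table: join condition}
    "dataset": {"block": "dataset.id = block.dataset_id"},
    "block":   {"dataset": "dataset.id = block.dataset_id",
                "file": "block.id = file.block_id"},
    "file":    {"block": "block.id = file.block_id"},
}

def join_path(start, goal):
    """Breadth-first search for the join conditions linking two tables."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        table, conds = queue.popleft()
        if table == goal:
            return conds
        for nxt, cond in SCHEMA[table].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, conds + [cond]))
    return None  # no join path exists
```

A query builder could then emit `WHERE` clauses from the returned conditions, so users never need to know the keys themselves.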
Spatial aggregation query in dynamic geosensor networks
NASA Astrophysics Data System (ADS)
Yi, Baolin; Feng, Dayang; Xiao, Shisong; Zhao, Erdun
2007-11-01
Wireless sensor networks have been widely used for civilian and military applications, such as environmental monitoring and vehicle tracking. In many of these applications, research has mainly aimed at building sensor-network-based systems to deliver the sensed data to applications. However, existing work has seldom exploited spatial aggregation queries that take into account the dynamic characteristics of sensor networks. In this paper, we investigate how to process spatial aggregation queries over dynamic geosensor networks in which both the sink node and the sensor nodes are mobile, and we propose several novel improvements to the enabling techniques. The mobility of sensors makes existing routing protocols, which rely on a fixed infrastructure or on neighborhood information, infeasible. We present an improved location-based stateless implicit geographic forwarding (IGF) protocol for routing a query toward the area specified by the query window, and a diameter-based window aggregation query (DWAQ) algorithm for query propagation and data aggregation within the query window; finally, since the sink node's location changes, we present two schemes for forwarding the result to the sink node. Simulation results show that the proposed algorithms can improve query latency and query accuracy.
Hoogendam, Arjen; Stalenhoef, Anton FH; Robbé, Pieter F de Vries; Overbeke, A John PM
2008-01-01
Background: The use of PubMed to answer daily medical care questions is limited because it is challenging to retrieve a small set of relevant articles and time is restricted. Knowing what aspects of queries are likely to retrieve relevant articles can increase the effectiveness of PubMed searches. The objectives of our study were to identify queries that are likely to retrieve relevant articles by relating PubMed search techniques and tools to the number of articles retrieved and the selection of articles for further reading. Methods: This was a prospective observational study of queries regarding patient-related problems sent to PubMed by residents and internists in internal medicine working in an Academic Medical Centre. We analyzed queries, search results, query tools (Mesh, Limits, wildcards, operators), selection of abstract and full-text for further reading, using a portal that mimics PubMed. Results: PubMed was used to solve 1121 patient-related problems, resulting in 3205 distinct queries. Abstracts were viewed in 999 (31%) of these queries, and in 126 (39%) of 321 queries using query tools. The average term count per query was 2.5. Abstracts were selected in more than 40% of queries using four or five terms, increasing to 63% if the use of four or five terms yielded 2–161 articles. Conclusion: Queries sent to PubMed by physicians at our hospital during daily medical care contain fewer than three terms. Queries using four to five terms, retrieving less than 161 article titles, are most likely to result in abstract viewing. PubMed search tools are used infrequently by our population and are less effective than the use of four or five terms. Methods to facilitate the formulation of precise queries, using more relevant terms, should be the focus of education and research. PMID:18816391
Sabareesh, Varatharajan; Singh, Gurpreet
2013-04-01
Mass Spectrometry based Lipid(ome) Analyzer and Molecular Platform (MS-LAMP) is a new software tool that aids in interpreting electrospray ionization (ESI) and/or matrix-assisted laser desorption/ionization (MALDI) mass spectrometric data of lipids. The graphical user interface (GUI) of this standalone programme is built using Perl::Tk. Two databases have been developed and constituted within MS-LAMP, on the basis of the Mycobacterium tuberculosis (M. tb) lipid database (www.mrl.colostate.edu) and that of the Lipid Metabolites and Pathways Strategy Consortium (LIPID MAPS; www.lipidmaps.org). Different types of queries entered through the GUI interrogate a chosen database. The queries can be molecular mass(es) or mass-to-charge (m/z) value(s) and molecular formula. A LIPID MAPS identifier can also be used for searching, though not for M. tb lipids. Multiple choices have been provided to select diverse ion types and lipids. For queries satisfying the input parameters, the output offers a glimpse of the various lipid categories and their population distribution. Additionally, molecular structures of lipids in the output can be viewed using ChemSketch (www.acdlabs.com), which has been linked to the programme. Furthermore, a version of MS-LAMP for use on the Linux operating system is separately available, wherein PyMOL can be used to view molecular structures that result as output from General Lipidome MS-LAMP. The utility of this software is demonstrated using ESI mass spectrometric data of lipid extracts of M. tb grown under two different pH (5.5 and 7.0) conditions. Copyright © 2013 John Wiley & Sons, Ltd.
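The kind of m/z lookup such a tool performs can be sketched as a tolerance match of observed values against a mass table for a chosen ion type. The two lipid entries and their masses below are illustrative placeholders, not the MS-LAMP databases.

```python
# Sketch of an m/z lookup: match observed values against a lipid mass table
# within a tolerance, assuming an [M+H]+ ion. The lipid table is a toy
# placeholder with illustrative masses.
PROTON = 1.00728  # mass added for an [M+H]+ ion

LIPIDS = {"PC(34:1)": 759.5778, "PE(34:1)": 717.5309}

def match_mz(mz, tol=0.01):
    """Return lipids whose [M+H]+ m/z is within `tol` of the observed value."""
    return [name for name, mass in LIPIDS.items()
            if abs((mass + PROTON) - mz) <= tol]
```

A full implementation would branch on the selected ion type (adding the appropriate adduct mass per ion) before comparing, which is what the ion-type choices in the GUI select.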
Discriminating between two reformulations of SU(3) Yang-Mills theory on a lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shibata, Akihiro; Kondo, Kei-Ichi; Shinohara, Toru
2016-01-22
In order to investigate quark confinement, we give a new reformulation of the SU(N) Yang-Mills theory on a lattice and present the results of numerical simulations of the SU(3) Yang-Mills theory on a lattice. The numerical simulations include the derivation of a linear static interquark potential, i.e., a non-vanishing string tension, in which the “Abelian” dominance and magnetic monopole dominance are established; confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair; the induced magnetic-monopole current; and the type of dual superconductivity, etc.
Roache, Sarah A.; Gostin, Lawrence O.
2017-01-01
Globally, soda taxes are gaining momentum as powerful interventions to discourage sugar consumption and thereby reduce the growing burden of obesity and non-communicable diseases (NCDs). Evidence from early adopters including Mexico and Berkeley, California, confirms that soda taxes can disincentivize consumption through price increases and raise revenue to support government programs. The United Kingdom’s new graduated levy on sweetened beverages is yielding yet another powerful impact: soda manufacturers are reformulating their beverages to significantly reduce the sugar content. Product reformulation – whether incentivized or mandatory – helps reduce overconsumption of sugars at the societal level, moving away from the long-standing notion of individual responsibility in favor of collective strategies to promote health. But as a matter of health equity, soda product reformulation should occur globally, especially in low- and middle-income countries (LMICs), which are increasingly targeted as emerging markets for soda and junk food and are disproportionately impacted by NCDs. As global momentum for sugar reduction increases, governments and public health advocates should harness the power of soda taxes to tackle the economic, social, and informational drivers of soda consumption, driving improvements in food environments and the public’s health. PMID:28949460
A PDA study management tool (SMT) utilizing wireless broadband and full DICOM viewing capability
NASA Astrophysics Data System (ADS)
Documet, Jorge; Liu, Brent; Zhou, Zheng; Huang, H. K.; Documet, Luis
2007-03-01
During the last 4 years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study Management Tool (SMT) application that allows radiologists, film librarians, and PACS-related (Picture Archiving and Communication System) users to dynamically and remotely perform Query/Retrieve operations in a PACS network. Using a regular PDA (Personal Digital Assistant), users can remotely query a PACS archive to distribute any study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven convenient for managing the study workflow [1, 2], has been extended to include a DICOM viewing capability on the PDA. With this new feature, users can take a quick view of DICOM images, giving them both mobility and convenience. In addition, we are extending this application to metropolitan-area wireless broadband networks. This feature requires smart phones that are capable of working as a PDA and have access to broadband wireless services. With the extended application to wireless broadband technology and the preview of DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.
Using background knowledge for picture organization and retrieval
NASA Astrophysics Data System (ADS)
Quintana, Yuri
1997-01-01
A picture knowledge base management system is described that is used to represent, organize and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain further descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described which organizes pictures not only on the observable features, but also on implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take into account background knowledge in the clustering. The algorithm creates clusters of pictures using a group similarity function that is based on the gestalt theory of picture perception. For each cluster created, a frame is generated which describes the semantic content of pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.
CellAtlasSearch: a scalable search engine for single cells.
Srivastava, Divyanshu; Iyer, Arvind; Kumar, Vibhor; Sengupta, Debarka
2018-05-21
Owing to the advent of high-throughput single-cell transcriptomics, the past few years have seen exponential growth in the production of gene expression data. Recently, efforts have been made by various research groups to homogenize and store single-cell expression data from a large number of studies. The true value of this ever-increasing data deluge can be unlocked by making it searchable. To this end, we propose CellAtlasSearch, a novel search architecture for high-dimensional expression data, which is massively parallel as well as light-weight, and thus highly scalable. In CellAtlasSearch, we use a Graphical Processing Unit (GPU) friendly version of Locality Sensitive Hashing (LSH) for unmatched speedup in data processing and query. Currently, CellAtlasSearch features over 300 000 reference expression profiles, including both bulk and single-cell data. It enables the user to query individual single-cell transcriptomes and find matching samples from the database along with the necessary meta-information. CellAtlasSearch aims to assist researchers and clinicians in characterizing unannotated single cells. It also facilitates noise-free, low-dimensional representation of single-cell expression profiles by projecting them onto a wide variety of reference samples. The web server is accessible at: http://www.cellatlassearch.com.
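The LSH step can be illustrated with random-hyperplane signatures: hash each expression vector by the signs of random projections, so that similar cells tend to collide in the same bucket. This pure-Python, CPU-only sketch conveys the idea only; it does not reproduce CellAtlasSearch's GPU implementation.

```python
# Random-hyperplane LSH sketch: one sign bit per hyperplane, so vectors with
# high cosine similarity are likely to share a signature.
import random

def make_hasher(dim, n_planes, seed=0):
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    def signature(vec):
        # Which side of each hyperplane the vector falls on.
        return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                     for plane in planes)
    return signature

sig = make_hasher(dim=4, n_planes=8)
```

Because the signature depends only on the direction of the vector, a cell's profile and any positive rescaling of it hash identically, a useful property when expression depth varies between cells.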
Tao, Shiqiang; Cui, Licong; Wu, Xi; Zhang, Guo-Qiang
2017-01-01
To help researchers better access clinical data, we developed a prototype query engine called DataSphere for exploring large-scale integrated clinical data repositories. DataSphere expedites data importing using a NoSQL data management system and dynamically renders its user interface for concept-based querying tasks. DataSphere provides an interactive query-building interface together with query translation and optimization strategies, which enable users to build and execute queries effectively and efficiently. We successfully loaded a dataset of one million patients for University of Kentucky (UK) Healthcare into DataSphere with more than 300 million clinical data records. We evaluated DataSphere by comparing it with an instance of i2b2 deployed at UK Healthcare, demonstrating that DataSphere provides enhanced user experience for both query building and execution. PMID:29854239
Improve Performance of Data Warehouse by Query Cache
NASA Astrophysics Data System (ADS)
Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod
2010-11-01
The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can make queries, analyses, and plans regardless of data changes in the operational database. As the number of queries is large, there is in many cases a reasonable probability that the same query is submitted by one or more users at different times. Each time a query is executed, all the data of the warehouse are analyzed to generate the result of that query. In this paper we study how using a query cache improves the performance of a data warehouse and examine the common problems faced by data warehouse administrators: minimizing response time and improving overall query efficiency, particularly when the data warehouse is updated at regular intervals.
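The premise, identical queries recurring across users, with the cache invalidated whenever the warehouse is refreshed, can be sketched with a small class. The normalization and method names are illustrative assumptions, not the paper's design.

```python
# Sketch of a warehouse query cache: reuse results for repeated queries and
# drop everything on a warehouse refresh, since results may then be stale.
class QueryCache:
    def __init__(self, execute):
        self.execute = execute   # function: sql -> result (the expensive path)
        self.cache = {}
        self.hits = 0

    @staticmethod
    def normalize(sql):
        return " ".join(sql.lower().split())  # same query text -> same key

    def run(self, sql):
        key = self.normalize(sql)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        result = self.execute(sql)
        self.cache[key] = result
        return result

    def on_warehouse_refresh(self):
        self.cache.clear()       # stale results must not survive an update
```

Because warehouses are refreshed only at intervals, the cache stays valid between refreshes, which is exactly the property that makes result caching pay off here.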
Safari, Leila; Patrick, Jon D
2018-06-01
This paper reports on a generic framework to provide clinicians with the ability to conduct complex analyses on elaborate research topics using cascaded queries to resolve internal time-event dependencies in the research questions, as an extension to the proposed Clinical Data Analytics Language (CliniDAL). A cascaded query model is proposed to resolve internal time-event dependencies in the queries, which can have up to five levels of criteria: starting with a query to define the subjects to be admitted into a study, followed by a query to define the time span of the experiment. Three more cascaded queries can be required to define control groups, control variables and output variables, which all together simulate a real scientific experiment. According to the complexity of the research questions, the cascaded query model has the flexibility of merging some lower-level queries for simple research questions or adding a nested query to each level to compose more complex queries. Three different scenarios (one of which contains two studies) are described and used for evaluation of the proposed solution. CliniDAL's complex-analysis solution enables answering complex queries with time-event dependencies in at most a few hours, where a manual approach would take many days. An evaluation of the results of the research studies based on a comparison between the CliniDAL and SQL solutions reveals the high usability and efficiency of CliniDAL's solution. Copyright © 2018 Elsevier Inc. All rights reserved.
Reconfigurability in MDO Problem Synthesis. Part 1
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2004-01-01
Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. This paper, Part 1 of two companion papers, discusses the fundamentals of REMS. Part 2 illustrates the methodology in more detail.
Reconfigurability in MDO Problem Synthesis. Part 2
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2004-01-01
Integrating autonomous disciplines into a problem amenable to solution presents a major challenge in realistic multidisciplinary design optimization (MDO). We propose a linguistic approach to MDO problem description, formulation, and solution we call reconfigurable multidisciplinary synthesis (REMS). With assistance from computer science techniques, REMS comprises an abstract language and a collection of processes that provide a means for dynamic reasoning about MDO problems in a range of contexts. The approach may be summarized as follows. Description of disciplinary data according to the rules of a grammar, followed by lexical analysis and compilation, yields basic computational components that can be assembled into various MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. The range of contexts for reasoning about MDO spans tasks from error checking and derivative computation to formulation and reformulation of optimization problem statements. In highly structured contexts, reconfigurability can mean a straightforward transformation among problem formulations with a single operation. We hope that REMS will enable experimentation with a variety of problem formulations in research environments, assist in the assembly of MDO test problems, and serve as a pre-processor in computational frameworks in production environments. Part 1 of these two companion papers discusses the fundamentals of REMS. This paper, Part 2, illustrates the methodology in more detail.
Evaluation of Sub Query Performance in SQL Server
NASA Astrophysics Data System (ADS)
Oktavia, Tanty; Sujarwo, Surya
2014-03-01
The paper explores several sub query methods used in a query and their impact on query performance. The study uses an experimental approach to evaluate the performance of each sub query method combined with an indexing strategy. The sub query methods consist of IN, EXISTS, a relational operator, and a relational operator combined with the TOP operator. The experiments show that using a relational operator combined with an indexing strategy in a sub query performs better than the same method without an indexing strategy, and better than the other methods. In summary, for applications that emphasize the performance of retrieving data from a database, it is better to use a relational operator combined with an indexing strategy. This study was done on Microsoft SQL Server 2012.
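The sub-query forms the study compares can be sketched in a few lines. This is a minimal illustration only: SQLite stands in for SQL Server 2012, and the table names, columns, and data are hypothetical, not from the paper.

```python
# Compare two of the sub-query forms from the study: IN vs. EXISTS,
# with an index on the sub-query's filter column (the "indexing strategy").
# Schema and rows are made up for illustration; SQLite replaces SQL Server.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "east"), (2, "west"), (3, "east")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, 99.0), (11, 2, 15.0), (12, 3, 42.0)])
# The index mirrors the paper's point that sub queries benefit from indexing.
cur.execute("CREATE INDEX idx_customers_region ON customers (region)")

q_in = ("SELECT id FROM orders WHERE customer_id IN "
        "(SELECT id FROM customers WHERE region = 'east')")
q_exists = ("SELECT o.id FROM orders o WHERE EXISTS "
            "(SELECT 1 FROM customers c WHERE c.id = o.customer_id AND c.region = 'east')")

rows_in = sorted(r[0] for r in cur.execute(q_in))
rows_exists = sorted(r[0] for r in cur.execute(q_exists))
print(rows_in, rows_exists)  # both forms return the same rows
```

Both forms are semantically equivalent here; the study's finding concerns which form the optimizer executes faster, which this sketch does not measure.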
Secure Skyline Queries on Cloud Platform.
Liu, Jinfei; Yang, Juncheng; Xiong, Li; Pei, Jian
2017-04-01
Outsourcing data and computation to cloud server provides a cost-effective way to support large scale data storage and query processing. However, due to security and privacy concerns, sensitive data (e.g., medical records) need to be protected from the cloud server and other unauthorized users. One approach is to outsource encrypted data to the cloud server and have the cloud server perform query processing on the encrypted data only. It remains a challenging task to support various queries over encrypted data in a secure and efficient way such that the cloud server does not gain any knowledge about the data, query, and query result. In this paper, we study the problem of secure skyline queries over encrypted data. The skyline query is particularly important for multi-criteria decision making but also presents significant challenges due to its complex computations. We propose a fully secure skyline query protocol on data encrypted using semantically-secure encryption. As a key subroutine, we present a new secure dominance protocol, which can be also used as a building block for other queries. Finally, we provide both serial and parallelized implementations and empirically study the protocols in terms of efficiency and scalability under different parameter settings, verifying the feasibility of our proposed solutions.
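The dominance relation at the core of the skyline query can be sketched in plaintext. The paper's actual contribution, evaluating this over semantically-secure encrypted data, is not attempted here; the records below are hypothetical (price, distance) pairs where lower is better in both dimensions.

```python
# Plaintext skyline: keep each point not dominated by any other.
# A point a dominates b when a is no worse in every dimension and differs.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (price, distance) records for multi-criteria decision making.
hotels = [(100, 5), (80, 10), (120, 2), (90, 8), (130, 6)]
print(sorted(skyline(hotels)))  # (130, 6) is dominated by (100, 5)
```

This naive version is quadratic in the number of points; the secure protocol must additionally hide which comparisons succeed, which is what the paper's dominance sub-protocol provides.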
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
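The two cost components the paper separates into objectives can be illustrated with a toy cost model. Relation names, sites, and cost figures below are invented for illustration; NSGA-II would keep LPC and CC as a pair of objectives rather than summing them as this sketch does.

```python
# Toy DQPG cost model: a plan assigns each relation to a processing site.
# LPC sums per-relation local costs; CC charges one link for every relation
# processed away from the (assumed) assembly site. All numbers are made up.
lpc = {"R1": 4, "R2": 6, "R3": 3}  # hypothetical local processing costs

def plan_cost(plan, cc_per_link=5, assembly_site="s1"):
    """plan maps each relation to the site that processes it."""
    local = sum(lpc[r] for r in plan)
    links = sum(1 for site in plan.values() if site != assembly_site)
    return local + cc_per_link * links  # single-objective total: LPC + CC

few_sites = {"R1": "s1", "R2": "s1", "R3": "s2"}
many_sites = {"R1": "s1", "R2": "s2", "R3": "s3"}
print(plan_cost(few_sites), plan_cost(many_sites))
```

The example reflects the paper's observation that plans involving fewer sites incur less communication cost, which is why the plan using two sites is cheaper than the one using three.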
Distributed Query Plan Generation Using Multiobjective Genetic Algorithm
Panicker, Shina; Vijay Kumar, T. V.
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513
Towards Hybrid Online On-Demand Querying of Realtime Data with Stateful Complex Event Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qunzhi; Simmhan, Yogesh; Prasanna, Viktor K.
Emerging Big Data applications in areas like e-commerce and energy industry require both online and on-demand queries to be performed over vast and fast data arriving as streams. These present novel challenges to Big Data management systems. Complex Event Processing (CEP) is recognized as a high performance online query scheme which in particular deals with the velocity aspect of the 3-V's of Big Data. However, traditional CEP systems do not consider data variety and lack the capability to embed ad hoc queries over the volume of data streams. In this paper, we propose H2O, a stateful complex event processing framework, to support hybrid online and on-demand queries over realtime data. We propose a semantically enriched event and query model to address data variety. A formal query algebra is developed to precisely capture the stateful and containment semantics of online and on-demand queries. We describe techniques to achieve the interactive query processing over realtime data featured by efficient online querying, dynamic stream data persistence and on-demand access. The system architecture is presented and the current implementation status reported.
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2013-11-01
The proliferation of GPS-enabled devices, and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data and space based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. Parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.
Query Health: standards-based, cross-platform population health surveillance
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussions This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371
Query Health: standards-based, cross-platform population health surveillance.
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. 
Using search engine query data to track pharmaceutical utilization: a study of statins.
Schuster, Nathaniel M; Rogers, Mary A M; McMahon, Laurence F
2010-08-01
To examine temporal and geographic associations between Google queries for health information and healthcare utilization benchmarks. Retrospective longitudinal study. Using Google Trends and Google Insights for Search data, the search terms Lipitor (atorvastatin calcium; Pfizer, Ann Arbor, MI) and simvastatin were evaluated for change over time and for association with Lipitor revenues. The relationship between query data and community-based resource use per Medicare beneficiary was assessed for 35 US metropolitan areas. Google queries for Lipitor significantly decreased from January 2004 through June 2009 and queries for simvastatin significantly increased (P <.001 for both), particularly after Lipitor came off patent (P <.001 for change in slope). The mean number of Google queries for Lipitor correlated (r = 0.98) with the percentage change in Lipitor global revenues from 2004 to 2008 (P <.001). Query preference for Lipitor over simvastatin was positively associated (r = 0.40) with a community's use of Medicare services. For every 1% increase in utilization of Medicare services in a community, there was a 0.2-unit increase in the ratio of Lipitor queries to simvastatin queries in that community (P = .02). Specific search engine queries for medical information correlate with pharmaceutical revenue and with overall healthcare utilization in a community. This suggests that search query data can track community-wide characteristics in healthcare utilization and have the potential for informing payers and policy makers regarding trends in utilization.
CSRQ: Communication-Efficient Secure Range Queries in Two-Tiered Sensor Networks
Dai, Hua; Ye, Qingqun; Yang, Geng; Xu, Jia; He, Ruiliang
2016-01-01
In recent years, we have seen many applications of secure query in two-tiered wireless sensor networks. Storage nodes are responsible for storing data from nearby sensor nodes and answering queries from Sink. It is critical to protect data security from a compromised storage node. In this paper, the Communication-efficient Secure Range Query (CSRQ)—a privacy and integrity preserving range query protocol—is proposed to prevent attackers from gaining information of both data collected by sensor nodes and queries issued by Sink. To preserve privacy and integrity, in addition to employing the encoding mechanisms, a novel data structure called encrypted constraint chain is proposed, which embeds the information of integrity verification. Sink can use this encrypted constraint chain to verify the query result. The performance evaluation shows that CSRQ has lower communication cost than the current range query protocols. PMID:26907293
SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.
Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan
2014-08-15
The Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
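The core translation step, turning a visual graph into SPARQL text, can be sketched as plain string building. The prefix, predicates, and the triple-list representation below are illustrative assumptions, not SPARQLGraph's actual internal model.

```python
# Sketch: a "visual graph" represented as (subject, predicate, object)
# triples, with '?'-prefixed variables, converted into a SPARQL query string.
# The ex: predicates are hypothetical placeholders.
def graph_to_sparql(select_vars, triples):
    patterns = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"SELECT {' '.join(select_vars)}\nWHERE {{\n  {patterns} .\n}}"

query = graph_to_sparql(
    ["?gene", "?pathway"],
    [("?gene", "ex:encodes", "?protein"),
     ("?protein", "ex:participatesIn", "?pathway")],
)
print(query)
```

A real builder would also emit PREFIX declarations and validate variable names, but the graph-to-text mapping is the essential idea.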
Improving accuracy for identifying related PubMed queries by an integrated approach.
Lu, Zhiyong; Wilbur, W John
2009-10-01
PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.
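The lexical half of such an approach can be sketched as term-overlap segmentation: consecutive queries that share enough terms are grouped as one information need. The Jaccard measure and the 0.2 threshold are assumptions for illustration, not the paper's tuned method.

```python
# Segment a query session: a drop in lexical overlap between consecutive
# queries starts a new information need. Threshold is an assumed value.
def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def segment(queries, threshold=0.2):
    segments, current = [], [queries[0]]
    for prev, nxt in zip(queries, queries[1:]):
        if jaccard(prev, nxt) >= threshold:
            current.append(nxt)      # lexically related: same need
        else:
            segments.append(current)  # unrelated: close the segment
            current = [nxt]
    segments.append(current)
    return segments

session = ["breast cancer", "breast cancer treatment",
           "p53 mutation", "p53 mutation lung"]
print(segment(session))
```

The paper's finding that most consecutive queries for one need are lexically related is exactly what makes a simple overlap signal like this useful, with contextual analysis covering the remainder.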
Improving accuracy for identifying related PubMed queries by an integrated approach
Lu, Zhiyong; Wilbur, W. John
2009-01-01
PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users’ search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1,539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1,396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments. PMID:19162232
Multi-Bit Quantum Private Query
NASA Astrophysics Data System (ADS)
Shi, Wei-Xu; Liu, Xing-Tong; Wang, Jian; Tang, Chao-Jing
2015-09-01
Most of the existing Quantum Private Queries (QPQ) protocols provide only single-bit query service, and thus have to be repeated several times when more bits are retrieved. Wei et al.'s scheme for block queries requires a high-dimension quantum key distribution system to sustain it, which is still restricted to the laboratory. Here, based on Markus Jakobi et al.'s single-bit QPQ protocol, we propose a multi-bit quantum private query protocol, in which the user can get access to several bits within one single query. We also extend the proposed protocol to block queries, using a binary matrix to guard database security. Analysis in this paper shows that our protocol has better communication complexity and implementability, and can achieve a considerable level of security.
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-09-18
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.
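The query semantics, before any indexing, amount to a simple conjunctive filter. The sketch below is the naive server-side evaluation the GQR-tree is designed to improve on; object identifiers, attributes, and positions are hypothetical.

```python
# Naive content-matched range query: return objects whose non-spatial
# attribute matches the query value AND whose position lies in the query
# rectangle. Data are illustrative.
def cm_range_query(objects, attr_value, rect):
    (x1, y1), (x2, y2) = rect
    return sorted(
        oid for oid, (attr, (x, y)) in objects.items()
        if attr == attr_value and x1 <= x <= x2 and y1 <= y <= y2
    )

objects = {
    "taxi1": ("vacant", (2, 3)),
    "taxi2": ("occupied", (2, 4)),   # attribute mismatch
    "taxi3": ("vacant", (9, 9)),     # outside the range
}
print(cm_range_query(objects, "vacant", ((0, 0), (5, 5))))
```

A monitoring system re-evaluates this continually as objects move; the GQR-tree's point is to push part of that work onto the objects themselves rather than rescanning at the server.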
Estimating Missing Features to Improve Multimedia Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagherjeiran, A; Love, N S; Kamath, C
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
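One simple way to complete a partial query is to borrow the missing modality from the documents closest to it in the available modality. The sketch below uses mean imputation over the text-nearest neighbors; the feature vectors and the imputation rule are assumptions for illustration, not the paper's estimator.

```python
# Complete a text-only query: estimate the missing image features as the
# mean of the image features of the k text-nearest documents, then the
# concatenated vector can be used as a full query. Toy 2-D features.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

# Hypothetical database: (text_features, image_features) per document.
db = {
    "doc1": ([1.0, 0.0], [0.2, 0.8]),
    "doc2": ([0.9, 0.1], [0.3, 0.7]),
    "doc3": ([0.0, 1.0], [0.9, 0.1]),
}

def complete_query(text_q, k=2):
    nearest = sorted(db, key=lambda d: cosine(text_q, db[d][0]), reverse=True)[:k]
    img = [sum(db[d][1][i] for d in nearest) / k for i in range(2)]
    return text_q + img  # completed (text + estimated image) query vector

full_q = complete_query([1.0, 0.0])
print(full_q)
```

The completed vector can then be scored against full documents with any similarity measure, which is what lets a text-only query approach the performance of a genuine text-plus-image query.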
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-thread processing method, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
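The divide-aggregate-merge pattern described above can be sketched directly. The cube size and the count aggregate are illustrative choices, and thread-level parallelism stands in for the server-side implementation.

```python
# Partition (x, y, t) points into spatiotemporal cubes, aggregate each cube
# in a worker thread, then merge the per-cube results. Toy data and cell size.
from concurrent.futures import ThreadPoolExecutor

def cube(p, size=10):
    x, y, t = p
    return (x // size, y // size, t // size)

def aggregate(points):
    cubes = {}
    for p in points:
        cubes.setdefault(cube(p), []).append(p)

    def per_cube(item):
        key, pts = item
        return key, len(pts)  # per-cube aggregate (here: a simple count)

    with ThreadPoolExecutor(max_workers=4) as ex:
        return dict(ex.map(per_cube, cubes.items()))  # merged result

points = [(1, 2, 3), (4, 5, 6), (11, 2, 3), (12, 3, 4)]
print(aggregate(points))
```

Because each cube is independent, the per-cube work parallelizes trivially, which is the property the regional-division method exploits.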
A new method for the automatic retrieval of medical cases based on the RadLex ontology.
Spanier, A B; Cohen, D; Joskowicz, L
2017-03-01
The goal of medical case-based image retrieval (M-CBIR) is to assist radiologists in the clinical decision-making process by finding medical cases in large archives that most resemble a given case. Cases are described by radiology reports comprised of radiological images and textual information on the anatomy and pathology findings. The textual information, when available in standardized terminology, e.g., the RadLex ontology, and used in conjunction with the radiological images, provides a substantial advantage for M-CBIR systems. We present a new method for incorporating textual radiological findings from medical case reports in M-CBIR. The input is a database of medical cases, a query case, and the number of desired relevant cases. The output is an ordered list of the most relevant cases in the database. The method is based on a new case formulation, the Augmented RadLex Graph and an Anatomy-Pathology List. It uses a new case relatedness metric [Formula: see text] that prioritizes more specific medical terms in the RadLex tree over less specific ones and that incorporates the length of the query case. An experimental study on 8 CT queries from the 2015 VISCERAL 3D Case Retrieval Challenge database consisting of 1497 volumetric CT scans shows that our method has accuracy rates of 82 and 70% on the first 10 and 30 most relevant cases, respectively, thereby outperforming six other methods. The increasing amount of medical imaging data acquired in clinical practice constitutes a vast database of untapped diagnostically relevant information. This paper presents a new hybrid approach to retrieving the most relevant medical cases based on textual and image information.
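One ingredient of a specificity-prioritizing relatedness metric can be sketched as depth weighting in the ontology tree: deeper RadLex terms are more specific, so shared deep terms should count for more. The tree fragment and the weighting rule below are illustrative assumptions, not the paper's metric.

```python
# Weight shared terms by their depth in a (toy) RadLex-like tree, so that
# more specific shared findings contribute more to case relatedness.
tree = {"radlex": None, "organ": "radlex", "lung": "organ", "nodule": "lung"}

def depth(term):
    d = 0
    while tree[term] is not None:  # walk up to the root
        term = tree[term]
        d += 1
    return d

def relatedness(case_a, case_b):
    shared = set(case_a) & set(case_b)
    return sum(depth(t) for t in shared)

print(relatedness({"lung", "nodule"}, {"nodule", "organ"}))
```

Here only "nodule" is shared, and at depth 3 it contributes more than a shallower shared term like "organ" would, which is the specificity-first behavior the metric aims for.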
Stennis visits Lake Cormorant school
2010-03-30
Alexis Harry, assistant director of Astro Camp at NASA's John C. Stennis Space Center, talks with students at Lake Cormorant (Miss.) Elementary School during a 'Living and Working in Space' presentation March 30. Stennis hosted the school presentation during a visit to the Oxford area. Harry, who also is a high school biology teacher in Slidell, La., spent time discussing space travel with students and answering questions they had about the experience, including queries about how astronauts eat, sleep and drink in space. The presentation was sponsored by the NASA Office of External Affairs and Education at Stennis. For more information about NASA education initiatives, visit: http://education.ssc.nasa.gov/.
NCBI GEO: archive for functional genomics data sets--update.
Barrett, Tanya; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra
2013-01-01
The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data.
A Framework for WWW Query Processing
NASA Technical Reports Server (NTRS)
Wu, Binghui Helen; Wharton, Stephen (Technical Monitor)
2000-01-01
Query processing is the most common operation in a DBMS. Sophisticated query processing has mainly targeted a single-enterprise environment providing centralized control over data and metadata. Query submission by anonymous users on the web differs in that load balancing and DBMS access control become the key issues. This paper provides a solution by introducing a framework for WWW query processing. The success of this framework lies in the utilization of query optimization techniques and the ontological approach. This methodology has proved to be cost effective at the NASA Goddard Space Flight Center Distributed Active Archive Center (GDAAC).
QBIC project: querying images by content, using color, texture, and shape
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel
1993-04-01
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical (`Give me other images that contain a tumor with a texture like this one'), photo-journalism (`Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user drawn image, the user interfaces, query refinement and navigation, high dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape and sketch query that we use, show example query results, and discuss future directions.
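Similarity-based color retrieval of the kind described can be sketched with histogram intersection, a classic measure for comparing color distributions. The 3-bin histograms and image names below are toy stand-ins, not QBIC's actual feature set.

```python
# Query-by-example over coarse color histograms: rank database images by
# histogram intersection with the query histogram (similarity, not exact
# match). Histograms are hypothetical (red, green, blue) proportions.
def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

db = {
    "sunset": [0.7, 0.1, 0.2],
    "forest": [0.1, 0.8, 0.1],
    "ocean":  [0.1, 0.2, 0.7],
}

def query_by_example(query_hist, k=2):
    return sorted(db, key=lambda img: intersection(query_hist, db[img]),
                  reverse=True)[:k]

print(query_by_example([0.6, 0.1, 0.3]))  # mostly-red query
```

A mostly-red query ranks the red-dominated image first, illustrating "give me images that look like this one" retrieval without any exact matching.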
Pentoney, Christopher; Harwell, Jeff; Leroy, Gondy
2014-01-01
Searching for medical information online is a common activity. While it has been shown that forming good queries is difficult, Google's query suggestion tool, a type of query expansion, aims to facilitate query formation. However, it is unknown how this expansion, which is based on what others searched for, affects the information gathering of the online community. To measure the impact of social-based query expansion, this study compared it with content-based expansion, i.e., what is really in the text. We used 138,906 medical queries from the AOL User Session Collection and expanded them using Google's Autocomplete method (social-based) and the content of the Google Web Corpus (content-based). We evaluated the specificity and ambiguity of the expansion terms for trigram queries. We also looked at the impact on the actual results using domain diversity and expansion edit distance. Results showed that the social-based method provided more precise expansion terms as well as terms that were less ambiguous. Expanded queries do not differ significantly in diversity when expanded using the social-based method (6.72 different domains returned in the first ten results, on average) vs. content-based method (6.73 different domains, on average).
a Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries
NASA Astrophysics Data System (ADS)
Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.
2017-10-01
Spatial region queries are increasingly used in web-based applications, so mechanisms that provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to respond in real time; spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, together with a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented on top of HDFS, Spark and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparing with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. This method is therefore very useful when building large geographic information systems.
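The underlying k-d tree idea can be sketched on a single node. The DKD-Tree itself is a distributed, polygon-aware variant built on HDFS/Spark/Redis; the points and rectangle below are invented for illustration.

```python
# Single-node sketch of a 2-D k-d tree with a rectangular range query
# (illustrative only; the paper's DKD-Tree is distributed and polygon-aware).

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2                      # alternate x / y splitting axis
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def range_query(node, rect):
    """Return all points inside rect = (xmin, ymin, xmax, ymax)."""
    if node is None:
        return []
    xmin, ymin, xmax, ymax = rect
    x, y = node["point"]
    result = []
    if xmin <= x <= xmax and ymin <= y <= ymax:
        result.append(node["point"])
    lo, hi = (xmin, xmax) if node["axis"] == 0 else (ymin, ymax)
    v = node["point"][node["axis"]]
    if lo <= v:                           # left subtree may intersect rect
        result += range_query(node["left"], rect)
    if v <= hi:                           # right subtree may intersect rect
        result += range_query(node["right"], rect)
    return result

tree = build_kdtree([(1, 1), (2, 5), (4, 3), (6, 2), (7, 7)])
print(sorted(range_query(tree, (0, 0, 5, 5))))
```

Pruning whole subtrees whose splitting plane lies outside the query rectangle is what makes the index cheaper than a full scan; the paper parallelizes this traversal across a cluster.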
Secure Skyline Queries on Cloud Platform
Liu, Jinfei; Yang, Juncheng; Xiong, Li; Pei, Jian
2017-01-01
Outsourcing data and computation to a cloud server provides a cost-effective way to support large-scale data storage and query processing. However, due to security and privacy concerns, sensitive data (e.g., medical records) need to be protected from the cloud server and other unauthorized users. One approach is to outsource encrypted data to the cloud server and have the cloud server perform query processing on the encrypted data only. It remains a challenging task to support various queries over encrypted data in a secure and efficient way such that the cloud server gains no knowledge about the data, query, or query result. In this paper, we study the problem of secure skyline queries over encrypted data. The skyline query is particularly important for multi-criteria decision making but also presents significant challenges due to its complex computations. We propose a fully secure skyline query protocol on data encrypted using semantically-secure encryption. As a key subroutine, we present a new secure dominance protocol, which can also be used as a building block for other queries. Finally, we provide both serial and parallelized implementations and empirically study the protocols in terms of efficiency and scalability under different parameter settings, verifying the feasibility of our proposed solutions. PMID:28883710
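For readers unfamiliar with skyline queries, the plaintext semantics can be shown in a few lines; the paper's contribution is computing exactly this result securely over encrypted data without revealing it to the server. The hotel data below is a made-up example.

```python
# Plaintext skyline computation (minimize all attributes). Illustrative
# only; the paper performs this over semantically-secure encrypted data.

def dominates(p, q):
    """p dominates q if p is <= q everywhere and strictly < somewhere."""
    return all(a <= b for a, b in zip(p, q)) and \
           any(a < b for a, b in zip(p, q))

def skyline(points):
    """Keep points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# e.g. (price, distance): keep hotels no other hotel beats on both criteria
hotels = [(100, 5), (80, 7), (120, 3), (90, 6), (150, 8)]
print(skyline(hotels))
```

The pairwise `dominates` test is exactly the subroutine the paper replaces with a secure dominance protocol, so that neither comparison inputs nor outcomes leak.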
NASA Astrophysics Data System (ADS)
Indrayana, I. N. E.; P, N. M. Wirasyanti D.; Sudiartha, I. KG
2018-01-01
Mobile applications allow many users to access data without being limited by place or time. Over time the data population of such an application will increase, and data access time becomes a problem once tables reach tens of thousands to millions of records. The objective of this research is to maintain data-execution performance for large record counts. One effort to maintain data access time performance is to apply query optimization; the optimization used in this research is the heuristic query optimization method. The application built is a mobile-based financial application using a MySQL database with stored procedures. It is used by more than one business entity in a single database, enabling rapid data growth. Within the stored procedures, queries are optimized using the heuristic method; optimization is performed on SELECT queries that involve more than one table with multiple clauses. Evaluation is done by calculating the average access time of optimized and unoptimized queries, and access time is also measured as the data population in the database grows. The evaluation results show that execution with heuristic query optimization is faster than execution without it.
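One classic heuristic rewrite, pushing a selection below a join so the join processes fewer rows, can be sketched outside SQL. The table shapes and predicate below are invented; the paper applies such heuristics inside MySQL stored procedures.

```python
# Sketch of the selection push-down heuristic: filter first, join second.
# Same answer, far smaller intermediate result (illustrative data only).

def join(left, right, key_l, key_r):
    """Simple hash join of two lists of dict rows."""
    index = {}
    for row in right:
        index.setdefault(row[key_r], []).append(row)
    return [{**l, **r} for l in left for r in index.get(l[key_l], [])]

accounts = [{"id": i, "entity": i % 10} for i in range(1000)]
entries = [{"acct": i % 1000, "amount": i} for i in range(5000)]

# Naive plan: join everything, then filter on entity.
naive = [r for r in join(accounts, entries, "id", "acct") if r["entity"] == 3]

# Heuristic plan: filter accounts first, then join the much smaller relation.
selected = [a for a in accounts if a["entity"] == 3]
optimized = join(selected, entries, "id", "acct")

print(len(naive), len(optimized))
```

The naive plan materializes all 5000 joined rows before filtering; the heuristic plan joins only the 100 matching accounts, which is the kind of saving the paper measures as the data population grows.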
Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce
Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng
2016-01-01
The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high-performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS, a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. In parallel with our system demonstration, we explain the system architecture and detail how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we showcase how the system can be used to support two representative real-world use cases: large-scale pathology analytical imaging, and geo-spatial data warehousing. PMID:27617325
A high performance, ad-hoc, fuzzy query processing system for relational databases
NASA Technical Reports Server (NTRS)
Mansfield, William H., Jr.; Fleischman, Robert M.
1992-01-01
Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
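The idea of an ad-hoc fuzzy predicate evaluated by exhaustive scan can be sketched as below. The membership function, field names, and threshold are invented for illustration; they are not the Datacycle implementation.

```python
# Sketch of an ad-hoc fuzzy predicate evaluated by exhaustive scan,
# loosely in the spirit of the Datacycle approach (illustrative only).

def triangular(x, low, peak, high):
    """A user-supplied triangular membership function for 'about peak'."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

def fuzzy_select(records, field, membership, threshold=0.5):
    """Exhaustively score every record; keep those above the threshold."""
    scored = [(membership(r[field]), r) for r in records]
    hits = [(m, r) for m, r in scored if m >= threshold]
    return sorted(hits, key=lambda t: t[0], reverse=True)

salaries = [{"name": "a", "pay": 40}, {"name": "b", "pay": 52},
            {"name": "c", "pay": 75}, {"name": "d", "pay": 58}]
# "pay about 55": full membership at 55, fading to zero at 40 and 70
about_55 = lambda x: triangular(x, 40, 55, 70)
print(fuzzy_select(salaries, "pay", about_55))
```

Because every record is scored anyway, the exhaustive scan needs no index structure and its response time is independent of the membership function chosen, which is the property the abstract highlights.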
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
Systems and methods for an extensible business application framework
NASA Technical Reports Server (NTRS)
Bell, David G. (Inventor); Crawford, Michael (Inventor)
2012-01-01
Methods and systems for editing data from a query result include requesting a query result using a unique collection identifier for a collection of individual files and a unique identifier for a configuration file that specifies a data structure for the query result. A query result is generated that contains a plurality of fields as specified by the configuration file, by combining each of the individual files associated with a unique identifier for a collection of individual files. The query result data is displayed with a plurality of labels as specified in the configuration file. Edits can be performed by querying a collection of individual files using the configuration file, editing a portion of the query result, and transmitting only the edited information for storage back into a data repository.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latour, P.R.
Revolutionary changes in quality specifications (number, complexity, uncertainty, economic sensitivity) for reformulated gasolines (RFG) and low-sulfur diesels (LSD) are being addressed by powerful, new, computer-integrated manufacturing technology for Refinery Information Systems and Advanced Process Control (RIS/APC). This paper shows how the five active RIS/APC functions: performance measurement, optimization, scheduling, control and integration are used to manufacture new, clean fuels competitively. With current industry spending for this field averaging 2 to 3 cents/bbl crude, many refineries can capture 50 to 100 cents/bbl if the technology is properly employed and sustained throughout refining operations, organizations, and businesses.
Discrete mathematical physics and particle modeling
NASA Astrophysics Data System (ADS)
Greenspan, D.
The theory and application of the arithmetic approach to the foundations of both Newtonian and special relativistic mechanics are explored. Using only arithmetic, a reformulation of the Newtonian approach is given for: gravity; particle modeling of solids, liquids, and gases; conservative modeling of laminar and turbulent fluid flow, heat conduction, and elastic vibration; and nonconservative modeling of heat convection, shock-wave generation, the liquid drop problem, porous flow, the interface motion of a melting solid, soap films, string vibrations, and solitons. An arithmetic reformulation of special relativistic mechanics is given for theory in one space dimension, relativistic harmonic oscillation, and theory in three space dimensions. A speculative quantum mechanical model of vibrations in the water molecule is also discussed.
NASA Astrophysics Data System (ADS)
Ochsenfeld, Christian; Head-Gordon, Martin
1997-05-01
To exploit the exponential decay found in numerical studies for the density matrix and its derivative with respect to nuclear displacements, we reformulate the coupled perturbed self-consistent field (CPSCF) equations and a quadratically convergent SCF (QCSCF) method for Hartree-Fock and density functional theory within a local density matrix-based scheme. Our D-CPSCF (density matrix-based CPSCF) and D-QCSCF schemes open the way for exploiting sparsity and achieving asymptotically linear scaling of computational complexity with molecular size (M); in the case of D-CPSCF this holds for all O(M) derivative densities. Furthermore, even for small molecules these methods are strongly competitive with conventional algorithms.
Analysis of Information Needs of Users of MEDLINEplus, 2002 – 2003
Scott-Wright, Alicia; Crowell, Jon; Zeng, Qing; Bates, David W.; Greenes, Robert
2006-01-01
We analyzed query logs from use of MEDLINEplus to answer two questions: Are consumers' health information needs stable over time? And to what extent do users' queries change over time? To determine log stability, we assessed an Overlap Rate (OR), defined as the number of unique queries common to two adjacent months divided by the total number of unique queries in those months. All exactly matching queries were considered as one unique query. We measured ORs for the top 10 and top 100 unique queries of a month and compared these to ORs for the following month. Over ten months, users submitted 12,234,737 queries; only 2,179,571 (17.8%) were unique, and these had a mean word count of 2.73 (S.D., 0.24); 121 of the 137 (88.3%) unique queries whose exactly matching search term(s) were used at least 5,000 times consisted of only one word. We could predict with 95% confidence that the monthly OR for the top 100 unique queries would lie between 67% and 87% when compared with the top 100 from the previous month. The mean month-to-month OR for the top 10 queries was 62% (S.D., 20%), indicating significant variability; the lowest OR of 33%, between the top 10 in March and April, was likely due to “new” interest in information about SARS pneumonia in April 2003. Consumers' health information needs are relatively stable, and the 100 most common unique queries are about 77% the same from month to month. Website sponsors should provide a broad range of information about a relatively stable number of topics. Analyses of log similarity may identify media-induced, cyclical, or seasonal changes in areas of consumer interest. PMID:17238431
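The Overlap Rate as defined in the abstract (common unique queries divided by total unique queries across both months, i.e., a Jaccard index over the two query sets) is easy to compute; the query lists below are invented examples, not MEDLINEplus data.

```python
# Overlap Rate per the abstract's definition: unique queries common to
# two adjacent months / total unique queries across both months.
# (Illustrative query lists; not the study's data.)

def overlap_rate(month_a, month_b):
    a, b = set(month_a), set(month_b)  # exact-match queries collapse to one
    return len(a & b) / len(a | b)

march = ["flu", "diabetes", "sars", "asthma", "anthrax"]
april = ["flu", "diabetes", "sars", "smallpox", "anthrax"]
print(round(overlap_rate(march, april), 2))
```

Tracking this single number month over month is what lets the authors distinguish a stable baseline of consumer interest from event-driven spikes such as the SARS queries of April 2003.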
Big Data and Dysmenorrhea: What Questions Do Women and Men Ask About Menstrual Pain?
Chen, Chen X; Groves, Doyle; Miller, Wendy R; Carpenter, Janet S
2018-04-30
Menstrual pain is highly prevalent among women of reproductive age. As the general public increasingly obtains health information online, Big Data from online platforms provide novel sources for understanding the public's perspectives and information needs about menstrual pain. The study's purpose was to describe salient queries about dysmenorrhea using Big Data from a question and answer platform. We performed text mining of 1.9 billion queries from ChaCha, a United States-based question and answer platform. Dysmenorrhea-related queries were identified by keyword searching. Each relevant query was split into token words (i.e., meaningful words or phrases) and stop words (i.e., non-meaningful functional words). Word Adjacency Graph (WAG) modeling was used to detect clusters of queries and visualize the range of dysmenorrhea-related topics. We constructed two WAG models, one from queries by women of reproductive age and one from queries by men. Salient themes were identified by inspecting clusters of the WAG models. We identified two subsets of queries: Subset 1 contained 507,327 queries from women aged 13-50 years; Subset 2 contained 113,888 queries from men aged 13 or above. WAG modeling revealed topic clusters for each subset. Between the female and male subsets, topic clusters overlapped on dysmenorrhea symptoms and management. Among female queries, there were distinctive topics on approaching menstrual pain at school and on menstrual pain-related conditions, while among male queries there was a distinctive cluster on menstrual pain from men's perspectives. Big Data mining of the ChaCha question and answer service revealed a series of information needs among women and men concerning menstrual pain. Findings may be useful in structuring the content and informing the delivery platform for educational interventions.
Multiple Query Evaluation Based on an Enhanced Genetic Algorithm.
ERIC Educational Resources Information Center
Tamine, Lynda; Chrisment, Claude; Boughanem, Mohand
2003-01-01
Explains the use of genetic algorithms to combine results from multiple query evaluations to improve relevance in information retrieval. Discusses niching techniques, relevance feedback techniques, and evolution heuristics, and compares retrieval results obtained by both genetic multiple query evaluation and classical single query evaluation…
Relational Algebra and SQL: Better Together
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Hadfield, Steven; Wolthuis, Stuart
2013-01-01
In this paper, we describe how database instructors can teach Relational Algebra and Structured Query Language together through programming. Students write query programs consisting of sequences of Relational Algebra operations vs. Structured Query Language SELECT statements. The query programs can then be run interactively, allowing students to…
A Firefly Algorithm-based Approach for Pseudo-Relevance Feedback: Application to Medical Database.
Khennak, Ilyes; Drias, Habiba
2016-11-01
The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries has caused search systems to fail to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is query expansion, whereby the user's original query is augmented with new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded query candidates. Moreover, this new approach allows the length of the expanded query to be determined empirically. Experimental results on MEDLINE, the online medical information database, show that our proposed approach is more effective and efficient compared to the state of the art.
RiPPAS: A Ring-Based Privacy-Preserving Aggregation Scheme in Wireless Sensor Networks
Zhang, Kejia; Han, Qilong; Cai, Zhipeng; Yin, Guisheng
2017-01-01
Recently, data privacy in wireless sensor networks (WSNs) has received increased attention. The characteristics of WSNs determine that users’ queries are mainly aggregation queries. In this paper, the problem of processing aggregation queries in WSNs with data privacy preservation is investigated, and a Ring-based Privacy-Preserving Aggregation Scheme (RiPPAS) is proposed. RiPPAS adopts a ring structure to perform aggregation. It uses a pseudonym mechanism for anonymous communication and uses a homomorphic encryption technique to add noise to data that could easily be disclosed. RiPPAS can handle both sum() queries and min()/max() queries, while the existing privacy-preserving aggregation methods can only deal with sum() queries. For processing sum() queries, compared with the existing methods, RiPPAS has advantages in privacy preservation and communication efficiency, as shown by theoretical analysis and simulation results. For processing min()/max() queries, RiPPAS provides effective privacy preservation and has low communication overhead. PMID:28178197
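A simpler, non-cryptographic cousin of ring-based private sum() aggregation is pairwise-cancelling masks around a ring: each node perturbs its reading with a random value shared with its ring neighbor, so individual reports are hidden but the masks telescope away in the total. This sketch is an illustrative stand-in, not RiPPAS itself (which uses pseudonyms and homomorphic encryption).

```python
# Illustrative ring aggregation with cancelling masks (not RiPPAS).
# Node i reports values[i] + r[i] - r[i-1]; the r's cancel in the sum.
import random

def ring_sum(values, seed=0):
    rng = random.Random(seed)
    n = len(values)
    # r[i] is a secret shared between node i and node (i + 1) % n
    r = [rng.randint(-1000, 1000) for _ in range(n)]
    reports = [values[i] + r[i] - r[(i - 1) % n] for i in range(n)]
    # each report alone masks the node's reading, but the masks
    # telescope to zero around the ring, so the aggregate is exact
    return sum(reports)

readings = [23, 17, 42, 8]
print(ring_sum(readings), sum(readings))
```

min()/max() queries cannot be protected this way, since order must be preserved; that is precisely the harder case RiPPAS addresses.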
Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes
NASA Astrophysics Data System (ADS)
Ianni, Giovambattista; Krennwallner, Thomas; Martello, Alessandra; Polleres, Axel
RDF Schema (RDFS) as a lightweight ontology language is gaining popularity and, consequently, tools for scalable RDFS inference and querying are needed. SPARQL has recently become a W3C standard for querying RDF data, but it mostly provides means for querying simple RDF graphs only, whereas querying with respect to RDFS or other entailment regimes is left outside the current specification. In this paper, we show that SPARQL faces certain unwanted ramifications when querying ontologies in conjunction with RDF datasets that comprise multiple named graphs, and we provide an extension for SPARQL that remedies these effects. Moreover, since RDFS inference has a close relationship with logic rules, we generalize our approach to select a custom ruleset for specifying inferences to be taken into account in a SPARQL query. We show that our extensions are technically feasible by providing benchmark results for RDFS querying in our prototype system GiaBATA, which uses Datalog coupled with a persistent relational database as a back-end for implementing SPARQL with dynamic rule-based inference. By employing different optimization techniques like magic set rewriting, our system remains competitive with state-of-the-art RDFS querying systems.
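What "querying under an entailment regime" means can be shown with two standard RDFS rules (rdfs11, transitive subClassOf; rdfs9, type propagation) applied as a naive fixpoint before matching a triple pattern. GiaBATA evaluates such rules dynamically via Datalog; this is only the idea, with invented data.

```python
# Naive rule-based RDFS entailment (rdfs9 + rdfs11 only) followed by a
# single wildcard triple-pattern match. Illustrative sketch, not GiaBATA.

def entail(triples):
    closed = set(triples)
    changed = True
    while changed:                        # fixpoint iteration
        changed = False
        new = set()
        for s, p, o in closed:
            for s2, p2, o2 in closed:
                # rdfs11: subClassOf is transitive
                if p == p2 == "subClassOf" and o == s2:
                    new.add((s, "subClassOf", o2))
                # rdfs9: instances inherit superclasses
                if p == "type" and p2 == "subClassOf" and o == s2:
                    new.add((s, "type", o2))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

def match(closed, pattern):
    """Query with None as wildcard, like one SPARQL triple pattern."""
    return sorted(t for t in closed
                  if all(q is None or q == v for q, v in zip(pattern, t)))

data = [("rex", "type", "Dog"),
        ("Dog", "subClassOf", "Mammal"),
        ("Mammal", "subClassOf", "Animal")]
closed = entail(data)
print(match(closed, ("rex", "type", None)))
```

Under simple entailment the pattern would return only `Dog`; under the RDFS regime it also returns the inferred `Mammal` and `Animal` types, which is the difference the paper's SPARQL extension makes selectable per query.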
Mining the SDSS SkyServer SQL queries log
NASA Astrophysics Data System (ADS)
Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani
2016-05-01
SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomic catalog, provides a set of tools that allows data access for astronomers and scientific education. One of SkyServer data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents some template queries that can be used as basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of this data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc. and to shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements on the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned, namely, that the general text mining approach for feature extraction and clustering does not seem to be adequate for this type of data, and, most importantly, we find that this type of analysis can result in very different queries being clustered together.
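One simple, commonly used preprocessing step for such SQL-log mining is literal normalization, so queries differing only in constants collapse into one template before any clustering. The sketch and the sample log below are illustrative; choosing good features is exactly the open problem the paper discusses.

```python
# Collapse SQL queries that differ only in constants into templates.
# Illustrative sketch with invented log entries (not SkyServer data).
import re
from collections import Counter

def template(sql):
    sql = sql.strip().lower()
    sql = re.sub(r"'[^']*'", "'?'", sql)         # string literals -> '?'
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)   # numeric literals -> ?
    return re.sub(r"\s+", " ", sql)              # normalize whitespace

log = [
    "SELECT ra, dec FROM PhotoObj WHERE r < 17.5",
    "select ra,  dec from photoobj where r < 19",
    "SELECT name FROM SpecObj WHERE z > 0.1",
]
counts = Counter(template(q) for q in log)
print(counts.most_common(1))
```

Counting templates rather than raw strings surfaces usage patterns (the same form query issued with different cuts) that generic text-mining features tend to blur, echoing the paper's lesson that off-the-shelf text clustering fits this data poorly.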
Applying Query Structuring in Cross-language Retrieval.
ERIC Educational Resources Information Center
Pirkola, Ari; Puolamaki, Deniz; Jarvelin, Kalervo
2003-01-01
Explores ways to apply query structuring in cross-language information retrieval. Tested were: English queries translated into Finnish using an electronic dictionary, and run in a Finnish newspaper databases; effects of compound-based structuring using a proximity operator for translation equivalents of query language compound components; and a…
Querying and Ranking XML Documents.
ERIC Educational Resources Information Center
Schlieder, Torsten; Meuss, Holger
2002-01-01
Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…
Advanced Query Formulation in Deductive Databases.
ERIC Educational Resources Information Center
Niemi, Timo; Jarvelin, Kalervo
1992-01-01
Discusses deductive databases and database management systems (DBMS) and introduces a framework for advanced query formulation for end users. Recursive processing is described, a sample extensional database is presented, query types are explained, and criteria for advanced query formulation from the end user's viewpoint are examined. (31…
A Semantic Graph Query Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, I L
2006-10-16
Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
Augmenting Trust Establishment in Dynamic Systems with Social Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagesse, Brent J; Kumar, Mohan; Venkatesh, Svetha
2010-01-01
Social networking has recently flourished in popularity through the use of social websites. Pervasive computing resources have allowed people to stay well-connected to each other through access to social networking resources. We take the position that information produced by relationships within social networks can assist in the establishment of trust for other pervasive computing applications. Furthermore, we describe how such a system can augment a sensor infrastructure used for event observation with information from mobile sensors (i.e., mobile phones with cameras) controlled by potentially untrusted third parties. Pervasive computing systems are invisible systems, oriented around the user. As a result, many future pervasive systems are likely to include a social aspect. The social communities developed in these systems can augment existing trust mechanisms with information about pre-trusted entities, or entities to initially consider when beginning to establish trust. An example of such a system is the Collaborative Virtual Observation (CoVO) system, which fuses sensor information from disparate sources in soft real time to recreate a scene, providing observation of an event that has recently transpired. To accomplish this, CoVO must efficiently access services while protecting the data from corruption by unknown remote nodes. CoVO combines dynamic service composition with virtual observation to utilize existing infrastructure alongside third-party services available in the environment. Since these services are not under the control of the system, they may be unreliable or malicious. When an event of interest occurs, the given infrastructure (bus cameras, etc.) may not sufficiently cover the necessary information (be it in space, time, or sensor type). To enhance observation of the event, the infrastructure is augmented with information from sensors in the environment that the infrastructure does not control.
These sensors may be unreliable, uncooperative, or even malicious. Additionally, to execute queries in soft real time, processing must be distributed to available systems in the environment. We propose to use information from social networks to satisfy these requirements. In this paper, we present our position that knowledge gained from social activities can be used to augment trust mechanisms in pervasive computing. The system uses the social behavior of nodes to predict a subset that it wants to query for information. In this context, social behavior includes transit patterns and schedules (which can be used to determine whether a queried node is likely to be reliable) and known relationships, such as a phone's address book, which can be used to determine networks of nodes that may also be able to assist in retrieving information. Neither implicit nor explicit relationships necessarily imply that the user trusts an entity; rather, they provide a starting place for establishing trust. The proposed framework utilizes social network information to assist in trust establishment when third-party sensors are used for sensing events.
Harris, Daniel R.; Henderson, Darren W.; Kavuluru, Ramakanth; Stromberg, Arnold J.; Johnson, Todd R.
2015-01-01
We present a custom, Boolean query generator utilizing common-table expressions (CTEs) that is capable of scaling with big datasets. The generator maps user-defined Boolean queries, such as those interactively created in clinical-research and general-purpose healthcare tools, into SQL. We demonstrate the effectiveness of this generator by integrating our work into the Informatics for Integrating Biology and the Bedside (i2b2) query tool and show that it is capable of scaling. Our custom generator replaces and outperforms the default query generator found within the Clinical Research Chart (CRC) cell of i2b2. In our experiments, sixteen different types of i2b2 queries were identified by varying four constraints: date, frequency, exclusion criteria, and whether selected concepts occurred in the same encounter. We generated non-trivial, random Boolean queries based on these 16 types; the corresponding SQL queries produced by both generators were compared by execution times. The CTE-based solution significantly outperformed the default query generator and provided a much more consistent response time across all query types (M=2.03, SD=6.64 vs. M=75.82, SD=238.88 seconds). Without costly hardware upgrades, we provide a scalable solution based on CTEs with very promising empirical results centered on performance gains. The evaluation methodology used for this provides a means of profiling clinical data warehouse performance. PMID:25192572
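The core mapping, every Boolean node becomes one CTE, with AND/OR realized as INTERSECT/UNION over patient-id sets, can be sketched compactly. The table, column, and concept names below are invented for the example and are not i2b2's schema; a real generator must also parameterize values rather than interpolate them.

```python
# Sketch of a CTE-based Boolean-to-SQL generator, loosely in the spirit
# of the i2b2 work (invented schema; values should be parameterized in
# real code to avoid SQL injection).
import sqlite3

def generate(expr, ctes, counter):
    """Emit a CTE for expr and return its name. Leaves are concept codes;
    inner nodes ('and', ...) / ('or', ...) become INTERSECT / UNION."""
    counter[0] += 1
    name = f"q{counter[0]}"
    if isinstance(expr, str):                     # leaf: one concept code
        body = f"SELECT patient FROM facts WHERE concept = '{expr}'"
    else:
        op, *kids = expr
        setop = " INTERSECT " if op == "and" else " UNION "
        body = setop.join(f"SELECT patient FROM {generate(k, ctes, counter)}"
                          for k in kids)
    ctes.append(f"{name} AS ({body})")            # children precede parent
    return name

def to_sql(expr):
    ctes = []
    top = generate(expr, ctes, [0])
    return "WITH " + ", ".join(ctes) + f" SELECT patient FROM {top}"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (patient INTEGER, concept TEXT)")
con.executemany("INSERT INTO facts VALUES (?, ?)",
                [(1, "dx:asthma"), (1, "rx:albuterol"),
                 (2, "dx:asthma"), (3, "rx:albuterol")])
# patients with asthma AND (albuterol OR steroid)
sql = to_sql(("and", "dx:asthma", ("or", "rx:albuterol", "rx:steroid")))
print([r[0] for r in con.execute(sql)])
```

Flattening each subexpression into its own named CTE keeps the generated SQL shallow and lets the database plan each set operation independently, which is consistent with the stable response times the paper reports.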
Dobrovolskaia, Marina A; McNeil, Scott E
2015-07-01
Clinical translation of nucleic acid-based therapeutics (NATs) is hampered by assorted challenges in immunotoxicity, hematotoxicity, pharmacokinetics, toxicology and formulation. Nanotechnology-based platforms are being considered to help address some of these challenges due to the nanoparticles' ability to change drug biodistribution, stability, circulation half-life, route of administration and dosage. Addressing toxicology and pharmacology concerns by various means including NATs reformulation using nanotechnology-based carriers has been reviewed before. However, little attention was given to the immunological and hematological issues associated with nanotechnology reformulation. This review focuses on application of nanotechnology carriers for delivery of various types of NATs, and how reformulation using nanoparticles affects immunological and hematological toxicities of this promising class of therapeutic agents. NATs share several immunological and hematological toxicities with common nanotechnology carriers. In order to avoid synergy or exaggeration of undesirable immunological and hematological effects of NATs by a nanocarrier, it is critical to consider the immunological compatibility of the nanotechnology platform and its components. Since receptors sensing nucleic acids are located essentially in all cellular compartments, a strategy for developing a nanoformulation with reduced immunotoxicity should first focus on precise delivery to the target site/cells and then on optimizing intracellular distribution.
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as an objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal-eigenvalue function is that it can be nondifferentiable. In addition, no closed-form expression exists for the computation of eigenvalues of a matrix larger than 4 by 4. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, the positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result, the maximization of the minimum-eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
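The key observation can be checked numerically: the smallest eigenvalue of a symmetric matrix F exceeds t exactly when F - tI is positive definite, and Sylvester's criterion turns that into sign conditions on leading principal minors, which are polynomial (hence differentiable) in the entries. This sketch uses bisection merely to verify the equivalence; note Sylvester's criterion in this form certifies positive definiteness, with the semidefinite boundary handled separately in the paper.

```python
# Verify numerically: lambda_min(F) = sup { t : F - t*I positive definite },
# with definiteness tested via Sylvester's leading-principal-minor signs.
# Pure-Python illustration on a 2x2 example (not the paper's solver).

def det(m):
    """Cofactor-expansion determinant (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive_definite(m):
    """Sylvester's criterion: all leading principal minors positive."""
    n = len(m)
    return all(det([row[:k] for row in m[:k]]) > 0 for k in range(1, n + 1))

def min_eigenvalue(F, lo=0.0, hi=100.0, iters=60):
    """Bisect for the largest t with F - t*I positive definite."""
    for _ in range(iters):
        t = (lo + hi) / 2
        shifted = [[F[i][j] - (t if i == j else 0) for j in range(len(F))]
                   for i in range(len(F))]
        lo, hi = (t, hi) if is_positive_definite(shifted) else (lo, t)
    return lo

F = [[4.0, 1.0], [1.0, 3.0]]   # symmetric; eigenvalues (7 ± sqrt(5)) / 2
print(round(min_eigenvalue(F), 4))
```

In the paper's formulation the minor-sign conditions enter the optimal control problem directly as nonlinear constraints, so automatic differentiation applies where an iterative eigensolver would block it.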
Kinetics of binary nucleation of vapors in size and composition space.
Fisenko, Sergey P; Wilemski, Gerald
2004-11-01
We reformulate the kinetic description of binary nucleation in the gas phase using two natural independent variables: the total number of molecules g and the molar composition x of the cluster. The resulting kinetic equation can be viewed as a two-dimensional Fokker-Planck equation describing the simultaneous Brownian motion of the clusters in size and composition space. Explicit expressions for the Brownian diffusion coefficients in cluster size and composition space are obtained. For characterization of binary nucleation in gases three criteria are established. These criteria establish the relative importance of the rate processes in cluster size and composition space for different gas phase conditions and types of liquid mixtures. The equilibrium distribution function of the clusters is determined in terms of the variables g and x. We obtain an approximate analytical solution for the steady-state binary nucleation rate that has the correct limit in the transition to unary nucleation. To further illustrate our description, the nonequilibrium steady-state cluster concentrations are found by numerically solving the reformulated kinetic equation. For the reformulated transient problem, the relaxation or induction time for binary nucleation was calculated using Galerkin's method. This relaxation time is affected by processes in both size and composition space, but the contributions from each process can be separated only approximately.
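Schematically (an illustrative drift-diffusion form with cross terms omitted; f(g, x, t) is the cluster distribution, ΔG the cluster formation free energy, and D_gg, D_xx the Brownian diffusion coefficients in size and composition space), the reformulated kinetic equation described above is of the two-dimensional Fokker-Planck type:

```latex
\frac{\partial f}{\partial t}
 = \frac{\partial}{\partial g}\!\left[ D_{gg}\!\left( \frac{\partial f}{\partial g}
    + \frac{f}{k_{B}T}\,\frac{\partial \Delta G}{\partial g} \right) \right]
 + \frac{\partial}{\partial x}\!\left[ D_{xx}\!\left( \frac{\partial f}{\partial x}
    + \frac{f}{k_{B}T}\,\frac{\partial \Delta G}{\partial x} \right) \right]
```

The equilibrium distribution f_eq ∝ exp(−ΔG/k_BT) then follows by setting both bracketed fluxes to zero, which is the structure the steady-state nucleation rate solution exploits.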
Davies, Patrick T; Martin, Meredith J
2013-11-01
Although children's security in the context of the interparental relationship has been identified as a key explanatory mechanism in pathways between family discord and child psychopathology, little is known about the inner workings of emotional security as a goal system. Thus, the objective of this paper is to describe how our reformulation of emotional security theory within an ethological and evolutionary framework may advance the characterization of the architecture and operation of emotional security and, in the process, cultivate sustainable growing points in developmental psychopathology. The first section of the paper describes how children's security in the interparental relationship is organized around a distinctive behavioral system designed to defend against interpersonal threat. Building on this evolutionary foundation for emotional security, the paper offers an innovative taxonomy for identifying qualitatively different ways children try to preserve their security and its innovative implications for more precisely informing understanding of the mechanisms in pathways between family and developmental precursors and children's trajectories of mental health. In the final section, the paper highlights the potential of the reformulation of emotional security theory to stimulate new generations of research on understanding how children defend against social threats in ecologies beyond the interparental dyad, including both familial and extrafamilial settings.
Lou, Wendy; L’Abbe, Mary R.
2017-01-01
To align with broader public health initiatives, reformulation of products to be lower in sugars requires interventions that also aim to reduce calorie contents. Currently available foods and beverages with a range of nutrient levels can be used to project successful reformulation opportunities. The objective of this study was to examine the relationship between free sugars and calorie levels in Canadian prepackaged foods and beverages. This study was a cross-sectional analysis of the University of Toronto's 2013 Food Label Database, limited to major sources of total sugar intake in Canada (n = 6755). Penalized B-spline regression modelling was used to examine the relationship between free sugar levels (g/100 g or 100 mL) and caloric density (kcal/100 g or 100 mL), by subcategory. Significant relationships were observed for only 3 of 5 beverage subcategories and for 14 of 32 food subcategories. Most subcategories demonstrated a positive trend of varying magnitude; however, results were not consistent across related subcategories (e.g., dairy-based products). Findings highlight potential areas of concern for reformulation, and the need for innovative solutions to ensure free sugars are reduced in products within the context of improving the overall nutritional quality of the diet. PMID:28872586
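A rough pure-Python stand-in for the penalized-spline regression described above (a truncated-power basis with a ridge penalty rather than the paper's penalized B-splines; the knots and the noise-free toy data are illustrative assumptions):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

KNOTS = [0.2, 0.4, 0.6, 0.8]   # illustrative knot positions

def basis(x):
    """Quadratic polynomial part plus a truncated-power term per knot."""
    return [1.0, x, x * x] + [max(0.0, x - k) ** 2 for k in KNOTS]

def fit(xs, ys, lam=1e-6):
    """Penalised least squares: (B'B + lam*D) c = B'y, penalising only
    the knot coefficients so the smooth polynomial part is unshrunk."""
    B = [basis(x) for x in xs]
    p = len(B[0])
    BtB = [[sum(row[a] * row[b] for row in B) for b in range(p)] for a in range(p)]
    for j in range(3, p):
        BtB[j][j] += lam
    Bty = [sum(row[a] * y for row, y in zip(B, ys)) for a in range(p)]
    return solve(BtB, Bty)

xs = [i / 20 for i in range(21)]
ys = [2.0 + 3.0 * x for x in xs]   # stand-in for "free sugars vs calories"
coef = fit(xs, ys)
predict = lambda x: sum(c * v for c, v in zip(coef, basis(x)))
print(round(predict(0.5), 3))      # ~3.5 on this linear toy data
```

The penalty weight controls the bias-variance trade-off exactly as in P-spline regression; on real per-subcategory data one would cross-validate it rather than fix it.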
Bruins, Maaike J.; Dötsch-Klerk, Mariska; Matthee, Joep; Kearney, Mary; van Elk, Kathelijn; Weber, Peter; Eggersdorfer, Manfred
2015-01-01
Hypertension is a major modifiable risk factor for cardiovascular disease and mortality, which could be lowered by reducing dietary sodium. The potential health impact of a product reformulation in the Netherlands was modelled, selecting packaged soups containing on average 25% less sodium as an example of an achievable product reformulation when implemented gradually. First, the blood pressure lowering resulting from the sodium intake reduction was modelled. Second, the predicted blood pressure lowering was translated into potentially preventable incidence and mortality cases from stroke, acute myocardial infarction (AMI), angina pectoris, and heart failure (HF), assuming one year of salt reduction. Finally, the potentially preventable subsequent lifetime Disability-Adjusted Life Years (DALYs) were calculated. The sodium reduction in soups might potentially reduce the incidence and mortality of stroke by approximately 0.5%, AMI and angina by 0.3%, and HF by 0.2%. The related burden of disease could be reduced by approximately 800 lifetime DALYs. This modelling approach can be used to provide insight into the potential public health impact of sodium reduction in specific food products. The data demonstrate that an achievable food product reformulation to reduce sodium can potentially benefit public health, albeit modestly. When implemented across multiple product categories and countries, a significant health impact could be achieved. PMID:26393647
RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA relative to the benchmark improves further.
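A minimal sketch of a permutation genetic algorithm for join ordering in the spirit of RCQ-GA (the cost model, operators, and parameters below are illustrative assumptions, not the paper's): chromosomes are join orders, selection is elitist, and order crossover (OX) keeps children valid permutations.

```python
import random

def order_crossover(a, b, rng):
    """OX: copy a random slice from parent a, fill the rest in b's order."""
    n = len(a)
    i, j = sorted(rng.sample(range(n), 2))
    kept = set(a[i:j])
    child = [None] * n
    child[i:j] = a[i:j]
    fill = iter(g for g in b if g not in kept)
    for k in range(n):
        if child[k] is None:
            child[k] = next(fill)
    return child

def join_order_ga(cost, n, pop_size=40, gens=150, mut=0.2, seed=7):
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # keep the cheapest join orders
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            c = order_crossover(p1, p2, rng)
            if rng.random() < mut:            # swap mutation
                x, y = rng.sample(range(n), 2)
                c[x], c[y] = c[y], c[x]
            children.append(c)
        pop = elite + children
    return min(pop, key=cost)

# Toy cost model for a chain query: joining predicates that are adjacent
# in the chain is cheap, so the sorted order (or its reverse) is optimal.
def chain_cost(order):
    return sum(abs(order[k + 1] - order[k]) for k in range(len(order) - 1))

best = join_order_ga(chain_cost, 6)
print(best, chain_cost(best))
```

A real optimizer would replace `chain_cost` with an estimate derived from join selectivities and intermediate-result cardinalities.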
Query Language for Location-Based Services: A Model Checking Approach
NASA Astrophysics Data System (ADS)
Hoareau, Christian; Satoh, Ichiro
We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is distinct from existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
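The core idea, evaluating a location query by model checking over a tree-structured symbolic space, can be sketched with a toy "somewhere below" operator (all names here are hypothetical, not from the paper):

```python
# A location is a node in a tree of symbolic places; entities are attached
# to nodes. A "somewhere" query is the modal/temporal-logic-style
# reachability check that the model checking view reduces queries to.

class Loc:
    def __init__(self, name, entities=(), children=()):
        self.name = name
        self.entities = set(entities)
        self.children = list(children)

def somewhere(loc, pred):
    """EF-style operator: pred holds at loc or at some descendant."""
    return pred(loc) or any(somewhere(c, pred) for c in loc.children)

def has(entity):
    return lambda loc: entity in loc.entities

# Hypothetical campus model.
room_a = Loc("room-a", entities={"printer"})
room_b = Loc("room-b", entities={"alice"})
building = Loc("building-1", children=[room_a, room_b])
campus = Loc("campus", children=[building])

print(somewhere(campus, has("printer")))   # True: a printer exists below campus
print(somewhere(room_b, has("printer")))   # False: not in that subtree
```

Richer hybrid-logic formulas (naming specific locations, nesting operators) compose the same way, each connective becoming a recursive check over the tree.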
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H
2012-11-06
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index-building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
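The partition-merge idea can be sketched outside MapReduce as a grid partition (the map side) followed by per-tile joins (the reduce side); replicating one input into neighboring tiles catches pairs that straddle tile boundaries. This is an illustrative reconstruction, not the paper's implementation, and it assumes eps ≤ tile size:

```python
from collections import defaultdict
from math import dist

def spatial_join(A, B, eps, tile=1.0):
    """All (a, b) pairs with distance <= eps, via grid partitioning.
    Requires eps <= tile so a neighbor-cell replication of B suffices."""
    buckets = defaultdict(lambda: ([], []))
    # "Map" phase: A points go to their own tile; B points are replicated
    # to the 3x3 neighborhood so cross-tile pairs meet in some bucket.
    for p in A:
        buckets[(int(p[0] // tile), int(p[1] // tile))][0].append(p)
    for q in B:
        tx, ty = int(q[0] // tile), int(q[1] // tile)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                buckets[(tx + dx, ty + dy)][1].append(q)
    # "Reduce" phase: independent per-tile joins, results merged in a set.
    out = set()
    for a_pts, b_pts in buckets.values():
        for p in a_pts:
            for q in b_pts:
                if dist(p, q) <= eps:
                    out.add((p, q))
    return out

A = [(0.1, 0.1), (0.95, 0.95), (2.5, 2.5)]
B = [(0.2, 0.1), (1.05, 1.0), (5.0, 5.0)]
pairs = spatial_join(A, B, eps=0.2)
print(sorted(pairs))   # includes the pair that straddles the tile boundary
```

Cost-based optimization as described in the paper would additionally split or re-route overfull tiles to mitigate data skew.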
Horsman, Graeme
2018-04-23
The forensic analysis of mobile handsets is becoming a more prominent factor in many criminal investigations. Despite such devices frequently storing relevant evidential content to support an investigation, accessing this information is becoming an increasingly difficult task due to increasingly effective security features. Where access to a device's resident data is not possible via traditional mobile forensic methods, in some cases it may still be possible to extract user information via queries made to an installed intelligent personal assistant. This article presents an evaluation of the information which is retrievable from Apple's Siri when interacted with on a locked iOS device running iOS 11.2.5 (the latest at the time of testing). The testing of verbal commands designed to elicit a response from Siri demonstrates the ability to recover call log, SMS, Contacts, Apple Maps, Calendar, and device information which may support any further investigation. © 2018 American Academy of Forensic Sciences.
Query Expansion and Query Translation as Logical Inference.
ERIC Educational Resources Information Center
Nie, Jian-Yun
2003-01-01
Examines query expansion during query translation in cross language information retrieval and develops a general framework for inferential information retrieval in two particular contexts: using fuzzy logic and probability theory. Obtains evaluation formulas that are shown to strongly correspond to those used in other information retrieval models.…
End-User Use of Data Base Query Language: Pros and Cons.
ERIC Educational Resources Information Center
Nicholes, Walter
1988-01-01
Man-machine interface, the concept of a computer "query," a review of database technology, and a description of the use of query languages at Brigham Young University are discussed. The pros and cons of end-user use of database query languages are explored. (Author/MLW)
Information Retrieval Using UMLS-based Structured Queries
Fagan, Lawrence M.; Berrios, Daniel C.; Chan, Albert; Cucina, Russell; Datta, Anupam; Shah, Maulik; Surendran, Sujith
2001-01-01
During the last three years, we have developed and described components of ELBook, a semantically based information-retrieval system [1-4]. Using these components, domain experts can specify a query model, indexers can use the query model to index documents, and end-users can search these documents for instances of indexed queries.
A Relational Algebra Query Language for Programming Relational Databases
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole
2011-01-01
In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
Collins, Marissa; Mason, Helen; O'Flaherty, Martin; Guzman-Castillo, Maria; Critchley, Julia; Capewell, Simon
2014-07-01
Dietary salt intake has been causally linked to high blood pressure and increased risk of cardiovascular events. Cardiovascular disease causes approximately 35% of total UK deaths, at an estimated annual cost of £30 billion. The World Health Organization and the National Institute for Health and Care Excellence have recommended a reduction in the intake of salt in people's diets. This study evaluated the cost-effectiveness of four population health policies to reduce dietary salt intake in an English population to prevent coronary heart disease (CHD). The validated IMPACT CHD model was used to quantify and compare four policies: 1) Change4Life health promotion campaign, 2) front-of-pack traffic light labeling to display salt content, 3) Food Standards Agency working with the food industry to reduce salt (voluntary), and 4) mandatory reformulation to reduce salt in processed foods. The effectiveness of these policies in reducing salt intake, and hence blood pressure, was determined by systematic literature review. The model calculated the reduction in mortality associated with each policy, quantified as life-years gained over 10 years. Policy costs were calculated using evidence from published sources. Health care costs for specific CHD patient groups were estimated. Costs were compared against a "do nothing" baseline. All policies resulted in a life-year gain over the baseline. Change4Life and labeling each gained approximately 1960 life-years, voluntary reformulation 14,560 life-years, and mandatory reformulation 19,320 life-years. Each policy appeared cost saving, with mandatory reformulation offering the largest cost saving, more than £660 million. All policies to reduce dietary salt intake could gain life-years and reduce health care expenditure on coronary heart disease. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
An Ensemble Approach for Expanding Queries
2012-11-01
[Table excerpt: expansion terms with collection frequencies and weights, e.g. Hospital (15094) → hospital^0.82, Miscarriage (45) → miscarriage^3.35, Radiotherapy (53) → radiotherapy^3.28, Hypoaldosteronism (3) → …] A negated query is the expansion of the original query with negation terms preceding each word. For example, the negated version of "miscarriage"^3.35 includes "no miscarriage"^3.35 and "not miscarriage"^3.35. If a document is the result of both the original query and the negated query, its score is
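The negated-query construction described above can be sketched directly (weights taken from the excerpt; the function name is illustrative):

```python
def negate(weighted_terms):
    """Build the negated query: each weighted term keeps its weight but is
    prefixed with "no" and "not", e.g. miscarriage^3.35 yields
    "no miscarriage"^3.35 and "not miscarriage"^3.35."""
    out = []
    for term, w in weighted_terms:
        out.append((f"no {term}", w))
        out.append((f"not {term}", w))
    return out

expansion = [("pain", 0.39), ("hospital", 0.82),
             ("miscarriage", 3.35), ("radiotherapy", 3.28)]
print(negate(expansion)[:2])   # [('no pain', 0.39), ('not pain', 0.39)]
```

Documents retrieved by both the original and the negated query can then have their scores adjusted, which is the case the excerpt's final (truncated) sentence begins to describe.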
Systematic review of dietary salt reduction policies: Evidence for an effectiveness hierarchy?
Hyseni, Lirije; Elliot-Green, Alex; Lloyd-Williams, Ffion; Kypridemos, Chris; O'Flaherty, Martin; McGill, Rory; Orton, Lois; Bromley, Helen; Cappuccio, Francesco P; Capewell, Simon
2017-01-01
Non-communicable disease (NCD) prevention strategies now prioritise four major risk factors: food, tobacco, alcohol and physical activity. Dietary salt intake remains much higher than recommended, increasing blood pressure, cardiovascular disease and stomach cancer. Substantial reductions in salt intake are therefore urgently needed. However, the debate continues about the most effective approaches. To inform future prevention programmes, we systematically reviewed the evidence on the effectiveness of possible salt reduction interventions. We further compared "downstream, agentic" approaches targeting individuals with "upstream, structural" policy-based population strategies. We searched six electronic databases (CDSR, CRD, MEDLINE, SCI, SCOPUS and the Campbell Library) using a pre-piloted search strategy focussing on the effectiveness of population interventions to reduce salt intake. Retrieved papers were independently screened, appraised and graded for quality by two researchers. To facilitate comparisons between the interventions, the extracted data were categorised using nine stages along the agentic/structural continuum, from "downstream": dietary counselling (for individuals, worksites or communities), through media campaigns, nutrition labelling, voluntary and mandatory reformulation, to the most "upstream" regulatory and fiscal interventions, and comprehensive strategies involving multiple components. After screening 2,526 candidate papers, 70 were included in this systematic review (49 empirical studies and 21 modelling studies). Some papers described several interventions. Quality was variable. Multi-component strategies involving both upstream and downstream interventions, generally achieved the biggest reductions in salt consumption across an entire population, most notably 4g/day in Finland and Japan, 3g/day in Turkey and 1.3g/day recently in the UK. 
Mandatory reformulation alone could achieve a reduction of approximately 1.45g/day (three separate studies), followed by voluntary reformulation (-0.8g/day), school interventions (-0.7g/day), short term dietary advice (-0.6g/day) and nutrition labelling (-0.4g/day), but each with a wide range. Tax and community-based counselling could each typically reduce salt intake by 0.3g/day, whilst even smaller population benefits were derived from health education media campaigns (-0.1g/day). Worksite interventions achieved an increase in intake (+0.5g/day), albeit with a very wide range. Long term dietary advice could achieve a -2g/day reduction under optimal research trial conditions; however, smaller reductions might be anticipated in unselected individuals. Comprehensive strategies involving multiple components (reformulation, food labelling and media campaigns) and "upstream" population-wide policies such as mandatory reformulation generally appear to achieve larger reductions in population-wide salt consumption than "downstream", individually focussed interventions. This 'effectiveness hierarchy' might deserve greater emphasis in future NCD prevention strategies.
A novel adaptive Cuckoo search for optimal query plan generation.
Gomathi, Ramalingam; Sharmila, Dhandapani
2014-01-01
The rapid day-by-day emergence of new web pages has driven the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve the execution time for querying large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization of semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with a varying number of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
Query-Based Outlier Detection in Heterogeneous Information Networks.
Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei
2015-03-01
Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user's search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks.
Querying and Extracting Timeline Information from Road Traffic Sensor Data
Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen
2016-01-01
The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900
NASA Astrophysics Data System (ADS)
Kondo, Kei-Ichi; Kato, Seikou; Shibata, Akihiro; Shinohara, Toru
2015-05-01
The purpose of this paper is to review the recent progress in understanding quark confinement. The emphasis of this review is placed on how to obtain a manifestly gauge-independent picture for quark confinement supporting the dual superconductivity in the Yang-Mills theory, which should be compared with the Abelian projection proposed by 't Hooft. The basic tools are novel reformulations of the Yang-Mills theory based on change of variables extending the decomposition of the SU(N) Yang-Mills field due to Cho, Duan-Ge and Faddeev-Niemi, together with the combined use of extended versions of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the SU(N) Wilson loop operator. Moreover, we give the lattice gauge theoretical versions of the reformulation of the Yang-Mills theory, which enables us to perform numerical simulations on the lattice. In fact, we present some numerical evidence supporting the dual superconductivity picture of quark confinement. The numerical simulations include the derivation of the linear static interquark potential, i.e., non-vanishing string tension, in which the "Abelian" dominance and magnetic monopole dominance are established; confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair, the induced magnetic-monopole current, and the type of dual superconductivity; etc. In addition, we give a direct connection between the topological configurations of the Yang-Mills field, such as instantons/merons, and the magnetic monopole.
We show especially that magnetic monopoles in the Yang-Mills theory can be constructed in a manifestly gauge-invariant way starting from the gauge-invariant Wilson loop operator, and thereby the contribution from the magnetic monopoles can be extracted from the Wilson loop in a gauge-invariant way through the non-Abelian Stokes theorem for the Wilson loop operator, which is a prerequisite for exhibiting magnetic monopole dominance for quark confinement. The Wilson loop average is calculated according to the new reformulation written in terms of new field variables obtained from the original Yang-Mills field based on change of variables. The maximally Abelian gauge in the original Yang-Mills theory is also reproduced by taking a specific gauge fixing in the reformulated Yang-Mills theory. This observation justifies the preceding results obtained in the maximally Abelian gauge, at least for gauge-invariant quantities for the SU(2) gauge group, which eliminates the criticism of gauge artifacts raised against the Abelian projection. The claim has been confirmed by numerical simulations. However, for SU(N) (N ≥ 3), such a gauge-invariant reformulation is not unique, although the extension along the line proposed by Cho, Faddeev and Niemi is possible. In fact, we have found that there are a number of possible options for the reformulation, discriminated by the maximal stability group H̃ of G, while there is a unique option H̃ = U(1) for G = SU(2). The maximal stability group depends on the representation of the gauge group to which the quark source belongs. For the fundamental quark for SU(3), the maximal stability group is U(2), which is different from the maximal torus group U(1) × U(1) suggested by the Abelian projection.
Therefore, the chromomagnetic monopole inherent in the Wilson loop operator responsible for confinement of quarks in the fundamental representation for SU(3) is the non-Abelian magnetic monopole, which is distinct from the Abelian magnetic monopole for the SU(2) case. Therefore, we claim that the mechanism for quark confinement for SU(N) (N ≥ 3) is the non-Abelian dual superconductivity caused by condensation of non-Abelian magnetic monopoles. We give some theoretical considerations and numerical results supporting this picture. Finally, we discuss some issues to be investigated in future studies.
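For reference, the Wilson loop operator central to this review is the standard gauge-invariant trace of the path-ordered exponential of the gauge field along a closed loop C, and the area law obeyed by its average is what encodes the linear static potential:

```latex
W_C[\mathcal{A}]
  = \frac{1}{\operatorname{tr}\mathbf{1}}\,
    \operatorname{tr}\,\mathcal{P}\exp\!\left( i g \oint_{C} \mathcal{A}_{\mu}(x)\, dx^{\mu} \right),
\qquad
\langle W_C[\mathcal{A}] \rangle \sim e^{-\sigma |S|}
```

Here 𝒫 denotes path ordering and |S| the minimal area bounded by C; the area law implies V(r) = σr for a rectangular loop, i.e. the non-vanishing string tension σ measured in the simulations.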
Policy Compliance of Queries for Private Information Retrieval
2010-11-01
SPARQL, unfortunately, is not in RDF, and so we had to develop tools to translate SPARQL queries into RDF to be used by our policy compliance prototype [...] (...policy-assurance/sparql2n3.py) that accepts SPARQL queries and returns the translated query in our simplified ontology. An example of a translated [...]
Knowledge Query Language (KQL)
2016-02-12
Lexington, Massachusetts. EXECUTIVE SUMMARY: Currently, queries for data [...] retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store [...] independent of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions
Fragger: a protein fragment picker for structural queries.
Berenger, Francois; Simoncini, David; Voet, Arnout; Shrestha, Rojan; Zhang, Kam Y J
2017-01-01
Protein modeling and design activities often require querying the Protein Data Bank (PDB) with a structural fragment, possibly containing gaps. For some applications, it is preferable to work on a specific subset of the PDB or with unpublished structures. These requirements, along with specific user needs, motivated the creation of a new software to manage and query 3D protein fragments. Fragger is a protein fragment picker that allows protein fragment databases to be created and queried. All fragment lengths are supported and any set of PDB files can be used to create a database. Fragger can efficiently search a fragment database with a query fragment and a distance threshold. Matching fragments are ranked by distance to the query. The query fragment can have structural gaps and the allowed amino acid sequences matching a query can be constrained via a regular expression of one-letter amino acid codes. Fragger also incorporates a tool to compute the backbone RMSD of one versus many fragments in high throughput. Fragger should be useful for protein design, loop grafting and related structural bioinformatics tasks.
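A toy version of such a fragment query (coordinate RMSD without superposition, a regex sequence constraint, and ranking by distance; the data and names are illustrative assumptions, not Fragger's API):

```python
import math
import re

def rmsd(a, b):
    """Coordinate RMSD without superposition; a real pipeline would first
    align the fragments (e.g. with the Kabsch algorithm)."""
    assert len(a) == len(b)
    s = sum((p[i] - q[i]) ** 2 for p, q in zip(a, b) for i in range(3))
    return math.sqrt(s / len(a))

def query_fragments(db, frag, threshold, seq_pattern=None):
    """Rank database fragments by distance to the query fragment,
    optionally constraining the one-letter sequence with a regex."""
    pat = re.compile(seq_pattern) if seq_pattern else None
    hits = []
    for seq, coords in db:
        if len(coords) != len(frag):
            continue
        if pat and not pat.fullmatch(seq):
            continue
        d = rmsd(frag, coords)
        if d <= threshold:
            hits.append((d, seq))
    return sorted(hits)

# Hypothetical 3-residue CA traces (illustrative, not real PDB data).
db = [("ACD", [(0, 0, 0), (1, 0, 0), (2, 0, 0)]),
      ("AGD", [(0, 0, 0), (1, 0, 0), (2, 1, 0)]),
      ("KKK", [(0, 0, 0), (1, 0, 0), (2, 0, 0)])]
frag = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
hits = query_fragments(db, frag, 1.0, seq_pattern="A.D")
print(hits)   # exact match "ACD" first, then the near match "AGD"
```

Gapped queries of the kind the abstract mentions would additionally skip the masked positions when accumulating the squared deviations.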
NASA Astrophysics Data System (ADS)
Skotniczny, Zbigniew
1989-12-01
The Query by Forms (QbF) system is a user-oriented interactive tool for querying large relational databases with minimal query-definition cost. The system was developed under the assumption that the user's time and effort in defining needed queries is the most severe bottleneck. The system may be applied to any Rdb/VMS database system and is recommended for specific information systems of any project where end-user queries cannot be foreseen. The tool is dedicated to specialists of an application domain who have to analyze data maintained in a database from any needed point of view, and who do not need to know commercial database languages. The paper presents the system as a compromise between functionality and usability. User-system communication via a menu-driven "tree-like" structure of screen forms, which produces a query definition and execution, is discussed in detail. Output of query results (printed reports and graphics) is also discussed. Finally, the paper shows one application of QbF to a HERA project.
Multidimensional indexing structure for use with linear optimization queries
NASA Technical Reports Server (NTRS)
Bergman, Lawrence David (Inventor); Castelli, Vittorio (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor)
2002-01-01
Linear optimization queries, which usually arise in various decision support and resource planning applications, are queries that retrieve the top N data records (where N is an integer greater than zero) which satisfy a specific optimization criterion. The optimization criterion is to either maximize or minimize a linear equation. The coefficients of the linear equation are given at query time. Methods and apparatus are disclosed for constructing, maintaining and utilizing a multidimensional indexing structure of database records to improve the execution speed of linear optimization queries. Database records with numerical attributes are organized into a number of layers, each of which represents a geometric structure called a convex hull. Such linear optimization queries are processed by searching from the outer-most layer of this multi-layer indexing structure inwards. At least one record per layer will satisfy the query criterion, and the number of layers needed to be searched depends on the spatial distribution of records, the query-issued linear coefficients, and N, the number of records to be returned. When N is small compared to the total size of the database, answering the query typically requires searching only a small fraction of all relevant records, resulting in a tremendous speedup as compared to linearly scanning the entire dataset.
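A rough two-dimensional sketch of the layered ("onion peeling") idea, assuming the standard monotone-chain hull algorithm; the patent's multidimensional structure and its maintenance machinery are more involved. For a top-1 query the maximizer of any linear objective must lie on the outermost convex hull, so only the first layer needs scanning:

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion_layers(points):
    """Peel nested convex hulls, outermost first."""
    layers, remaining = [], list(points)
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        remaining = [p for p in remaining if p not in set(hull)]
    return layers

def top1_linear(layers, c):
    """The maximizer of a linear objective lies on the outermost hull."""
    return max(layers[0], key=lambda p: c[0]*p[0] + c[1]*p[1])

records = [(1, 1), (2, 5), (5, 2), (4, 4), (0, 3), (3, 0), (2, 2), (3, 3)]
layers = onion_layers(records)
best = top1_linear(layers, (1.0, 1.5))  # coefficients supplied at query time
print(len(layers), best)
```

For general N the search would descend into inner layers until N records are collected, which is why the number of layers visited depends on N and on the query coefficients.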
The role of economics in the QUERI program: QUERI Series
Smith, Mark W; Barnett, Paul G
2008-01-01
Background The United States (U.S.) Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI) has implemented economic analyses in single-site and multi-site clinical trials. To date, no one has reviewed whether the QUERI Centers are taking an optimal approach to doing so. Consistent with the continuous learning culture of the QUERI Program, this paper provides such a reflection. Methods We present a case study of QUERI as an example of how economic considerations can and should be integrated into implementation research within both single and multi-site studies. We review theoretical and applied cost research in implementation studies outside and within VA. We also present a critique of the use of economic research within the QUERI program. Results Economic evaluation is a key element of implementation research. QUERI has contributed many developments in the field of implementation but has only recently begun multi-site implementation trials across multiple regions within the national VA healthcare system. These trials are unusual in their emphasis on developing detailed costs of implementation, as well as in the use of business case analyses (budget impact analyses). Conclusion Economics appears to play an important role in QUERI implementation studies, but only after implementation has reached the stage of multi-site trials. Economic analysis could better inform the choice of which clinical best practices to implement and the choice of implementation interventions to employ. QUERI economics also would benefit from research on costing methods and development of widely accepted international standards for implementation economics. PMID:18430199
The role of economics in the QUERI program: QUERI Series.
Smith, Mark W; Barnett, Paul G
2008-04-22
The United States (U.S.) Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI) has implemented economic analyses in single-site and multi-site clinical trials. To date, no one has reviewed whether the QUERI Centers are taking an optimal approach to doing so. Consistent with the continuous learning culture of the QUERI Program, this paper provides such a reflection. We present a case study of QUERI as an example of how economic considerations can and should be integrated into implementation research within both single and multi-site studies. We review theoretical and applied cost research in implementation studies outside and within VA. We also present a critique of the use of economic research within the QUERI program. Economic evaluation is a key element of implementation research. QUERI has contributed many developments in the field of implementation but has only recently begun multi-site implementation trials across multiple regions within the national VA healthcare system. These trials are unusual in their emphasis on developing detailed costs of implementation, as well as in the use of business case analyses (budget impact analyses). Economics appears to play an important role in QUERI implementation studies, but only after implementation has reached the stage of multi-site trials. Economic analysis could better inform the choice of which clinical best practices to implement and the choice of implementation interventions to employ. QUERI economics also would benefit from research on costing methods and development of widely accepted international standards for implementation economics.
Processing SPARQL queries with regular expressions in RDF databases
2011-01-01
Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - the W3C-recommended query language for RDF databases - has become an important means of querying bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
Processing SPARQL queries with regular expressions in RDF databases.
Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon
2011-03-29
As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - the W3C-recommended query language for RDF databases - has become an important means of querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
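The paper's C++ prototype is not public; as a toy illustration of the query pattern it optimizes, the sketch below applies a SPARQL-style `FILTER regex(...)` to an in-memory list of triples using Python's `re` module. The triples and predicate names are made up for the example; a real engine would evaluate the filter against an indexed store rather than a linear scan.

```python
import re

# Toy RDF-like triples (subject, predicate, object); values are illustrative.
triples = [
    ("uniprot:P69905", "rdfs:label", "Hemoglobin subunit alpha"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
    ("uniprot:P02144", "rdfs:label", "Myoglobin"),
]

def filter_regex(triples, predicate, pattern, flags=re.IGNORECASE):
    """Rough analogue of SPARQL's  FILTER regex(?label, pattern, "i")
    applied by scanning an in-memory triple list."""
    rx = re.compile(pattern, flags)
    return [(s, o) for s, p, o in triples if p == predicate and rx.search(o)]

hits = filter_regex(triples, "rdfs:label", r"^hemoglobin.*alpha")
print(hits)
```

The cost the paper attacks is exactly the linear scan this sketch performs: without regex-aware indexing, every candidate string must be run through the automaton.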
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventionally based database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
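The abstract's key finding — attribute-centered queries cost more under EAV — follows from the self-joins that schema requires. Below is a minimal, self-contained illustration using SQLite with made-up microbiology rows (the study's actual schemas and data are not shown in the record): the same two-attribute question is one `WHERE` clause conventionally, but one self-join per constrained attribute in EAV.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conventional schema: one column per attribute.
cur.execute("CREATE TABLE culture (id INTEGER, organism TEXT, site TEXT)")
rows = [(1, "E. coli", "urine"), (2, "S. aureus", "blood"), (3, "E. coli", "blood")]
cur.executemany("INSERT INTO culture VALUES (?, ?, ?)", rows)

# EAV schema: one row per (entity, attribute, value) fact.
cur.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
for rid, organism, site in rows:
    cur.execute("INSERT INTO eav VALUES (?, 'organism', ?)", (rid, organism))
    cur.execute("INSERT INTO eav VALUES (?, 'site', ?)", (rid, site))

# Attribute-centered query: entities with E. coli isolated from blood.
conv_hits = cur.execute(
    "SELECT id FROM culture WHERE organism = 'E. coli' AND site = 'blood'").fetchall()

# The same query in EAV needs one self-join per constrained attribute,
# which is the main source of its slower attribute-centered performance.
eav_hits = cur.execute("""
    SELECT a.entity FROM eav a JOIN eav b ON a.entity = b.entity
    WHERE a.attribute = 'organism' AND a.value = 'E. coli'
      AND b.attribute = 'site'     AND b.value = 'blood'""").fetchall()

print(conv_hits, eav_hits)
```

The authors' observation that several simple batched statements beat one large statement corresponds to replacing such multi-way self-joins with a sequence of single-attribute selections intersected in application code.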
Seo, Dong-Woo; Sohn, Chang Hwan; Kim, Sung-Hoon; Ryoo, Seung Mok; Lee, Yoon-Seon; Lee, Jae Ho; Kim, Won Young; Lim, Kyoung Soo
2016-01-01
Background Digital surveillance using internet search queries can improve both the sensitivity and timeliness of the detection of a health event, such as an influenza outbreak. While it has recently been estimated that the mobile search volume surpasses the desktop search volume and mobile search patterns differ from desktop search patterns, the previous digital surveillance systems did not distinguish mobile and desktop search queries. The purpose of this study was to compare the performance of mobile and desktop search queries in terms of digital influenza surveillance. Methods and Results The study period was from September 6, 2010 through August 30, 2014, which consisted of four epidemiological years. Influenza-like illness (ILI) and virologic surveillance data from the Korea Centers for Disease Control and Prevention were used. A total of 210 combined queries from our previous survey work were used for this study. Mobile and desktop weekly search data were extracted from Naver, which is the largest search engine in Korea. Spearman’s correlation analysis was used to examine the correlation of the mobile and desktop data with ILI and virologic data in Korea. We also performed lag correlation analysis. We observed that the influenza surveillance performance of mobile search queries matched or exceeded that of desktop search queries over time. The mean correlation coefficients of mobile search queries and the number of queries with an r-value of ≥ 0.7 equaled or became greater than those of desktop searches over the four epidemiological years. A lag correlation analysis of up to two weeks showed similar trends. Conclusion Our study shows that mobile search queries for influenza surveillance have equaled or even surpassed desktop search queries over time. In the future development of influenza surveillance using search queries, recognizing the changing trend of mobile search data may be necessary. PMID:27391028
Shin, Soo-Yong; Kim, Taerim; Seo, Dong-Woo; Sohn, Chang Hwan; Kim, Sung-Hoon; Ryoo, Seung Mok; Lee, Yoon-Seon; Lee, Jae Ho; Kim, Won Young; Lim, Kyoung Soo
2016-01-01
Digital surveillance using internet search queries can improve both the sensitivity and timeliness of the detection of a health event, such as an influenza outbreak. While it has recently been estimated that the mobile search volume surpasses the desktop search volume and mobile search patterns differ from desktop search patterns, the previous digital surveillance systems did not distinguish mobile and desktop search queries. The purpose of this study was to compare the performance of mobile and desktop search queries in terms of digital influenza surveillance. The study period was from September 6, 2010 through August 30, 2014, which consisted of four epidemiological years. Influenza-like illness (ILI) and virologic surveillance data from the Korea Centers for Disease Control and Prevention were used. A total of 210 combined queries from our previous survey work were used for this study. Mobile and desktop weekly search data were extracted from Naver, which is the largest search engine in Korea. Spearman's correlation analysis was used to examine the correlation of the mobile and desktop data with ILI and virologic data in Korea. We also performed lag correlation analysis. We observed that the influenza surveillance performance of mobile search queries matched or exceeded that of desktop search queries over time. The mean correlation coefficients of mobile search queries and the number of queries with an r-value of ≥ 0.7 equaled or became greater than those of desktop searches over the four epidemiological years. A lag correlation analysis of up to two weeks showed similar trends. Our study shows that mobile search queries for influenza surveillance have equaled or even surpassed desktop search queries over time. In the future development of influenza surveillance using search queries, recognizing the changing trend of mobile search data may be necessary.
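The study's two statistics — Spearman's rank correlation between weekly search volume and ILI rates, and the same correlation at a lag — can be sketched in a few lines. The weekly series below are invented to make the point (search interest leading ILI by one week); the real analysis used 210 Naver queries against KCDC surveillance data.

```python
import math

def rank(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

# Hypothetical weekly series: search volume leads ILI by one week.
search = [1, 2, 3, 5, 8, 13, 9, 4]
ili    = [0, 1, 2, 3, 5, 8, 13, 9]

same_week = spearman(search, ili)
lead1 = spearman(search[:-1], ili[1:])  # search shifted one week earlier
print(round(same_week, 3), round(lead1, 3))
```

The lag correlation exceeding the same-week correlation is exactly the signature that makes search queries useful for early detection.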
Searching for cancer information on the internet: analyzing natural language search queries.
Bader, Judith L; Theofanos, Mary Frances
2003-12-11
Searching for health information is one of the most-common tasks performed by Internet users. Many users begin searching on popular search engines rather than on prominent health information sites. We know that many visitors to our (National Cancer Institute) Web site, cancer.gov, arrive via links in search engine results. To learn more about the specific needs of our general-public users, we wanted to understand what lay users really wanted to know about cancer, how they phrased their questions, and how much detail they used. The National Cancer Institute partnered with AskJeeves, Inc to develop a methodology to capture, sample, and analyze 3 months of cancer-related queries on the Ask.com Web site, a prominent United States consumer search engine, which receives over 35 million queries per week. Using a benchmark set of 500 terms and word roots supplied by the National Cancer Institute, AskJeeves identified a test sample of cancer queries for 1 week in August 2001. From these 500 terms only 37 appeared ≥ 5 times/day over the trial test week in 17208 queries. Using these 37 terms, 204165 instances of cancer queries were found in the Ask.com query logs for the actual test period of June-August 2001. Of these, 7500 individual user questions were randomly selected for detailed analysis and assigned to appropriate categories. The exact language of sample queries is presented. Considering multiples of the same questions, the sample of 7500 individual user queries represented 76077 queries (37% of the total 3-month pool). Overall 78.37% of sampled Cancer queries asked about 14 specific cancer types. Within each cancer type, queries were sorted into appropriate subcategories including at least the following: General Information, Symptoms, Diagnosis and Testing, Treatment, Statistics, Definition, and Cause/Risk/Link. 
The most-common specific cancer types mentioned in queries were Digestive/Gastrointestinal/Bowel (15.0%), Breast (11.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience.
Searching for Cancer Information on the Internet: Analyzing Natural Language Search Queries
Theofanos, Mary Frances
2003-01-01
Background Searching for health information is one of the most-common tasks performed by Internet users. Many users begin searching on popular search engines rather than on prominent health information sites. We know that many visitors to our (National Cancer Institute) Web site, cancer.gov, arrive via links in search engine results. Objective To learn more about the specific needs of our general-public users, we wanted to understand what lay users really wanted to know about cancer, how they phrased their questions, and how much detail they used. Methods The National Cancer Institute partnered with AskJeeves, Inc to develop a methodology to capture, sample, and analyze 3 months of cancer-related queries on the Ask.com Web site, a prominent United States consumer search engine, which receives over 35 million queries per week. Using a benchmark set of 500 terms and word roots supplied by the National Cancer Institute, AskJeeves identified a test sample of cancer queries for 1 week in August 2001. From these 500 terms only 37 appeared ≥ 5 times/day over the trial test week in 17208 queries. Using these 37 terms, 204165 instances of cancer queries were found in the Ask.com query logs for the actual test period of June-August 2001. Of these, 7500 individual user questions were randomly selected for detailed analysis and assigned to appropriate categories. The exact language of sample queries is presented. Results Considering multiples of the same questions, the sample of 7500 individual user queries represented 76077 queries (37% of the total 3-month pool). Overall 78.37% of sampled Cancer queries asked about 14 specific cancer types. Within each cancer type, queries were sorted into appropriate subcategories including at least the following: General Information, Symptoms, Diagnosis and Testing, Treatment, Statistics, Definition, and Cause/Risk/Link. 
The most-common specific cancer types mentioned in queries were Digestive/Gastrointestinal/Bowel (15.0%), Breast (11.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Conclusions Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience. PMID:14713659
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
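The full configuration control scheme with task prioritization is beyond a short sketch, but its core mechanism for suppressing large joint velocities near singularities is the damped least-squares (singularity-robust) inverse. Below is a minimal illustration on a planar two-link arm; the arm, the damping constant, and all names are assumptions for the example, not the paper's exact formulation.

```python
import math

def jacobian(q1, q2, l1=1.0, l2=1.0):
    """End-effector position Jacobian of a planar 2-link arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1*s1 - l2*s12, -l2*s12],
            [ l1*c1 + l2*c12,  l2*c12]]

def damped_joint_rates(J, xdot, lam):
    """qdot = J^T (J J^T + lam^2 I)^(-1) xdot -- the damped least-squares
    resolution; lam > 0 trades small task error for bounded joint rates."""
    a = J[0][0]*J[0][0] + J[0][1]*J[0][1] + lam*lam
    b = J[0][0]*J[1][0] + J[0][1]*J[1][1]
    d = J[1][0]*J[1][0] + J[1][1]*J[1][1] + lam*lam
    det = a*d - b*b
    # invert the symmetric 2x2 matrix [[a, b], [b, d]] explicitly
    y0 = ( d*xdot[0] - b*xdot[1]) / det
    y1 = (-b*xdot[0] + a*xdot[1]) / det
    return [J[0][0]*y0 + J[1][0]*y1,
            J[0][1]*y0 + J[1][1]*y1]

# Near the outstretched singularity (q2 ~ 0) the exact inverse blows up;
# damping keeps the commanded joint velocities bounded.
J_sing = jacobian(0.0, 1e-6)
qdot = damped_joint_rates(J_sing, [0.0, 1.0], lam=0.1)
print(qdot)
```

Away from singularities and with a tiny damping factor, the damped solution reproduces the exact inverse kinematics almost perfectly, which is why the scheme only pays a small task-trajectory error where it matters.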
2PI effective action for the SYK model and tensor field theories
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2018-05-01
We discuss the two-particle irreducible (2PI) effective action for the SYK model and for tensor field theories. For the SYK model the 2PI effective action reproduces the bilocal reformulation of the model without using replicas. In general tensor field theories the 2PI formalism is the only way to obtain a bilocal reformulation of the theory, and as such is a precious instrument for the identification of soft modes and for possible holographic interpretations. We compute the 2PI action for several models, and push it up to fourth order in the 1/N expansion for the model proposed by Witten in [1], uncovering a one-loop structure in terms of an auxiliary bilocal action.
Downs, Shauna M; Thow, Anne Marie; Ghosh-Jerath, Suparna; Leeder, Stephen R
2015-01-01
The national Government of India has published draft regulation proposing a 5% upper limit of trans fat in partially hydrogenated vegetable oils (PHVOs). Global recommendations are to replace PHVOs with unsaturated fat but it is not known whether this will be feasible in India. We systematically identified policy options to address the three major underlying agricultural sector issues that influence reformulation with healthier oils: the low productivity of domestically produced oilseeds leading to a reliance on palm oil imports, supply chain wastage, and the low availability of oils high in unsaturated fats. Strengthening domestic supply chains in India will be necessary to maximize health gains associated with product reformulation.
Plans for a Next Generation Space-Based Gravitational-Wave Observatory (NGO)
NASA Technical Reports Server (NTRS)
Livas, Jeffrey C.; Stebbins, Robin T.; Jennrich, Oliver
2012-01-01
The European Space Agency (ESA) is currently in the process of selecting a mission for the Cosmic Visions Program. A space-based gravitational wave observatory in the low-frequency band (0.0001 - 1 Hz) of the gravitational wave spectrum is one of the leading contenders. This low frequency band has a rich spectrum of astrophysical sources, and the LISA concept has been the key mission to cover this science for over twenty years. Tight budgets have recently forced ESA to consider a reformulation of the LISA mission concept that will allow the Cosmic Visions Program to proceed on schedule either with the US as a minority participant, or independently of the US altogether. We report on the status of these reformulation efforts.
Reformulating the Schrödinger equation as a Shabat-Zakharov system
NASA Astrophysics Data System (ADS)
Boonserm, Petarpa; Visser, Matt
2010-02-01
We reformulate the second-order Schrödinger equation as a set of two coupled first-order differential equations, a so-called "Shabat-Zakharov system" (sometimes called a "Zakharov-Shabat" system). There is considerable flexibility in this approach, and we emphasize the utility of introducing an "auxiliary condition" or "gauge condition" that is used to cut down the degrees of freedom. Using this formalism, we derive the explicit (but formal) general solution to the Schrödinger equation. The general solution depends on three arbitrarily chosen functions, and a path-ordered exponential matrix. If one considers path ordering to be an "elementary" process, then this represents complete quadrature, albeit formal, of the second-order linear ordinary differential equation.
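The reduction itself is short; the following is a minimal sketch under the convention $2m=\hbar=1$ and without the paper's $1/\sqrt{\varphi'}$ normalization, so the coefficients differ in detail from Boonserm and Visser's. Write $\psi(x) = a(x)\,e^{i\varphi(x)} + b(x)\,e^{-i\varphi(x)}$ for an arbitrary "probe" phase $\varphi(x)$, and impose the gauge (auxiliary) condition $a'e^{i\varphi} + b'e^{-i\varphi} = 0$ to fix the redundancy of describing one function by two. Substituting into $\psi'' + \bigl(E - V(x)\bigr)\psi = 0$ then yields the coupled first-order system

\[
a' = \frac{1}{2i\varphi'}\Bigl[\bigl(\varphi'^2 + V - E\bigr)\bigl(a + b\,e^{-2i\varphi}\bigr)
     - i\varphi''\bigl(a - b\,e^{-2i\varphi}\bigr)\Bigr],
\qquad
b' = -\,e^{2i\varphi}\,a'.
\]

Choosing $\varphi' = \sqrt{E - V}$ in classically allowed regions makes $a$ and $b$ slowly varying amplitudes of right- and left-moving waves, which is what makes the formal path-ordered-exponential solution useful.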
Reformulation of Possio's kernel with application to unsteady wind tunnel interference
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1980-01-01
An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high-precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free-air kernel, a more accurate evaluation of their collocation matrices results, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing edge control system in a wind tunnel, compared with that in free air, is given, showing strong interference effects.
The diagnostic status of homosexuality in DSM-III: a reformulation of the issues.
Spitzer, R L
1981-02-01
In 1973 homosexuality per se was removed from the DSM-II classification of mental disorders and replaced by the category Sexual Orientation Disturbance. This represented a compromise between the view that preferential homosexuality is invariably a mental disorder and the view that it is merely a normal sexual variant. While the 1973 DSM-II controversy was highly public, more recently a related but less public controversy involved what became the DSM-III category of Ego-dystonic Homosexuality. The author presents the DSM-III controversy and a reformulation of the issues involved in the diagnostic status of homosexuality. He argues that what is at issue is a value judgment about heterosexuality, rather than a factual dispute about homosexuality.
Searching for Images: The Analysis of Users' Queries for Image Retrieval in American History.
ERIC Educational Resources Information Center
Choi, Youngok; Rasmussen, Edie M.
2003-01-01
Studied users' queries for visual information in American history to identify the image attributes important for retrieval and the characteristics of users' queries for digital images, based on queries from 38 faculty and graduate students. Results of pre- and post-test questionnaires and interviews suggest principal categories of search terms.…
Searching and Filtering Tweets: CSIRO at the TREC 2012 Microblog Track
2012-11-01
stages. We first evaluate the effect of tweet corpus pre-processing in vanilla runs (no query expansion), and then assess the effect of query expansion...Effect of a vanilla run on D4 index (both realtime and non-real-time), and query expansion methods based on the submitted runs for two sets of queries
Knowledge Query Language (KQL)
2016-02-01
Currently, queries for data ...retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store, making...of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions) embedded in
System, method and apparatus for conducting a keyterm search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
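The patent describes the relational and "gleaning" models only abstractly. As a toy stand-in, the sketch below models each database subset by counts of term pairs co-occurring within a small window, builds the same kind of model from the query, and ranks subsets by overlap; the window size, the overlap score, and the sample records are all invented for illustration and are not the patented method.

```python
from collections import Counter

def relational_model(text, window=3):
    """Toy stand-in for a relational model: counts of ordered term pairs
    co-occurring within `window` positions (not the patent's actual model)."""
    terms = text.lower().split()
    pairs = Counter()
    for i, t in enumerate(terms):
        for u in terms[i + 1:i + window]:
            pairs[(t, u)] += 1
    return pairs

def score(query_model, subset_model):
    """Overlap of the query's pair model with a subset's pair model."""
    return sum(min(c, subset_model[p]) for p, c in query_model.items())

# Hypothetical incident-report subsets of a database.
subsets = {
    "rep1": "engine fire warning during climb engine shutdown",
    "rep2": "landing gear would not retract after takeoff",
}
models = {k: relational_model(v) for k, v in subsets.items()}

q = relational_model("engine fire")
ranked = sorted(models, key=lambda k: score(q, models[k]), reverse=True)
print(ranked)
```

The essential shape matches the abstract: models of subsets are precomputed, a model of the query is built at search time, the two are compared, and identifiers of the relevant subsets are output.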
System, method and apparatus for conducting a phrase search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
Targeted exploration and analysis of large cross-platform human transcriptomic compendia
Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.
2016-01-01
We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801
Optimization of the Controlled Evaluation of Closed Relational Queries
NASA Astrophysics Data System (ADS)
Biskup, Joachim; Lochner, Jan-Hendrik; Sonntag, Sebastian
For relational databases, controlled query evaluation is an effective inference control mechanism preserving confidentiality regarding a previously declared confidentiality policy. Implementations of controlled query evaluation usually lack efficiency due to costly theorem prover calls. Suitably constrained controlled query evaluation can be implemented efficiently, but is not flexible enough from the perspective of database users and security administrators. In this paper, we propose an optimized framework for controlled query evaluation in relational databases, being efficiently implementable on the one hand and relaxing the constraints of previous approaches on the other hand.
FlyAtlas: database of gene expression in the tissues of Drosophila melanogaster
Robinson, Scott W.; Herzyk, Pawel; Dow, Julian A. T.; Leader, David P.
2013-01-01
The FlyAtlas resource contains data on the expression of the genes of Drosophila melanogaster in different tissues (currently 25: 17 adult and 8 larval) obtained by hybridization of messenger RNA to Affymetrix Drosophila Genome 2 microarrays. The microarray probe sets cover 13,250 Drosophila genes, detecting 12,533 in an unambiguous manner. The data underlying the original web application (http://flyatlas.org) have been restructured into a relational database and a Java servlet written to provide a new web interface, FlyAtlas 2 (http://flyatlas.gla.ac.uk/), which allows several additional queries. Users can retrieve data for individual genes or for groups of genes belonging to the same or related ontological categories. Assistance in selecting valid search terms is provided by an Ajax ‘autosuggest’ facility that polls the database as the user types. Searches can also focus on particular tissues, and data can be retrieved for the most highly expressed genes, for genes of a particular category with above-average expression or for genes with the greatest difference in expression between the larval and adult stages. A novel facility allows the database to be queried with a specific gene to find other genes with a similar pattern of expression across the different tissues. PMID:23203866
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy the demands of massive, heterogeneous tumor clinical data processing and multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time under standardized management. The system adopts Globus Toolkit 4.0 tools to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). It uses middleware technology to provide a unified access interface for heterogeneous data interaction, optimizing the interactive process with virtualized services so that tumor information resources can be queried and invoked flexibly. For massive amounts of heterogeneous tumor data, a federated-storage, multiple-authorization mode is selected as the security service mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to support querying, sharing and analysis of tumor patient data, and can compare and match resources in a typical clinical database or in the clinical information databases of other service nodes, thus assisting doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment and promote the development of a collaborative tumor diagnosis model.
Mathematical Metaphors: Problem Reformulation and Analysis Strategies
NASA Technical Reports Server (NTRS)
Thompson, David E.
2005-01-01
This paper addresses the critical need for the development of intelligent or assisting software tools for the scientist who is working in the initial problem formulation and mathematical model representation stage of research. In particular, examples of that representation in fluid dynamics and instability theory are discussed. The creation of a mathematical model that is ready for application of certain solution strategies requires extensive symbolic manipulation of the original mathematical model. These manipulations can be as simple as term reordering or as complicated as discovery of various symmetry groups embodied in the equations, whereby Bäcklund-type transformations create new determining equations and integrability conditions or create differential Gröbner bases that are then solved in place of the original nonlinear PDEs. Several examples are presented of the kinds of problem formulations and transforms that can be frequently encountered in model representation for fluids problems. The capability of intelligently automating these types of transforms, available prior to actual mathematical solution, is advocated. Physical meaning and assumption-understanding can then be propagated through the mathematical transformations, allowing for explicit strategy development.
A preliminary model of work during initial examination and treatment planning appointments.
Irwin, J Y; Torres-Urquidy, M H; Schleyer, T; Monaco, V
2009-01-10
Objective: This study's objective was to formally describe the work process for charting and treatment planning in general dental practice to inform the design of a new clinical computing environment. Methods: Using a process called contextual inquiry, researchers observed 23 comprehensive examination and treatment planning sessions during 14 visits to 12 general US dental offices. For each visit, field notes were analysed and reformulated as formalised models. Subsequently, each model type was consolidated across all offices and visits. Interruptions to the workflow, called breakdowns, were identified. Results: Clinical work during dental examination and treatment planning appointments is a highly collaborative activity involving dentists, hygienists and assistants. Personnel with multiple overlapping roles complete complex multi-step tasks supported by a large and varied collection of equipment, artifacts and technology. Most of the breakdowns were related to technology which interrupted the workflow, caused rework and increased the number of steps in work processes. Conclusion: Current dental software could be significantly improved with regard to its support for communication and collaboration, workflow, information design and presentation, information content, and data entry.
Methodology for urban rail and construction technology research and development planning
NASA Technical Reports Server (NTRS)
Rubenstein, L. D.; Land, J. E.; Deshpande, G.; Dayman, B.; Warren, E. H.
1980-01-01
A series of transit system visits, organized by the American Public Transit Association (APTA), was conducted in which the system operators identified the most pressing development needs. These varied by property and were reformulated into a series of potential projects. To assist in the evaluation, a data base useful for estimating the present capital and operating costs of various transit system elements was generated from published data. An evaluation model was developed which considered the rate of deployment of the research and development project, potential benefits, development time and cost. An outline of an evaluation methodology that considered benefits other than capital and operating cost savings was also presented. During the course of the study, five candidate projects were selected for detailed investigation; (1) air comfort systems; (2) solid state auxiliary power conditioners; (3) door systems; (4) escalators; and (5) fare collection systems. Application of the evaluation model to these five examples showed the usefulness of modeling deployment rates and indicated a need to increase the scope of the model to quantitatively consider reliability impacts.
An index-based algorithm for fast on-line query processing of latent semantic analysis
Zhang, Mingxi; Li, Pohan; Wang, Wei
2017-01-01
Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in time cost and cannot respond efficiently to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA, towards efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called partial similarity that is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo-document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes that correspond to non-zero entries in the pseudo-document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that make little contribution to similarity scores. Extensive experiments comparing against LSA demonstrate the efficiency and effectiveness of the proposed algorithm. PMID:28520747
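The partial-index idea maps naturally onto an inverted index whose entries below the threshold θ are dropped at build time, with query scores accumulated only over index nodes matching non-zero query entries. A simplified sketch (the per-dimension weight stands in for the paper's partial similarity; the actual ILSA equations involve the LSA factor matrices):

```python
from collections import defaultdict

def build_partial_index(doc_vectors, theta=0.1):
    """Per dimension, store each document's contribution, skipping
    entries below theta (the paper's pruning threshold)."""
    index = defaultdict(list)
    for doc, vec in doc_vectors.items():
        for dim, w in enumerate(vec):
            if w >= theta:
                index[dim].append((doc, w))
    return index

def ilsa_query(index, query_vec):
    """Accumulate partial similarities over index nodes that correspond
    to non-zero entries of the (pseudo-document) query vector."""
    scores = defaultdict(float)
    for dim, qw in enumerate(query_vec):
        if qw == 0:
            continue
        for doc, w in index.get(dim, []):
            scores[doc] += qw * w
    return sorted(scores, key=scores.get, reverse=True)

docs = {"d1": [0.9, 0.0], "d2": [0.05, 0.8]}
index = build_partial_index(docs)
```

Because `d2`'s weight 0.05 on the first dimension falls below θ, it is pruned from that index node and never checked for queries concentrated there, which is the source of the claimed speedup.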
NCBI GEO: archive for functional genomics data sets—update
Barrett, Tanya; Wilhite, Stephen E.; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F.; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L.; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra
2013-01-01
The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. PMID:23193258
Towards a Relation Extraction Framework for Cyber-Security Concepts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Corinne L; Bridges, Robert A; Huffer, Kelly M
In order to assist security analysts in obtaining information pertaining to their network, such as novel vulnerabilities, exploits, or patches, information retrieval methods tailored to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi-supervised NLP and implement a bootstrapping algorithm for extracting security entities and their relationships from text. The algorithm requires little input data, specifically, a few relations or patterns (heuristics for identifying relations), and incorporates an active learning component which queries the user on the most important decisions to prevent drifting from the desired relations. Preliminary testing on a small corpus shows promising results, obtaining a precision of .82.
Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.
Khennak, Ilyes; Drias, Habiba
2017-02-01
With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of many backgrounds now use Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, and their search queries end up imprecise through the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in the failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded-query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient than the baseline.
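Selecting the best expanded query with a bat-style metaheuristic can be illustrated as a search over subsets of candidate expansion terms. This is a heavily simplified sketch: the loudness and pulse-rate dynamics of the real Bat Algorithm are collapsed into fixed flip probabilities, and the retrieval-quality fitness function (which in the paper comes from MEDLINE relevance) is replaced here by a toy scoring function:

```python
import random

def bat_select(terms, fitness, n_bats=5, iters=30, seed=0):
    """Toy binary bat-style search over subsets of candidate expansion
    terms. Each 'bat' is a bit mask over terms; bats mutate, drift
    toward the current best, and keep a move only if it does not
    reduce fitness. Illustrative, not the paper's exact algorithm."""
    rng = random.Random(seed)
    def subset(mask):
        return [t for t, on in zip(terms, mask) if on]
    bats = [[rng.random() < 0.5 for _ in terms] for _ in range(n_bats)]
    best = max(bats, key=lambda m: fitness(subset(m)))
    for _ in range(iters):
        for i, mask in enumerate(bats):
            # "fly": flip some bits, then drift toward the current best
            cand = [b if rng.random() < 0.7 else (not b) for b in mask]
            cand = [bb if rng.random() < 0.5 else cb
                    for bb, cb in zip(best, cand)]
            if fitness(subset(cand)) >= fitness(subset(mask)):
                bats[i] = cand
        best = max(bats + [best], key=lambda m: fitness(subset(m)))
    return subset(best)

# toy fitness: reward on-topic expansion terms, penalize noise terms
good = {"relapse", "therapy"}
def toy_fitness(expansion):
    return sum(1 if t in good else -1 for t in expansion)

chosen = bat_select(["relapse", "therapy", "mri", "forecast"], toy_fitness)
```

In the paper's setting, `fitness` would be the retrieval effectiveness of the expanded query against the collection, and the mask length lets the expanded-query length emerge empirically.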
2013-01-01
Background: Clinical Intelligence, as a research and engineering discipline, is dedicated to the development of tools for data analysis for the purposes of clinical research, surveillance, and effective health care management. Self-service ad hoc querying of clinical data is one desirable type of functionality. Since most of the data are currently stored in relational or similar form, ad hoc querying is problematic as it requires specialised technical skills and the knowledge of particular data schemas. Results: A possible solution is semantic querying where the user formulates queries in terms of domain ontologies that are much easier to navigate and comprehend than data schemas. In this article, we are exploring the possibility of using SADI Semantic Web services for semantic querying of clinical data. We have developed a prototype of a semantic querying infrastructure for the surveillance of, and research on, hospital-acquired infections. Conclusions: Our results suggest that SADI can support ad-hoc, self-service, semantic queries of relational data in a Clinical Intelligence context. The use of SADI compares favourably with approaches based on declarative semantic mappings from data schemas to ontologies, such as query rewriting and RDFizing by materialisation, because it can easily cope with situations when (i) some computation is required to turn relational data into RDF or OWL, e.g., to implement temporal reasoning, or (ii) integration with external data sources is necessary. PMID:23497556
Executing SPARQL Queries over the Web of Linked Data
NASA Astrophysics Data System (ADS)
Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph
The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
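The discover-while-executing idea can be sketched without the iterator machinery: start from the URIs mentioned in the query, "dereference" each discovered URI, add the returned triples to the queried dataset, and follow new URIs found in those triples. Here a dict lookup stands in for an HTTP GET that returns RDF, and the query is a single triple pattern rather than full SPARQL:

```python
def link_traversal_query(web, pattern, seed_uris):
    """Toy link-traversal query execution over an in-memory 'Web':
    web maps a URI to the triples obtained by dereferencing it.
    Follows every URI discovered in fetched triples, then matches
    the triple pattern (None = wildcard) against the grown dataset."""
    dataset, frontier, seen = set(), list(seed_uris), set()
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        for triple in web.get(uri, []):
            dataset.add(triple)
            for term in triple:  # follow RDF links to new sources
                if term.startswith("http://") and term not in seen:
                    frontier.append(term)
    s, p, o = pattern
    return [t for t in dataset
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

web = {
    "http://a": [("http://a", "knows", "http://b")],
    "http://b": [("http://b", "name", "Bob")],
}
```

The real system interleaves this traversal with pattern evaluation in a pipeline of iterators, which is where the blocking problem caused by HTTP latency (and the proposed iterator extension) arises.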
A Natural Language Interface Concordant with a Knowledge Base.
Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young
2016-01-01
The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multi-predicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively.
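The translate-by-matching-with-rejection step can be sketched directly: collected expressions map one-to-one to formal queries, an incoming question is matched against them, and low-confidence matches are rejected rather than answered. `difflib` string similarity is an assumed stand-in for the paper's matching model, and the SPARQL-like query string is hypothetical:

```python
import difflib

def translate(question, expression_to_query, threshold=0.75):
    """Match the question against the collected natural-language
    expressions (each mapped one-to-one to a formal query). Return the
    mapped formal query, or None (reject) when match confidence falls
    below the threshold."""
    expressions = list(expression_to_query)
    match = difflib.get_close_matches(question, expressions, n=1,
                                      cutoff=threshold)
    return expression_to_query[match[0]] if match else None

# hypothetical expression/query pair for illustration
mapping = {"who directed film X": "SELECT ?d WHERE { <X> :director ?d }"}
```

Rejection is the point: an unanswerable question returns `None` instead of a guessed (and likely unsatisfiable) formal query.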
Saying What You're Looking For: Linguistics Meets Video Search.
Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark
2016-10-01
We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce a score indicating how well a video clip depicts that sentence for each clip in a corpus and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.
Context-Aware Online Commercial Intention Detection
NASA Astrophysics Data System (ADS)
Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng
With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, the queries with commercial intention are usually very short. Recognizing the queries with commercial intention against the common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire and help the advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user, when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random field (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm improves the F1 score by more than 10% over previous commercial intention detection algorithms.
Incremental Query Rewriting with Resolution
NASA Astrophysics Data System (ADS)
Riazanov, Alexandre; Aragão, Marcelo A. T.
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
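The last step of the pipeline, turning each schematic answer into a SQL query evaluated by a conventional RDBMS, can be illustrated with a deliberately simplified schematic-answer shape (a table plus constraint bindings, standing in for a constrained clause). The table and column names are hypothetical:

```python
import sqlite3

def schematic_to_sql(table, columns, conditions):
    """Render one schematic answer as a parameterized SQL query.
    Each schematic answer derived by the reasoner becomes one query
    in the (potentially infinite) rewritten series."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if conditions:
        sql += " WHERE " + " AND ".join(f"{c} = ?" for c in conditions)
    return sql, list(conditions.values())

# toy RDB standing in for a legacy relational database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, role TEXT)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("ann", "analyst"), ("bob", "admin")])

# evaluate one schematic answer; results from the whole series of
# generated queries would be accumulated as answers to the semantic query
sql, params = schematic_to_sql("staff", ["name"], {"role": "analyst"})
rows = conn.execute(sql, params).fetchall()
```

The substance of the method, deriving the schematic answers themselves by resolution over abstractions of the database with constrained clauses, is not shown here.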
Research on presentation and query service of geo-spatial data based on ontology
NASA Astrophysics Data System (ADS)
Li, Hong-wei; Li, Qin-chao; Cai, Chang
2008-10-01
The paper analyzed the deficiencies in the presentation and querying of geo-spatial data in current GIS, and discussed the advantages that ontology offers for the formalization of geo-spatial data and the presentation of semantic granularity. Taking a land-use classification system as an example, it constructed a domain ontology and described it in OWL, and realized grade-level and category presentation of land-use data using the ideas of vertical and horizontal navigation. It then discussed ontology-based query modes for geo-spatial data, including queries based on types and grade levels, queries based on instances and spatial relations, and synthetic queries based on types and instances. These methods enrich the query modes of current GIS and are a useful attempt. The paper points out that the key to ontology-based presentation and querying of spatial data is to construct a domain ontology that correctly reflects geo-concepts and their spatial relations and to realize its precise formal description.
Serrano, Dolores R; Lalatsa, Aikaterini; Dea-Ayuela, M Auxiliadora
2017-07-19
Leishmaniasis is a neglected tropical disease responsible for the ninth largest disease burden in the world, threatening 350 million people, mostly in developing countries. The lack of efficacy, severe adverse effects, long duration, high cost and parenteral administration of the current therapies result in poor patient compliance and emergence of resistance. Leishmaniasis' unmet need for safer, affordable and more effective treatments is only partly addressed by today's global health product pipeline, which focuses on products amenable to rapid clinical development, mainly by reformulating or repurposing existing drugs for new uses. Excipients are necessary for ensuring the stability and bioavailability of currently available antileishmaniasis drugs, which in their majority are poorly soluble or have severe side-effects. Thus, selection of excipients that can ensure bioavailability and safety as well as elicit a synergistic effect against the Leishmania parasites without compromising safety will result in a more efficacious, safe and fast-to-market medicine. We have evaluated the in vitro activity of 30 commercially available generally regarded as safe (GRAS) excipients against different Leishmania spp., their cytotoxicity and their potential use for inclusion in novel formulations. Amongst the tested excipients, the compounds with the highest selectivity index were Eudragit E100 (cationic triblock copolymer of dimethylaminoethyl methacrylate, butyl methacrylate, and methyl methacrylate), CTAB (cetyltrimethylammonium bromide, cationic), lauric acid, Labrasol (non-ionic, caprylocaproyl polyoxyl-8 glycerides) and sodium deoxycholate. An ideal excipient needs to possess an amphiphilic nature with ionic/polar groups and a short or medium fatty-acid chain such as lauric (C12), capric (C10) or caprylic acid (C8). Inclusion of these excipients and identification of the optimal combination of drug and excipients would lead to more effective and safer antileishmanial therapies.
NASA Astrophysics Data System (ADS)
Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.
2014-12-01
The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry and private-sector individuals. The major objectives of this effort include: 1) processing and reformulation of current data to be consistent with ESRI and OpenGIS tools; 2) development of functions to improve analysis capability and produce "on-the-fly" data products, extending these beyond single locations to regional and global scales; 3) updating the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms; 4) interaction with user communities in government and industry to test formats and usage; and 5) development of a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and to provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency, cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.
Datathons and Software to Promote Reproducible Research.
Celi, Leo Anthony; Lokhandwala, Sharukh; Montgomery, Robert; Moses, Christopher; Naumann, Tristan; Pollard, Tom; Spitz, Daniel; Stretch, Robert
2016-08-24
Datathons facilitate collaboration between clinicians, statisticians, and data scientists in order to answer important clinical questions. Previous datathons have resulted in numerous publications of interest to the critical care community and serve as a viable model for interdisciplinary collaboration. We report on open-source software called Chatto that was created by members of our group in the context of the second international Critical Care Datathon, held in September 2015. Datathon participants formed teams to discuss potential research questions and the methods required to address them. They were provided with the Chatto suite of tools to facilitate their teamwork. Each multidisciplinary team spent the next 2 days with clinicians working alongside data scientists to write code, extract and analyze data, and reformulate their queries in real time as needed. All projects were then presented on the last day of the datathon to a panel of judges that consisted of clinicians and scientists. Use of Chatto was particularly effective in the datathon setting, enabling teams to reduce the time spent configuring their research environments to just a few minutes, a process that would normally take hours to days. Chatto continued to serve as a useful research tool after the conclusion of the datathon. This suite of tools fulfills two purposes: (1) facilitation of interdisciplinary teamwork through archiving and version control of datasets, analytical code, and team discussions, and (2) advancement of research reproducibility by functioning postpublication as an online environment in which independent investigators can rerun or modify analyses with relative ease. With the introduction of Chatto, we hope to solve a variety of challenges presented by collaborative data mining projects while improving research reproducibility.
Model-based query language for analyzing clinical processes.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
Nowadays, large databases of clinical process data exist in hospitals. However, these data are rarely used to their full extent. In order to perform queries on hospital processes, one must either choose from predefined queries or develop queries using an MS Excel-type software system, which is not always a trivial task. In this paper we propose a new query language for analyzing clinical processes that is easily comprehensible to non-IT professionals. We develop this language on the basis of a process modeling language which is also described in this paper. Prototypes of both languages have already been verified using real examples from hospitals.
AQBE — QBE Style Queries for Archetyped Data
NASA Astrophysics Data System (ADS)
Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash
Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture for repository services for patients' health records. Such architectures also require query interfaces at multiple levels, including the level of semi-skilled users. In this regard, this study examines a high-level user interface for querying the new form of standardized Electronic Health Records systems. It proposes a step-by-step graphical query interface that allows semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and to increase user friendliness.
StarView: The object oriented design of the ST DADS user interface
NASA Technical Reports Server (NTRS)
Williams, J. D.; Pollizzi, J. A.
1992-01-01
StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS comprises the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the data to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate ad hoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and to merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed; a standard form or table-row format may be used. Navigation capabilities are provided to aid the user in viewing query results. Object-oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model.
This concept allows a single query to be built from one or multiple forms before it is submitted to ST DADS. Supporting this concept is the ad hoc query generator, which allows the user to select and qualify an indeterminate number of attributes from the database. The user does not need any knowledge of how the joins across the various tables are to be resolved. The ad hoc generator calculates the joins automatically and generates the correct SQL query.
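The join-inference step described above can be sketched as a shortest-path search over a foreign-key graph. This is only an illustrative sketch: the tables, columns, and join conditions below are invented and are not the actual ST DADS catalog schema.

```python
from collections import deque

# Hypothetical foreign-key graph: table -> {neighbor table: join condition}.
FK_GRAPH = {
    "observation": {"target": "observation.target_id = target.id",
                    "dataset": "observation.id = dataset.observation_id"},
    "target": {"observation": "observation.target_id = target.id"},
    "dataset": {"observation": "observation.id = dataset.observation_id"},
}

def join_path(start, goal):
    """Breadth-first search for the shortest chain of joins between two tables."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        table, conds = queue.popleft()
        if table == goal:
            return conds
        for nbr, cond in FK_GRAPH[table].items():
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, conds + [(nbr, cond)]))
    return None

def build_query(select_attrs, start, goal, where):
    """Assemble a SELECT statement, inferring the JOIN chain automatically."""
    sql = f"SELECT {', '.join(select_attrs)} FROM {start}"
    for nbr, cond in join_path(start, goal):
        sql += f" JOIN {nbr} ON {cond}"
    return sql + f" WHERE {where}"

print(build_query(["target.name", "dataset.file_id"],
                  "dataset", "target", "target.name LIKE 'NGC%'"))
```

The user only names attributes and a filter; the join chain (here dataset → observation → target) is derived from the schema graph.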
Bowen, Raffick A R; Vu, Chi; Remaley, Alan T; Hortin, Glen L; Csako, Gyorgy
2007-03-01
Besides total triiodothyronine (TT3), total free fatty acid (FFA) concentrations were higher with serum separator tubes (SST) than with Vacuette tubes. The effects of surfactant, rubber stoppers, and separator gel from various tubes on FFA, beta-hydroxybutyrate (beta-HB), and TT3 were investigated with 8 different tube types in blood specimens from apparently healthy volunteers. Compared to Vacuette tubes, serum FFA and TT3 concentrations were significantly higher in SST than in glass tubes. Reformulated SST eliminated the increase in TT3 but not FFA. No significant difference was observed in beta-HB concentration among tube types. Surfactant and rubber stoppers from the different tube types significantly increased TT3 but not FFA and beta-HB concentrations. Agitation of whole blood, but not serum or plasma specimens, with separator gel from SST, reformulated SST, and plasma preparation tubes (PPT), compared to Vacuette tubes, gave higher FFA but not beta-HB levels. Unidentified component(s) of the separator gel in SST, reformulated SST, and PPT tubes cause falsely high FFA concentrations. In contrast to TT3, falsely high FFA results require exposure of whole blood, not serum, to the tube constituent(s). The approach employed here may serve as a model for assessing interference(s) from tube constituents.
Yon, Bethany A; Johnson, Rachel K
2014-03-01
The United States Department of Agriculture's (USDA) new nutrition standards for school meals include sweeping changes that set upper limits on calories served and limit milk offerings to low-fat or fat-free and, if flavored, fat-free only. Milk processors are lowering the calories in flavored milks. Because changes to milk may affect school lunch participation and milk consumption, it is important to know the impact of these modifications. Elementary and middle schools from 17 public school districts that changed from standard flavored milk (160-180 kcal/8 oz) to lower-calorie flavored milk (140-150 kcal/8 oz) between 2008 and 2009 were enrolled. Milk shipment and National School Lunch Program (NSLP) participation rates were collected for 3 time periods over 12 months (before, at the time of, and after reformulation). Linear mixed models were used with adjustments for free/reduced meal eligibility. No changes were seen in shipments of flavored milk or of all milk, including unflavored. NSLP participation rates dropped when lower-calorie flavored milk was first offered but recovered over time. While school children appear to accept lower-calorie flavored milk, further monitoring is warranted, as most of the flavored milks offered were not fat-free as required by the USDA as of fall 2012. © 2014, American School Health Association.
NASA Technical Reports Server (NTRS)
Clark-Ingram, Marceia
2010-01-01
Brominated Flame Retardants (BFRs) are widely used in the manufacture of electrical and electronic components and as additives in formulations for foams, plastics, and rubbers. The United States (US) and the European Union (EU) have increased regulation and monitoring of targeted BFRs, such as Polybrominated Diphenyl Ethers (PBDEs), due to their bioaccumulative effects in humans and animals. In response, manufacturers and vendors of BFR-containing materials are changing flame-retardant additives, sometimes without notifying BFR users. In some instances, Deca-bromodiphenylether (Deca-BDE) and other families of flame retardants are being used as replacements for penta-BDE and octa-BDE. The reformulation of a BFR-containing material typically results in the removal of the targeted PBDE and its replacement with a non-PBDE chemical or a non-targeted PBDE. Many users of PBDE-based materials are concerned that vendors will perform reformulation without informing the end user. Materials performance characteristics such as flammability, adhesion, and tensile strength may be altered by reformulation. Requalification of newly formulated materials may be required, or replacement materials may have to be identified and qualified. The Shuttle Environmental Assurance (SEA) team identified a risk to the Space Shuttle Program associated with the possibility that targeted PBDEs may be replaced without notification. Resultant decreases in flame retardancy, Liquid Oxygen (LOX) compatibility, or material performance could have serious consequences.
Dietary Impact of Adding Potassium Chloride to Foods as a Sodium Reduction Technique.
van Buren, Leo; Dötsch-Klerk, Mariska; Seewi, Gila; Newson, Rachel S
2016-04-21
Potassium chloride is a leading reformulation technology for reducing sodium in food products. As sodium intake globally exceeds guidelines, this technology is beneficial; however, its potential impact on potassium intake is unknown. Therefore, a modeling study was conducted using Dutch National Food Survey data to examine the dietary impact of reformulation (n = 2106). Product-specific sodium criteria, chosen to enable a maximum daily sodium chloride intake of 5 grams/day, were applied to all foods consumed in the survey. The impact of replacing 20%, 50%, and 100% of the sodium chloride in each product with potassium chloride was modeled. At baseline, median potassium intake was 3334 mg/day. The median intake of potassium increased by 453 mg/day under the 20% replacement scenario, by 674 mg/day under the 50% scenario, and by 733 mg/day under the 100% scenario. Reformulation had the largest impact on bread, processed fruit and vegetables, snacks, and processed meat. Replacement of sodium chloride by potassium chloride, particularly in key contributing product groups, would result in better compliance with potassium intake guidelines (3510 mg/day). Moreover, it could be considered safe for the general adult population, as intake remains compliant with EFSA guidelines. Based on the current modeling, potassium chloride presents as a valuable, safe replacer for sodium chloride in food products.
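As a rough cross-check of the replacement arithmetic (this is not the paper's survey-based model, and the 5 g/day figure is used only as a ceiling), the potassium contributed by swapping sodium chloride for equimolar potassium chloride follows directly from the molar masses:

```python
# Back-of-the-envelope sketch: replacing a mass of NaCl with an equimolar
# amount of KCl swaps each sodium ion for a potassium ion.

M_NACL, M_K = 58.44, 39.10  # molar mass of NaCl and atomic mass of K (g/mol)

def added_potassium_mg(nacl_g, replaced_fraction):
    """Potassium (mg) contributed by replacing a fraction of NaCl with KCl."""
    moles_replaced = nacl_g * replaced_fraction / M_NACL
    return moles_replaced * M_K * 1000

# At the 5 g/day NaCl ceiling, a 20% replacement contributes about 0.67 g K:
print(round(added_potassium_mg(5.0, 0.20)))  # 669
```

The paper's median figure for the 20% scenario (453 mg/day) is lower than this ceiling-based estimate because actual sodium consumption in the survey population differs from the 5 g/day criterion product by product.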
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen; Lueth, Christoph
2012-01-01
We motivate and introduce a query language PrQL designed for inspecting machine representations of proofs. PrQL natively supports hiproofs which express proof structure using hierarchical nested labelled trees. The core language presented in this paper is locally structured (first-order), with queries built using recursion and patterns over proof structure and rule names. We define the syntax and semantics of locally structured queries, demonstrate their power, and sketch some implementation experiments.
SIMPLE GREEN® 2013 Reformulation
Technical product bulletin: this surface washing agent used in oil spill cleanups is equally effective in fresh water, estuarine, and marine environments at all temperatures. Spray directly on surface of oil.
Effective Multi-Query Expansions: Collaborative Deep Networks for Robust Landmark Retrieval.
Wang, Yang; Lin, Xuemin; Wu, Lin; Zhang, Wenjie
2017-03-01
Given a query photo issued by a user (q-user), landmark retrieval returns a set of photos whose landmarks are similar to those of the query; existing studies on landmark retrieval focus on exploiting the geometries of landmarks for similarity matches between candidate photos and the query photo. We observe that the same landmark provided by different users over a social media community may convey different geometry information depending on the viewpoints and/or angles, and may subsequently yield very different retrieval results. In fact, dealing with landmarks with low-quality shapes caused by the photography of q-users is often nontrivial and has seldom been studied. In this paper, we propose a novel framework, namely multi-query expansions, to retrieve semantically robust landmarks in two steps. First, we identify the top-k photos regarding the latent topics of a query landmark to construct a multi-query set, so as to remedy its possibly low-quality shape. For this purpose, we significantly extend the techniques of Latent Dirichlet Allocation. Then, motivated by typical collaborative filtering methods, we propose to learn semantically rich, nonlinear, high-level features for landmark photos with a collaborative deep network, trained over the latent factors obtained by matrix factorization of the collaborative user-photo matrix for the multi-query set. The learned deep network is further applied to generate features for all the other photos, meanwhile yielding a compact multi-query set within that feature space. Final ranking scores are then calculated in the high-level feature space between the multi-query set and all other photos, which are ranked to serve as the final ranking list of landmark retrieval.
Extensive experiments are conducted on real-world social media data with both landmark photos together with their user information to show the superior performance over the existing methods, especially our recently proposed multi-query based mid-level pattern representation method [1].
Benchmarking distributed data warehouse solutions for storing genomic variant information
Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.
2017-01-01
Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, their application to large genomic variant databases has not been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management Systems (DBMSs)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we benchmarked the performance of a number of combinations of distributed storage formats and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) was used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet paired with Spark 2 provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where a low-latency response is expected, while still offering decent performance for analytical queries.
In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
CUFID-query: accurate network querying through random walk based network flow estimation.
Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun
2017-12-28
Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. 
Through extensive performance evaluation based on biological networks with known functional modules, we show that CUFID-query outperforms the existing state-of-the-art algorithms in terms of prediction accuracy and biological significance of the predictions.
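The greedy seed-extension step described above can be sketched as follows. This is a simplification of the published algorithm: the toy network is invented, and the sketch omits the CUFID correspondence scores used for seed selection and the personalized-PageRank pruning step, keeping only the conductance-driven greedy growth.

```python
# Toy undirected network as an adjacency dict (invented for illustration).
GRAPH = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d", "f"}, "f": {"e"},
}

def conductance(nodes):
    """cut(S) / min(vol(S), vol(V \\ S)) for a node set S."""
    cut = sum(1 for u in nodes for v in GRAPH[u] if v not in nodes)
    vol = sum(len(GRAPH[u]) for u in nodes)
    vol_rest = sum(len(GRAPH[u]) for u in GRAPH) - vol
    return cut / min(vol, vol_rest) if min(vol, vol_rest) else float("inf")

def extend_seed(seed):
    """Greedily absorb the neighbor whose addition lowers conductance most."""
    nodes = set(seed)
    while True:
        frontier = {v for u in nodes for v in GRAPH[u]} - nodes
        best = min(frontier, key=lambda v: conductance(nodes | {v}), default=None)
        if best is None or conductance(nodes | {best}) >= conductance(nodes):
            return nodes
        nodes.add(best)

print(sorted(extend_seed({"a"})))
```

On this toy graph the seed {a} grows to the tightly connected triangle {a, b, c} and stops, since absorbing d would raise the conductance again.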
Querying graphs in protein-protein interactions networks using feedback vertex set.
Blin, Guillaume; Sikora, Florian; Vialette, Stéphane
2010-01-01
Recent techniques are rapidly increasing our knowledge of interactions between proteins. The interpretation of this new information depends on our ability to retrieve known substructures in the data, the Protein-Protein Interaction (PPI) networks. From an algorithmic point of view, this is a hard task, since it often leads to NP-hard problems. To overcome this difficulty, many authors have provided tools for querying patterns with a restricted topology, i.e., paths or trees, in PPI networks. Such restrictions enable the development of fixed-parameter tractable (FPT) algorithms, which can be practicable for restricted query sizes. Unfortunately, Graph Homomorphism is a W[1]-hard problem, and hence no FPT algorithm can be found when patterns are general graphs. However, Dost et al. gave an algorithm (which is not implemented) to query graphs of bounded treewidth in PPI networks (the treewidth of the query being involved in the time complexity). In this paper, we propose another algorithm for querying graph-shaped patterns, also based on dynamic programming and the color-coding technique. To transform graph queries into trees without loss of information, we use a feedback vertex set coupled with a node duplication mechanism. Hence, our algorithm is FPT for querying graphs with a bounded feedback vertex set size. This gives an alternative to the treewidth parameter, which can be better or worse for a given query. We provide a python implementation which allows us to validate our approach on real data. In particular, we retrieve some human graph-shaped queries in the fly PPI network.
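The color-coding technique the authors build on can be illustrated for the simplest (path-shaped) query: randomly color the target network with k colors, then use dynamic programming over color subsets to look for a "colorful" k-node path, repeating over enough random colorings to make a miss unlikely. The toy PPI network below is invented, and the sketch omits the feedback-vertex-set machinery needed for general graph-shaped queries.

```python
import random

def has_k_path(graph, k, trials=200, seed=0):
    """Color-coding test for a simple path on k distinct nodes."""
    rng = random.Random(seed)
    nodes = list(graph)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in nodes}
        # reachable[v]: color-subset bitmasks of colorful paths ending at v
        reachable = {v: {1 << color[v]} for v in nodes}
        for _ in range(k - 1):
            nxt = {v: set() for v in nodes}
            for u in nodes:
                for mask in reachable[u]:
                    for v in graph[u]:
                        if not mask & (1 << color[v]):
                            nxt[v].add(mask | (1 << color[v]))
            reachable = nxt
        if any(reachable[v] for v in nodes):
            return True  # a path visiting k distinct colors (hence k nodes)
    return False

# Toy undirected "PPI network": a 4-node chain plus a separate 2-node component.
PPI = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"},
       "x": {"y"}, "y": {"x"}}
print(has_k_path(PPI, 4))
```

Each trial succeeds only if the hidden path happens to receive k distinct colors, which is why multiple random colorings are needed; the failure probability drops exponentially with the number of trials.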
Occam's razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2005-01-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Geometric Representations of Condition Queries on Three-Dimensional Vector Fields
NASA Technical Reports Server (NTRS)
Henze, Chris
1999-01-01
Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much of the complexity can be encapsulated in the derived-space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms, and to design new ones, as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.
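A minimal sketch of the idea, with an invented vector field and condition: each sample is mapped into a derived space (here speed and the vertical velocity component), and the query is simply a geometric region of that space.

```python
import math

samples = [  # invented vector-field samples: (x, y, z, vx, vy, vz)
    (0, 0, 0, 0.1, 0.1, 0.1),
    (1, 0, 0, 2.0, 0.0, -0.5),
    (0, 1, 0, 0.0, 1.5, -0.2),
    (0, 0, 1, 0.3, 0.2, 0.4),
]

def derived(s):
    """Map a sample into a 2-D derived space: (speed, vertical component)."""
    x, y, z, vx, vy, vz = s
    return math.sqrt(vx * vx + vy * vy + vz * vz), vz

def condition_query(samples, locus):
    """Return positions whose derived-space coordinates fall in the locus."""
    return [s[:3] for s in samples if locus(*derived(s))]

# The query "fast downward flow" is a half-open box in derived space:
hits = condition_query(samples, lambda speed, vz: speed > 1.0 and vz < 0.0)
print(hits)
```

The condition logic lives entirely in the derived-space mapping and a simple region test, mirroring the paper's point that complex queries reduce to simple geometric partitions.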
Retrieval feedback in MEDLINE.
Srinivasan, P
1996-01-01
OBJECTIVE: To investigate a new approach for query expansion based on retrieval feedback. The first objective in this study was to examine alternative query-expansion methods within the same retrieval-feedback framework. The three alternatives proposed are: expansion on the MeSH query field alone, expansion on the free-text field alone, and expansion on both the MeSH and the free-text fields. The second objective was to gain further understanding of retrieval feedback by examining possible dependencies on relevant documents during the feedback cycle. DESIGN: Comparative study of retrieval effectiveness using the original unexpanded and the alternative expanded user queries on a MEDLINE test collection of 75 queries and 2,334 MEDLINE citations. MEASUREMENTS: Retrieval effectiveness of the original unexpanded and the alternative expanded queries was compared using 11-point-average precision scores (11-AvgP). These are averages of precision scores obtained at 11 standard recall points. RESULTS: All three expansion strategies significantly improved the original queries in terms of retrieval effectiveness. Expansion on MeSH alone was equivalent to expansion on both MeSH and the free-text fields. Expansion on the free-text field alone improved the queries significantly less than did the other two strategies. The second part of the study indicated that retrieval-feedback-based expansion yields significant performance improvements independent of the availability of relevant documents for feedback information. CONCLUSIONS: Retrieval feedback offers a robust procedure for query expansion that is most effective for MEDLINE when applied to the MeSH field. PMID:8653452
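The 11-point average precision measure used in the study can be computed from a ranked list of relevance judgments. This is a generic sketch of the standard definition (interpolated precision at recall 0.0, 0.1, ..., 1.0, averaged), not the authors' evaluation code:

```python
def eleven_point_avgp(ranked_relevance, total_relevant):
    """11-AvgP for one query; ranked_relevance is a list of booleans
    down the ranked result list."""
    hits, points = 0, []  # (recall, precision) pairs at each relevant hit
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            points.append((hits / total_relevant, hits / i))
    interp = []
    for r in [i / 10 for i in range(11)]:
        # interpolated precision: best precision at any recall level >= r
        p = max((prec for rec, prec in points if rec >= r), default=0.0)
        interp.append(p)
    return sum(interp) / 11

# A system retrieving both relevant documents at ranks 1 and 2 scores 1.0:
print(eleven_point_avgp([True, True, False], total_relevant=2))  # 1.0
```

Scores for a set of queries (75 in the study) are then averaged to compare expansion strategies.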
Hwang, Amy S.; Truong, Khai N.; Cameron, Jill I.; Lindqvist, Eva; Nygård, Louise; Mihailidis, Alex
2015-01-01
Ambient assisted living (AAL) aims to help older persons “age-in-place” and manage everyday activities using intelligent and pervasive computing technology. AAL research, however, has yet to explore how AAL might support or collaborate with informal care partners (ICPs), such as relatives and friends, who play important roles in the lives and care of persons with dementia (PwDs). In a multiphase codesign process with six (6) ICPs, we envisioned how AAL could be situated to complement their care. We used our codesigned “caregiver interface” artefacts as triggers to facilitate envisioning of AAL support and unpack the situated, idiosyncratic context within which AAL aims to assist. Our findings suggest that AAL should be designed to support ICPs in fashioning “do-it-yourself” solutions that complement tacitly improvised care strategies and enable them to try, observe, and adapt solutions over time. In this way, an ICP could decide which activities to entrust to AAL support, when (i.e., scheduled or spontaneous) and how a system should provide support (i.e., using personalized prompts based on care experience), and when adaptations to system support are needed (i.e., based on alerting patterns and queried reports). Future longitudinal work employing participatory, design-oriented methods with care dyads is encouraged. PMID:26161410
Reformulation and solution of the master equation for multiple-well chemical reactions.
Georgievskii, Yuri; Miller, James A; Burke, Michael P; Klippenstein, Stephen J
2013-11-21
We consider an alternative formulation of the master equation for complex-forming chemical reactions with multiple wells and bimolecular products. Within this formulation the dynamical phase space consists of only the microscopic populations of the various isomers making up the reactive complex, while the bimolecular reactants and products are treated equally as sources and sinks. This reformulation yields compact expressions for the phenomenological rate coefficients describing all chemical processes, i.e., internal isomerization reactions, bimolecular-to-bimolecular reactions, isomer-to-bimolecular reactions, and bimolecular-to-isomer reactions. The applicability of the detailed balance condition is discussed and confirmed. We also consider the situation where some of the chemical eigenvalues approach the energy relaxation time scale and show how to modify the phenomenological rate coefficients so that they retain their validity.
Effects of Active Listening, Reformulation, and Imitation on Mediator Success: Preliminary Results.
Fischer-Lokou, Jacques; Lamy, Lubomir; Guéguen, Nicolas; Dubarry, Alexandre
2016-06-01
An experiment with 212 students (100 men, 112 women; M age = 18.3 years, SD = 0.9) was carried out to compare the effect of four techniques used by mediators on the number of agreements contracted by negotiators. Under experimental conditions, mediators were asked either to rephrase (reformulate) negotiators' words or to imitate them or to show active listening behavior, or finally, to use a free technique. More agreements were reached in the active listening condition than in both free and rephrase conditions. Furthermore, mediators in the active listening condition were perceived, by the negotiators, as more efficient than mediators using other techniques, although there was no significant difference observed between the active listening and imitation conditions. © The Author(s) 2016.
On the stability of equilibrium for a reformulated foreign trade model of three countries
NASA Astrophysics Data System (ADS)
Dassios, Ioannis K.; Kalogeropoulos, Grigoris
2014-06-01
In this paper, we study the stability of equilibrium for a foreign trade model consisting of three countries. As the gravity equation has proven an excellent tool of analysis, adequately stable over time and space all over the world, we extend the problem to three masses. We use the basic structure of the Heckscher-Ohlin-Samuelson model: national income equals consumption outlays plus investment plus exports minus imports. The proposed reformulation of the problem focuses on two basic concepts: (1) the delay inherent in our economic variables and (2) the interaction effects among the three economies involved. Stability and stabilizability conditions are investigated, while numerical examples provide further insight and better understanding. Finally, a generalization of the gravity equation is obtained for the model.
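For reference, the standard bilateral gravity equation that the paper generalizes to three economies has the form below; the exponents and constant are fitted parameters, and this is the textbook two-country form, not the paper's three-mass generalization:

```latex
% T_{ij}: trade flow from country i to j; Y_i, Y_j: national incomes;
% D_{ij}: distance between the economies; G, a, b, c: fitted constants.
T_{ij} = G \, \frac{Y_i^{\,a} \, Y_j^{\,b}}{D_{ij}^{\,c}}
```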
Almíron-Roig, Eva; Monsivais, Pablo; Jebb, Susan A.; Benjamin Neelon, Sara E.; Griffin, Simon J.; Ogilvie, David B.
2015-01-01
We examined the impact of regulatory action to reduce levels of artificial trans–fatty acids (TFAs) in food. We searched Medline, Embase, ISI Web of Knowledge, and EconLit (January 1980 to December 2012) for studies related to government regulation of food- or diet-related health behaviors from which we extracted the subsample of legislative initiatives to reduce artificial TFAs in food. We screened 38 162 articles and identified 14 studies that examined artificial TFA controls limiting permitted levels or mandating labeling. These measures achieved good compliance, with evidence of appropriate reformulation. Regulations grounded on maximum limits and mandated labeling can lead to reductions in actual and reported TFAs in food and appear to encourage food producers to reformulate their products. PMID:25602897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janes, N.; Ma, L.; Hsu, J.W.
1992-01-01
The Meyer-Overton hypothesis--that anesthesia arises from the nonspecific action of solutes on membrane lipids--is reformulated using colligative thermodynamics. Configurational entropy, the randomness imparted by the solute through the partitioning process, is implicated as the energetic driving force that perturbs cooperative membrane equilibria. A proton NMR partitioning approach based on the anesthetic benzyl alcohol is developed to assess the reformulation. Ring resonances from the partitioned drug are shielded by 0.2 ppm and resolved from the free, aqueous drug. Free alcohol is quantitated in dilute lipid dispersions using an acetate internal standard. Cooperative equilibria in model dipalmitoyl lecithin membranes are examined with changes in temperature and alcohol concentration. The Lβ′ …
Query Expansion Using SNOMED-CT and Weighing Schemes
2014-11-01
For this research, we have used SNOMED-CT along with the UMLS Metathesaurus as our ontology in the medical domain to expand queries. Researchers from the University of the Basque Country discuss their findings on query expansion using external sources, headlined by the Unified Medical Language System (UMLS) …
ERIC Educational Resources Information Center
Chung, EunKyung; Yoon, JungWon
2009-01-01
Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…
Design Recommendations for Query Languages
1980-09-01
Ehrenreich, S.L.; submitted by Stanley M. Halpin, Acting Chief, Human Factors Technical Area; approved by Edgar … respond to queries that it recognizes as faulty. Codd (1974) states that in designing a natural query language, attention must be given to dealing … impaired. Codd (1974) also regarded the user's perception of the database to be of critical importance in properly designing a query language system.
Agent-Based Framework for Discrete Entity Simulations
2006-11-01
Postgres database server for environment queries of neighbors and continuum data. As expected for raw database queries (no database optimizations in...form. Eventually the code was ported to GNU C++ on the same single Intel Pentium 4 CPU running RedHat Linux 9.0 and Postgres database server...Again Postgres was used for environmental queries, and the tool remained relatively slow because of the immense number of queries necessary to assess
Akce, Abdullah; Norton, James J S; Bretl, Timothy
2015-09-01
This paper presents a brain-computer interface for text entry using steady-state visually evoked potentials (SSVEP). Like other SSVEP-based spellers, ours identifies the desired input character by posing questions (or queries) to users through a visual interface. Each query defines a mapping from possible characters to steady-state stimuli. The user responds by attending to one of these stimuli. Unlike other SSVEP-based spellers, ours chooses from a much larger pool of possible queries: on the order of ten thousand instead of ten. The larger query pool allows our speller to adapt more effectively to the inherent structure of what is being typed and to the input performance of the user, both of which make certain queries provide more information than others. In particular, our speller chooses queries from this pool that maximize the amount of information to be received per unit of time, a measure of mutual information that we call information gain rate. To validate our interface, we compared it with two other state-of-the-art SSVEP-based spellers, which were re-implemented to use the same input mechanism. Results showed that our interface, with the larger query pool, allowed users to spell multiple-word texts nearly twice as fast as they could with the compared spellers.
Query construction, entropy, and generalization in neural-network models
NASA Astrophysics Data System (ADS)
Sollich, Peter
1994-05-01
We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and ``noninvertible'' versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
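The entropy (information gain) objective described above can be sketched in a few lines of Python. This is a toy illustration loosely modeled on the high-low game; the hypothesis space, threshold queries, and greedy selection below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def information_gain(hypotheses, query):
    """Expected entropy reduction over equally likely hypotheses when
    asking `query`, a function mapping a hypothesis to an answer."""
    n = len(hypotheses)
    prior = entropy([1 / n] * n)
    # Partition the hypotheses by the answer each would produce.
    buckets = {}
    for h in hypotheses:
        buckets.setdefault(query(h), []).append(h)
    expected_posterior = sum(
        (len(b) / n) * entropy([1 / len(b)] * len(b)) for b in buckets.values()
    )
    return prior - expected_posterior

def best_query(hypotheses, queries):
    """Greedy query construction: pick the query with maximal gain."""
    return max(queries, key=lambda q: information_gain(hypotheses, q))

# High-low style example: the hidden threshold is one of 0..7; a query
# at x asks whether the threshold lies below x.
hypotheses = list(range(8))
queries = [lambda h, x=x: h < x for x in range(1, 8)]
q = best_query(hypotheses, queries)
print(information_gain(hypotheses, q))  # -> 1.0 (the midpoint split gains one bit)
```

The midpoint query halves the hypothesis space, which is why it maximizes gain here; asymmetric splits yield strictly less than one bit.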
Spatial information semantic query based on SPARQL
NASA Astrophysics Data System (ADS)
Xiao, Zhifeng; Huang, Lei; Zhai, Xiaofang
2009-10-01
How can the efficiency of spatial information inquiries be enhanced in today's fast-growing information age? We are rich in geospatial data but poor in up-to-date geospatial information and knowledge that are ready to be accessed by public users. This paper adopts an approach for querying spatial semantics by building a Web Ontology Language (OWL) ontology and introducing the SPARQL Protocol and RDF Query Language (SPARQL) to search spatial semantic relations. It is important to establish spatial semantics that support effective spatial reasoning when performing semantic queries. Compared to earlier keyword-based information retrieval techniques that rely on syntax, we use semantic approaches in our spatial query system. Semantic approaches need to be supported by an ontology, so we use OWL to describe spatial information extracted from the large-scale map of Wuhan. Spatial information expressed by an ontology with formal semantics is available to machines for processing and to people for understanding. The approach is illustrated by a case study using SPARQL to query geospatial ontology instances of Wuhan. The paper shows that using SPARQL to search OWL ontology instances can ensure the results' accuracy and applicability. The results also indicate that constructing a geospatial semantic query system has a positive effect on spatial query formulation and retrieval.
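The pattern-matching core of a SPARQL-style semantic query can be illustrated without any dependencies. Below is a hedged Python sketch: a toy triple store and a single-pattern matcher standing in for an RDF graph and a SPARQL engine. The `geo:` names are invented for the example; a real system would load an OWL ontology into an RDF library and issue actual SPARQL:

```python
# Tiny in-memory triple store; a stand-in for an OWL/RDF graph.
triples = {
    ("YangtzeBridge", "rdf:type", "geo:Bridge"),
    ("YangtzeBridge", "geo:crosses", "YangtzeRiver"),
    ("geo:Bridge", "rdfs:subClassOf", "geo:SpatialFeature"),
}

def match(pattern):
    """Match one triple pattern; terms beginning with '?' are variables
    that bind to the corresponding value of each matching triple."""
    out = []
    for triple in triples:
        binding = {}
        ok = True
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                ok = False
                break
        if ok:
            out.append(binding)
    return out

# Analogue of:
#   SELECT ?f WHERE { ?f rdf:type geo:Bridge . ?f geo:crosses YangtzeRiver }
bridges = {b["?f"] for b in match(("?f", "rdf:type", "geo:Bridge"))}
crossers = {b["?f"] for b in match(("?f", "geo:crosses", "YangtzeRiver"))}
print(bridges & crossers)  # -> {'YangtzeBridge'}
```

Joining the two pattern matches on the shared variable is the essence of a SPARQL basic graph pattern; semantic relations (here `geo:crosses`) are queried directly rather than recovered from keyword syntax.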
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data.
Putri, Fadhilah Kurnia; Song, Giltae; Kwon, Joonho; Rao, Praveen
2017-09-25
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query ( DISPAQ ) which efficiently identifies profitable areas by exploiting the Apache Software Foundation's Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data.
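The Z-Skyline refinement step combines a Z-order (Morton) space-filling curve with standard skyline dominance tests. The following Python sketch is a hedged illustration, not the DISPAQ implementation: the two-dimensional area scores and the descending-Z scan heuristic are assumptions made for the example:

```python
def z_value(x, y, bits=16):
    """Interleave the bits of x and y to get the Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return z

def dominates(a, b):
    """Skyline dominance: a dominates b if it is no worse in every
    dimension and strictly better in at least one (larger is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def z_skyline(points):
    """Scan candidates in descending Z-order; points scanned earlier tend
    to dominate later ones, pruning dominance tests on average."""
    skyline = []
    for p in sorted(points, key=lambda p: z_value(*p), reverse=True):
        if not any(dominates(s, p) for s in skyline):
            skyline.append(p)
    return skyline

# Candidate profitable areas scored by (expected pickups, expected fare).
areas = [(3, 7), (5, 5), (2, 2), (7, 3), (6, 6)]
print(z_skyline(areas))  # -> [(6, 6), (3, 7), (7, 3)]
```

Here (5, 5) and (2, 2) are dominated by (6, 6) and dropped early; the surviving set is the skyline of mutually incomparable candidates.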
Zhou, ZhangBing; Zhao, Deng; Shu, Lei; Tsang, Kim-Fung
2015-01-01
Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications, where multiple-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the likelihood that sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data unlikely to be of interest to forthcoming queries are cached in the head nodes of divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing these two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability. PMID:26131665
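A minimal Python sketch of the popularity-based placement follows, assuming a single sink tier and one aggregate head-node tier. The class name, sliding-window size, and time-slot API are invented for illustration and are not the article's actual design:

```python
from collections import Counter

class TwoTierCache:
    """Sketch of popularity-based cooperative caching: attributes queried
    most often in recent time slots are cached at the sink; the remainder
    fall back to grid-cell head nodes (modelled here as a second dict)."""

    def __init__(self, sink_capacity, window=3):
        self.sink_capacity = sink_capacity
        self.window = window
        self.recent_slots = []   # per-slot query counts (sliding window)
        self.sink = {}           # attribute -> cached reading (tier 1)
        self.heads = {}          # attribute -> cached reading (tier 2)

    def end_slot(self, queries, readings):
        """Close a time slot: update popularity counts over the window,
        then re-place the gathered readings across the two tiers."""
        self.recent_slots.append(Counter(queries))
        self.recent_slots = self.recent_slots[-self.window:]
        popularity = sum(self.recent_slots, Counter())
        ranked = [attr for attr, _ in popularity.most_common()]
        top = ranked[: self.sink_capacity]
        self.sink = {a: readings[a] for a in top if a in readings}
        self.heads = {a: readings[a] for a in ranked[self.sink_capacity:] if a in readings}

    def answer(self, attr):
        """Answer a query from the sink first, then head nodes; None = miss."""
        return self.sink.get(attr, self.heads.get(attr))

cache = TwoTierCache(sink_capacity=1)
cache.end_slot(["temp", "temp", "humidity"], {"temp": 21.5, "humidity": 0.4})
print(cache.answer("temp"), "temp" in cache.sink)  # -> 21.5 True
```

Because "temp" was queried twice in the recent slot, it is promoted to the sink; "humidity" is served from the head-node tier, mirroring the two-tier composition described in the abstract.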
DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data †
Putri, Fadhilah Kurnia; Song, Giltae; Rao, Praveen
2017-01-01
One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ) which efficiently identifies profitable areas by exploiting the Apache Software Foundation’s Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data. PMID:28946679
VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans
NASA Astrophysics Data System (ADS)
Wang, Song; Gupta, Chetan; Mehta, Abhay
There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
NASA Astrophysics Data System (ADS)
Warren, Z.; Shahriar, M. S.; Tripathi, R.; Pati, G. S.
2018-02-01
A repeated query technique has been demonstrated as a new interrogation method in pulsed coherent population trapping for producing single-peaked Ramsey interference with high contrast. This technique enhances the contrast of the central Ramsey fringe by nearly 1.5 times and significantly suppresses the side fringes by using more query pulses (>10) in the pulse cycle. Theoretical models have been developed to simulate Ramsey interference and analyze the characteristics of the Ramsey spectrum produced by the repeated query technique. Experiments have also been carried out employing the repeated query technique in a prototype rubidium clock to study its frequency stability performance.
Nadkarni, P M
1997-08-01
Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements, as well as those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model makes it difficult to implement in its entirety within electronic health record systems that largely employ object-oriented or relational database architectures. The result is a reduction of data richness, limitation of query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model, and subsequently modeling patient data upon the semantic core of SNOMED CT, could exploit the full value of the terminology to enrich and support advanced querying of patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) holding the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as medication data encoded with RxNorm, and complex queries were performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions.
Transitive closure tables and descriptive statistics generated using the graph database were identical to those using validated methods. Patient queries produced patient counts identical to the Oracle RDBMS in comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT was possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies such as SNOMED CT continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed. This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
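The transitive-closure integrity check described in the abstract can be sketched in Python. The IS_A fragment and concept names below are invented for illustration; a real SNOMED CT closure would run over hundreds of thousands of concepts and a polyhierarchy of stated parents:

```python
def transitive_closure(is_a):
    """Compute the full ancestor set of every concept in an IS_A
    hierarchy (a DAG given as child -> set of direct parents). Each
    (concept, ancestor) pair corresponds to one closure-table row."""
    closure = {}

    def ancestors(c):
        if c not in closure:
            closure[c] = set()            # mark visited (assumes a DAG)
            for parent in is_a.get(c, ()):
                closure[c].add(parent)
                closure[c] |= ancestors(parent)
        return closure[c]

    for concept in is_a:
        ancestors(concept)
    return closure

# Toy fragment shaped like a SNOMED CT IS_A polyhierarchy (made-up names).
is_a = {
    "viral_pneumonia": {"pneumonia", "viral_infection"},
    "pneumonia": {"lung_disease"},
    "viral_infection": {"infection"},
    "lung_disease": set(),
    "infection": set(),
}
closure = transitive_closure(is_a)
print(sorted(closure["viral_pneumonia"]))
# -> ['infection', 'lung_disease', 'pneumonia', 'viral_infection']
```

Comparing such a table computed from the graph DB against one built by a validated classifier is exactly the kind of cross-check the study used to confirm the instance's integrity; subsumption queries ("all descendants of lung_disease") then reduce to closure lookups.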
Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths
NASA Astrophysics Data System (ADS)
Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka
2010-11-01
Size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it becomes very difficult for information retrieval tools to fully utilize Web information. As a solution to this problem, XML pages come into play, which provide structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to exhaustive traversal of XML data. In this paper, an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, so the overhead of reconstruction can be minimized. Frequently used paths are extracted by running a sequential pattern mining algorithm over subsequent queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users get more relevant documents in their results.
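A rough Python stand-in for the frequent-path extraction step is shown below. It uses simple prefix counting with a minimum support threshold rather than a full sequential pattern mining algorithm; the path strings and threshold are illustrative only:

```python
from collections import Counter

def frequently_used_paths(query_log, min_support):
    """Count every root-to-node path prefix across logged XML path
    queries and keep those meeting a minimum support; a simplified
    stand-in for the pattern mining step that feeds the index update."""
    counts = Counter()
    for path in query_log:
        steps = path.strip("/").split("/")
        for i in range(1, len(steps) + 1):
            counts["/" + "/".join(steps[:i])] += 1
    return {p for p, c in counts.items() if c >= min_support}

log = ["/library/book/title", "/library/book/author", "/library/journal/title"]
print(sorted(frequently_used_paths(log, min_support=2)))
# -> ['/library', '/library/book']
```

Paths that clear the support threshold would be the ones materialized in the DataGuide-style index, while rarely queried paths are left to be re-counted as the workload shifts, which is what makes the incremental update cheap.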
Systematic review of dietary salt reduction policies: Evidence for an effectiveness hierarchy?
Hyseni, Lirije; Elliot-Green, Alex; Lloyd-Williams, Ffion; Kypridemos, Chris; O’Flaherty, Martin; McGill, Rory; Orton, Lois; Bromley, Helen; Cappuccio, Francesco P.; Capewell, Simon
2017-01-01
Background: Non-communicable disease (NCD) prevention strategies now prioritise four major risk factors: food, tobacco, alcohol and physical activity. Dietary salt intake remains much higher than recommended, increasing blood pressure, cardiovascular disease and stomach cancer. Substantial reductions in salt intake are therefore urgently needed. However, the debate continues about the most effective approaches. To inform future prevention programmes, we systematically reviewed the evidence on the effectiveness of possible salt reduction interventions. We further compared “downstream, agentic” approaches targeting individuals with “upstream, structural” policy-based population strategies.
Methods: We searched six electronic databases (CDSR, CRD, MEDLINE, SCI, SCOPUS and the Campbell Library) using a pre-piloted search strategy focussing on the effectiveness of population interventions to reduce salt intake. Retrieved papers were independently screened, appraised and graded for quality by two researchers. To facilitate comparisons between the interventions, the extracted data were categorised using nine stages along the agentic/structural continuum, from “downstream” dietary counselling (for individuals, worksites or communities), through media campaigns, nutrition labelling, and voluntary and mandatory reformulation, to the most “upstream” regulatory and fiscal interventions, and comprehensive strategies involving multiple components.
Results: After screening 2,526 candidate papers, 70 were included in this systematic review (49 empirical studies and 21 modelling studies). Some papers described several interventions. Quality was variable. Multi-component strategies involving both upstream and downstream interventions generally achieved the biggest reductions in salt consumption across an entire population, most notably 4 g/day in Finland and Japan, 3 g/day in Turkey and 1.3 g/day recently in the UK. Mandatory reformulation alone could achieve a reduction of approximately 1.45 g/day (three separate studies), followed by voluntary reformulation (-0.8 g/day), school interventions (-0.7 g/day), short-term dietary advice (-0.6 g/day) and nutrition labelling (-0.4 g/day), but each with a wide range. Tax and community-based counselling could each typically reduce salt intake by 0.3 g/day, whilst even smaller population benefits were derived from health education media campaigns (-0.1 g/day). Worksite interventions achieved an increase in intake (+0.5 g/day), however with a very wide range. Long-term dietary advice could achieve a -2 g/day reduction under optimal research trial conditions; however, smaller reductions might be anticipated in unselected individuals.
Conclusions: Comprehensive strategies involving multiple components (reformulation, food labelling and media campaigns) and “upstream” population-wide policies such as mandatory reformulation generally appear to achieve larger reductions in population-wide salt consumption than “downstream”, individually focussed interventions. This ‘effectiveness hierarchy’ might deserve greater emphasis in future NCD prevention strategies. PMID:28542317
Feasibility and safety of augmented reality-assisted urological surgery using smartglass.
Borgmann, H; Rodríguez Socarrás, M; Salem, J; Tsaur, I; Gomez Rivas, J; Barret, E; Tortolero, L
2017-06-01
To assess the feasibility, safety and usefulness of augmented reality-assisted urological surgery using smartglass (SG). Seven urological surgeons (3 board urologists and 4 urology residents) performed augmented reality-assisted urological surgery using SG for 10 different types of operations and a total of 31 urological operations. Feasibility was assessed using technical metadata (number of photographs taken/number of videos recorded/video time recorded) and structured interviews with the urologists on their use of SG. Safety was evaluated by recording complications and grading according to the Clavien-Dindo classification. Usefulness of SG for urological surgery was queried in structured interviews and in a survey. The implementation of SG use during urological surgery was feasible with no intrinsic (technical defect) or extrinsic (inability to control the SG function) obstacles being observed. SG use was safe as no grade 3-5 complications occurred for the series of 31 urological surgeries of different complexities. Technical applications of SG included taking photographs/recording videos for teaching and documentation, hands-free teleconsultation, reviewing patients' medical records and images and searching the internet for health information. Overall usefulness of SG for urological surgery was rated as very high by 43 % and high by 29 % of surgeons. Augmented reality-assisted urological surgery using SG is both feasible and safe and also provides several useful functions for urological surgeons. Further developments and investigations are required in the near future to harvest the great potential of this exciting technology for urological surgery.
Autocorrelation and Regularization of Query-Based Information Retrieval Scores
2008-02-01
of the most general information retrieval models [Salton, 1968]. By treating a query as a very short document, documents and queries can be rep… [Salton, 1971]. In the context of single-link hierarchical clustering, Jardine and van Rijsbergen showed that ranking all k clusters and retrieving a… a document about “dogs”, then the system will always miss this document when a user queries “dog”. Salton recognized that a document's representation
Query Log Analysis of an Electronic Health Record Search Engine
Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A.
2011-01-01
We analyzed a longitudinal collection of query logs of a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over a course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers for patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of information needs manifested through the queries, as well as temporal patterns of the users’ information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. Therefore, we envision that there exists a significant challenge, along with significant opportunities, to provide intelligent query recommendations to facilitate information retrieval in EHR. PMID:22195150