Science.gov

Sample records for automatic text summarization

  1. Generalized minimum dominating set and application in automatic text summarization

    NASA Astrophysics Data System (ADS)

    Xu, Yi-Zhi; Zhou, Hai-Jun

    2016-03-01

    For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all edge weights being equal to the threshold value. In the present paper we treat the generalized MDS problem with replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarizes a given input text document. We carry out a preliminary test of this statistical-physics-inspired method on the automatic text summarization problem.
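
    To make the formulation concrete, the sketch below selects summary sentences with a simple greedy heuristic for the generalized MDS objective. The word-overlap edge weights, fixed threshold, and toy sentences are all invented for illustration; the paper itself solves the problem with belief-propagation equations, not this greedy loop.

    ```python
    # Greedy sketch of generalized minimum dominating set (MDS) sentence
    # selection. The paper derives a belief-propagation solver from
    # replica-symmetric spin glass theory; this toy version only
    # illustrates the problem formulation. All names are hypothetical.
    import re

    def edge_weight(s1, s2):
        """Word-overlap weight between two sentences (one simple choice)."""
        w1 = set(re.findall(r"\w+", s1.lower()))
        w2 = set(re.findall(r"\w+", s2.lower()))
        return len(w1 & w2)

    def greedy_generalized_mds(sentences, threshold=2):
        n = len(sentences)
        w = [[edge_weight(sentences[i], sentences[j]) for j in range(n)]
             for i in range(n)]
        chosen = set()

        # A vertex outside the set is "dominated" once the summed weight
        # of its edges into the chosen set reaches the threshold.
        def deficit(i):
            return max(0, threshold - sum(w[i][j] for j in chosen))

        while any(deficit(i) > 0 for i in range(n) if i not in chosen):
            # Pick the sentence that reduces the total remaining deficit most.
            best = max((i for i in range(n) if i not in chosen),
                       key=lambda i: sum(min(deficit(k), w[k][i])
                                         for k in range(n)
                                         if k != i and k not in chosen))
            chosen.add(best)
        return [sentences[i] for i in sorted(chosen)]

    doc = ["Text summarization selects a few sentences that cover a document.",
           "Graph methods treat sentences as vertices with weighted edges.",
           "Edge weights come from word overlap between sentences.",
           "A dominating set of sentences covers every other sentence."]
    print(greedy_generalized_mds(doc, threshold=2))
    ```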

  2. Summarizing Expository Texts

    ERIC Educational Resources Information Center

    Westby, Carol; Culatta, Barbara; Lawrence, Barbara; Hall-Kenyon, Kendra

    2010-01-01

    Purpose: This article reviews the literature on students' developing skills in summarizing expository texts and describes strategies for evaluating students' expository summaries. Evaluation outcomes are presented for a professional development project aimed at helping teachers develop new techniques for teaching summarization. Methods: Strategies…

  3. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    PubMed Central

    Ramanujam, Nedunchelian; Kaliappan, Manivannan

    2016-01-01

    Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they still have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered, chronological structure, which yields a more coherent summary and helps extract the most relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words based on their frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm and the results are compared with those of the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
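
    As a rough illustration of the timestamp idea (not the published system, which scores sentences with a Naïve Bayesian classifier), the sketch below scores sentences by word frequency, keeps the top ones, and orders them by document timestamp and position; all data and helper names are made up.

    ```python
    # Sketch of timestamp-ordered multi-document summarization: score
    # sentences by word frequency, select the top-scoring ones, then
    # order them by (document timestamp, position) so the summary reads
    # chronologically. The frequency scorer is a stand-in for the
    # paper's Naïve Bayesian classifier.
    from collections import Counter

    def summarize(docs, k=3):
        """docs: list of (timestamp, text) pairs; returns k ordered sentences."""
        sents = []
        for ts, text in docs:
            for pos, s in enumerate(text.split(". ")):
                if s:
                    sents.append((ts, pos, s))
        freq = Counter(w.lower() for _, _, s in sents for w in s.split())

        def score(item):
            words = item[2].split()
            return sum(freq[w.lower()] for w in words) / max(len(words), 1)

        top = sorted(sents, key=score, reverse=True)[:k]
        return [s for ts, pos, s in sorted(top)]  # timestamp, then position

    docs = [("2016-01-02", "Floods hit the coast. Rescue teams arrived."),
            ("2016-01-01", "A storm formed at sea. The storm moved toward the coast.")]
    print(summarize(docs, k=2))
    ```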

  4. Using Text Messaging to Summarize Text

    ERIC Educational Resources Information Center

    Williams, Angela Ruffin

    2012-01-01

    Summarizing is an academic task that students are expected to have mastered by the time they enter college. However, experience has revealed quite the contrary. Summarization is often difficult to master as well as teach, but instructors in higher education can benefit greatly from the rapid advancement in mobile wireless technology devices, by…

  5. Figure-Associated Text Summarization and Evaluation

    PubMed Central

    Polepalli Ramesh, Balaji; Sethi, Ricky J.; Yu, Hong

    2015-01-01

    Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines, and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, which automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+. We evaluate the performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving an F1 score of 0.66 and a ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903). PMID:25643357

  6. Figure-associated text summarization and evaluation.

    PubMed

    Polepalli Ramesh, Balaji; Sethi, Ricky J; Yu, Hong

    2015-01-01

    Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines, and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, which automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+. We evaluate the performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving an F1 score of 0.66 and a ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903). PMID:25643357

  7. A Statistical Approach to Automatic Speech Summarization

    NASA Astrophysics Data System (ADS)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
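
    The core extraction step can be pictured as a dynamic program that keeps m words of an utterance while preserving their order and rewarding plausible concatenations. The sketch below is a simplified stand-in: the paper's score also includes linguistic, confidence, and SDCFG-based dependency terms, and the significance and bigram tables here are invented.

    ```python
    # Toy version of the extraction step: choose m words from a
    # transcribed utterance to maximize per-word significance plus a
    # word-concatenation term, using dynamic programming.
    import math

    def summarize_utterance(words, sig, bigram, m):
        """words: tokens; sig[w]: significance; bigram[(a,b)]: log P(b|a)."""
        n = len(words)
        NEG = float("-inf")
        # best[i][j] = best score of a summary of j words ending at word i
        best = [[NEG] * (m + 1) for _ in range(n)]
        back = [[None] * (m + 1) for _ in range(n)]
        for i in range(n):
            best[i][1] = sig.get(words[i], 0.0)
            for j in range(2, m + 1):
                for k in range(i):  # previous kept word; order is preserved
                    if best[k][j - 1] == NEG:
                        continue
                    s = (best[k][j - 1] + sig.get(words[i], 0.0)
                         + bigram.get((words[k], words[i]), math.log(1e-6)))
                    if s > best[i][j]:
                        best[i][j], back[i][j] = s, k
        end = max(range(n), key=lambda i: best[i][m])
        out, i, j = [], end, m
        while i is not None:  # follow back-pointers to recover the summary
            out.append(words[i])
            i, j = back[i][j], j - 1
        return list(reversed(out))

    words = "well the storm uh hit the coast late on monday".split()
    sig = {"storm": 2.0, "hit": 1.5, "coast": 2.0, "monday": 1.2, "late": 0.8}
    bigram = {("storm", "hit"): -0.2, ("hit", "coast"): -0.3,
              ("coast", "late"): -0.5, ("late", "monday"): -0.4}
    print(summarize_utterance(words, sig, bigram, m=4))
    ```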

  8. Task-Driven Dynamic Text Summarization

    ERIC Educational Resources Information Center

    Workman, Terri Elizabeth

    2011-01-01

    The objective of this work is to examine the efficacy of natural language processing (NLP) in summarizing bibliographic text for multiple purposes. Researchers have noted the accelerating growth of bibliographic databases. Information seekers using traditional information retrieval techniques when searching large bibliographic databases are often…

  9. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
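
    A minimal sketch of the three-stage pipeline, with hand-written triples standing in for SemRep output and a simple subject/object match standing in for the paper's relation-level retrieval; all data is invented.

    ```python
    # Sketch: (1) semantic relations per sentence (given by hand here),
    # (2) rank relations by relevance to the query concept, (3) return
    # the sentences that express the top relations as the summary.

    def summarize_concept(query, sent_relations, k=2):
        """sent_relations: list of (sentence, [(subj, pred, obj), ...])."""
        scored = []
        for sent, rels in sent_relations:
            # A relation is relevant if the query concept is its subject
            # or object; sentences carrying more relevant relations win.
            hits = sum(1 for s, p, o in rels if query in (s, o))
            if hits:
                scored.append((hits, sent))
        return [s for _, s in sorted(scored, reverse=True)[:k]]

    data = [
        ("Oseltamivir treats H1N1 infection.",
         [("oseltamivir", "TREATS", "H1N1")]),
        ("H1N1 causes severe respiratory symptoms.",
         [("H1N1", "CAUSES", "respiratory symptoms")]),
        ("The clinic opens at nine.", []),
    ]
    print(summarize_concept("H1N1", data))
    ```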

  10. Summarization Instruction: Effects on Foreign Language Comprehension and Summarization of Expository Texts.

    ERIC Educational Resources Information Center

    Cordero-Ponce, Wanda L.

    2000-01-01

    Reports the effects of metacognitive strategy training in summarization on the ability of foreign language learners to comprehend and summarize expository texts. Notes that the improved summary performance was maintained three weeks after instruction ended. Suggests that explicit instruction in the rules of summarization is an effective tool for…

  11. Information Extraction and Text Summarization Using Linguistic Knowledge Acquisition.

    ERIC Educational Resources Information Center

    Rau, Lisa F.; And Others

    1989-01-01

    Describes SCISOR (System for Conceptual Information Summarization, Organization and Retrieval), a prototype intelligent information retrieval system that extracts useful information from large bodies of text. It overcomes limitations of linguistic coverage by applying a text processing strategy that is tolerant of unknown words and gaps in…

  12. Summarization of Text Document Using Query Dependent Parsing Techniques

    NASA Astrophysics Data System (ADS)

    Rokade, P. P.; Bewoor, Mrunal; Patil, S. H.

    2010-11-01

    The World Wide Web is the largest source of information, and a huge amount of data is present on the Web. There has been a great amount of work on query-independent summarization of documents; however, due to the success of Web search engines, query-specific document summarization (query result snippets) has become an important problem. This paper discusses a method for creating query-specific summaries by identifying the most query-relevant fragments and combining them using the semantic associations within the document. In particular, a structure is first added to the documents in a preprocessing stage, converting them to document graphs. The present research work focuses on an analytical study of different document clustering and summarization techniques; currently, most research is focused on query-independent summarization. The main aim of this work is to combine the two approaches of document clustering and query-dependent summarization. This mainly involves applying different clustering algorithms to a text document, creating a weighted document graph based on the keywords, and traversing the document graph to obtain the summary of the document. The performance of the summaries produced using different clustering techniques will be analyzed and the optimal approach will be suggested.
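
    A toy version of the document-graph idea: sentences become nodes, word overlap provides the weighted associations, and a query-specific summary is grown outward from the most query-relevant node. This only illustrates the approach described above, not the paper's clustering pipeline; all names and data are invented.

    ```python
    # Sketch of query-dependent summarization over a document graph:
    # seed with the fragment most relevant to the query, then attach
    # the outside sentence with the strongest association to the summary.
    import re

    def words(s):
        return set(re.findall(r"[a-z]+", s.lower()))

    def query_summary(sentences, query, size=2):
        size = min(size, len(sentences))
        q = words(query)
        # Seed with the fragment most relevant to the query.
        seed = max(range(len(sentences)),
                   key=lambda i: len(words(sentences[i]) & q))
        summary = {seed}
        while len(summary) < size:
            # Strongest word-overlap association to any chosen sentence.
            best = max((i for i in range(len(sentences)) if i not in summary),
                       key=lambda i: max(len(words(sentences[i]) & words(sentences[j]))
                                         for j in summary))
            summary.add(best)
        return [sentences[i] for i in sorted(summary)]

    doc = ["Web search engines return snippets for each result.",
           "Snippets are short query-specific document summaries.",
           "Caching improves engine latency under heavy load."]
    print(query_summary(doc, "query specific summaries", size=2))
    ```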

  13. Towards an Automatic Forum Summarization to Support Tutoring

    NASA Astrophysics Data System (ADS)

    Carbonaro, Antonella

    The process of summarizing information is becoming increasingly important in light of recent advances in resource creation and distribution and the resulting influx of large amounts of information into everyday life. These advances are also challenging educational institutions to adopt the opportunities of distributed knowledge sharing and communication. Among the most recent trends, the availability of social communication networks, knowledge representation and active learning gives rise to a new landscape of learning as a networked, situated, contextual and life-long activity. In this scenario, new perspectives on learning and teaching processes must be developed and supported, relating learning models, content-based tools, social organization and knowledge sharing.

  14. An Automatic Multimedia Content Summarization System for Video Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh

    2009-01-01

    In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…

  15. Automatic Summarization of MEDLINE Citations for Evidence-Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
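
    Mean average precision, one of the two reported metrics, can be computed as follows; the drug names in the example are made up.

    ```python
    # Mean average precision (MAP): for each disease, average the
    # precision at every rank where a relevant intervention appears,
    # then average over diseases.

    def average_precision(ranked, relevant):
        hits, total = 0, 0.0
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant) if relevant else 0.0

    def mean_average_precision(queries):
        """queries: list of (ranked list, set of relevant items)."""
        return sum(average_precision(r, rel) for r, rel in queries) / len(queries)

    # Toy example: drug rankings for two diseases (names are made up).
    queries = [(["drugA", "drugB", "drugC"], {"drugA", "drugC"}),
               (["drugX", "drugY"], {"drugY"})]
    print(round(mean_average_precision(queries), 3))
    ```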

  16. Studying the correlation between different word sense disambiguation methods and summarization effectiveness in biomedical texts

    PubMed Central

    2011-01-01

    Background Word sense disambiguation (WSD) attempts to solve lexical ambiguities by identifying the correct meaning of a word based on its context. WSD has been demonstrated to be an important step in knowledge-based approaches to automatic summarization. However, the correlation between the accuracy of the WSD methods and the summarization performance has never been studied. Results We present three existing knowledge-based WSD approaches and a graph-based summarizer. Both the WSD approaches and the summarizer employ the Unified Medical Language System (UMLS) Metathesaurus as the knowledge source. We first evaluate WSD directly, by comparing the prediction of the WSD methods to two reference sets: the NLM WSD dataset and the MSH WSD collection. We next apply the different WSD methods as part of the summarizer, to map documents onto concepts in the UMLS Metathesaurus, and evaluate the summaries that are generated. The results obtained by the different methods in both evaluations are studied and compared. Conclusions It has been found that the use of WSD techniques has a positive impact on the results of our graph-based summarizer, and that, when both the WSD and summarization tasks are assessed over large and homogeneous evaluation collections, there exists a correlation between the overall results of the WSD and summarization tasks. Furthermore, the best WSD algorithm in the first task tends to be also the best one in the second. However, we also found that the improvement achieved by the summarizer is not directly correlated with the WSD performance. The most likely reason is that the errors in disambiguation are not equally important but depend on the relative salience of the different concepts in the document to be summarized. PMID:21871110

  17. Text Summarization in the Biomedical Domain: A Systematic Review of Recent Research

    PubMed Central

    Mishra, Rashmi; Bian, Jiantao; Fiszman, Marcelo; Weir, Charlene R.; Jonnalagadda, Siddhartha; Mostafa, Javed; Del Fiol, Guilherme

    2014-01-01

    Objective The amount of information available to clinicians and clinical researchers is growing exponentially. Text summarization reduces information in an attempt to enable users to find and understand relevant source texts more quickly and effortlessly. In recent years, substantial research has been conducted to develop and evaluate various summarization techniques in the biomedical domain. The goal of this study was to systematically review recently published research on summarization of textual documents in the biomedical domain. Materials and methods MEDLINE (2000 to October 2013), the IEEE Digital Library, and the ACM Digital Library were searched. Investigators independently screened and abstracted studies that examined text summarization techniques in the biomedical domain. Information was extracted from selected articles on five dimensions: input, purpose, output, method, and evaluation. Results Of 10,786 studies retrieved, 34 (0.3%) met the inclusion criteria. Natural language processing (17; 50%) and hybrid techniques combining statistical, natural language processing, and machine learning methods (15; 44%) were the most common summarization approaches. Most studies (28; 82%) conducted an intrinsic evaluation. Discussion This is the first systematic review of text summarization in the biomedical domain. The study identified research gaps and provides recommendations for guiding future research on biomedical text summarization. Conclusion Recent research has focused on hybrid techniques combining statistical, language processing, and machine learning techniques. Further research is needed on the application and evaluation of text summarization in real research or patient care settings. PMID:25016293

  18. Science Text Comprehension: Drawing, Main Idea Selection, and Summarizing as Learning Strategies

    ERIC Educational Resources Information Center

    Leopold, Claudia; Leutner, Detlev

    2012-01-01

    The purpose of two experiments was to contrast instructions to generate drawings with two text-focused strategies--main idea selection (Exp. 1) and summarization (Exp. 2)--and to examine whether these strategies could help students learn from a chemistry science text. Both experiments followed a 2 x 2 design, with drawing strategy instructions…

  19. A Comparison of Two Strategies for Teaching Third Graders to Summarize Information Text

    ERIC Educational Resources Information Center

    Dromsky, Ann Marie

    2011-01-01

    Summarizing text is one of the most effective comprehension strategies (National Institute of Child Health and Human Development, 2000) and an effective way to learn from information text (Dole, Duffy, Roehler, & Pearson, 1991; Pressley & Woloshyn, 1995). In addition, much research supports the explicit instruction of such strategies as…

  1. Spatial Text Visualization Using Automatic Typographic Maps.

    PubMed

    Afzal, S; Maciejewski, R; Jang, Yun; Elmqvist, N; Ebert, D S

    2012-12-01

    We present a method for automatically building typographic maps that merge text and spatial data into a visual representation where text alone forms the graphical features. We further show how to use this approach to visualize spatial data such as traffic density, crime rate, or demographic data. The technique accepts a vector representation of a geographic map and spatializes the textual labels in the space onto polylines and polygons based on user-defined visual attributes and constraints. Our sample implementation runs as a Web service, spatializing shape files from the OpenStreetMap project into typographic maps for any region. PMID:26357164

  2. MeSH: a window into full text for document summarization

    PubMed Central

    Bhattacharya, Sanmitra; Ha-Thuc, Viet; Srinivasan, Padmini

    2011-01-01

    Motivation: Previous research in the biomedical text-mining domain has historically been limited to titles, abstracts and metadata available in MEDLINE records. Recent research initiatives such as TREC Genomics and BioCreAtIvE strongly point to the merits of moving beyond abstracts and into the realm of full texts. Full texts are, however, more expensive to process not only in terms of resources needed but also in terms of accuracy. Since full texts contain embellishments that elaborate, contextualize, contrast, supplement, etc., there is greater risk for false positives. Motivated by this, we explore an approach that offers a compromise between the extremes of abstracts and full texts. Specifically, we create reduced versions of full text documents that contain only important portions. In the long-term, our goal is to explore the use of such summaries for functions such as document retrieval and information extraction. Here, we focus on designing summarization strategies. In particular, we explore the use of MeSH terms, manually assigned to documents by trained annotators, as clues to select important text segments from the full text documents. Results: Our experiments confirm the ability of our approach to pick the important text portions. Using the ROUGE measures for evaluation, we were able to achieve maximum ROUGE-1, ROUGE-2 and ROUGE-SU4 F-scores of 0.4150, 0.1435 and 0.1782, respectively, for our MeSH term-based method versus the maximum baseline scores of 0.3815, 0.1353 and 0.1428, respectively. Using a MeSH profile-based strategy, we were able to achieve maximum ROUGE F-scores of 0.4320, 0.1497 and 0.1887, respectively. Human evaluation of the baselines and our proposed strategies further corroborates the ability of our method to select important sentences from the full texts. Contact: sanmitra-bhattacharya@uiowa.edu; padmini-srinivasan@uiowa.edu PMID:21685060
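
    The ROUGE-1 F-score used in this evaluation reduces to unigram precision and recall of a candidate summary against a reference. A minimal sketch follows; the published ROUGE toolkit adds stemming, stopword handling, and multi-reference support.

    ```python
    # ROUGE-1 F-score: clipped unigram overlap between a candidate
    # summary and a reference, combined into an F-score.
    from collections import Counter

    def rouge1_f(candidate, reference):
        cand = Counter(candidate.lower().split())
        ref = Counter(reference.lower().split())
        overlap = sum((cand & ref).values())  # clipped unigram matches
        if not overlap:
            return 0.0
        p = overlap / sum(cand.values())
        r = overlap / sum(ref.values())
        return 2 * p * r / (p + r)

    ref = "mesh terms select important text segments from full text documents"
    cand = "mesh terms pick important segments from full documents"
    print(round(rouge1_f(cand, ref), 4))
    ```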

  3. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  4. Individual Differences in Reading To Summarize Expository Text: Evidence from Eye Fixation Patterns.

    ERIC Educational Resources Information Center

    Hyona, Jukka; Lorch, Robert F., Jr.; Kaakinen, Johanna K.

    2002-01-01

    Eye fixation patterns were used to identify reading strategies of adults as they read multiple-topic expository texts. A clustering technique distinguished four strategies that differed with respect to the ways in which readers processed text. Findings indicated that qualitatively distinct reading strategies are observable among competent, adult…

  5. Stemming Malay Text and Its Application in Automatic Text Categorization

    NASA Astrophysics Data System (ADS)

    Yasukawa, Michiko; Lim, Hui Tian; Yokoo, Hidetoshi

    In the Malay language there are no conjugations or declensions, and affixes have important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb, or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversation, it is essential to use precise words in formal speech or written texts. In Malay, derivative words are used to make sentences clear, and derivation is achieved mainly by the use of affixes. There are approximately a hundred possible derivative forms of a root word in the written language of educated Malay speakers; the composition of Malay words may therefore be complicated. Although several types of stemming algorithms are available for text processing in English and some other languages, they cannot be used to overcome the difficulties of Malay word stemming. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems, and it is essential to avoid both over-stemming and under-stemming errors. We have developed a new Malay stemmer (stemming algorithm) for removing inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The set of rules is aimed at reducing the occurrence of under-stemming errors, while the dictionaries are intended to reduce the occurrence of over-stemming errors. We performed an experiment to evaluate the application of our stemmer in text mining software, using actual web pages collected from the World Wide Web to demonstrate the effectiveness of our Malay stemming algorithm. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
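
    The sketch below illustrates the dictionary-checked affix stripping described above: a stripped form is accepted only if it appears in the root-word dictionary, which guards against over-stemming, while leaving unmatched words intact guards against under-stemming. The affix list and tiny dictionary are illustrative, not the authors' actual rule set.

    ```python
    # Dictionary-checked affix stripping for Malay (illustrative only).

    PREFIXES = ["member", "pember", "meng", "peng", "mem", "pem", "men",
                "pen", "me", "pe", "ber", "ter", "di", "ke", "se"]
    SUFFIXES = ["kan", "an", "i", "lah", "nya"]
    ROOTS = {"ajar", "baca", "main", "makan"}  # root-word dictionary

    def stem(word):
        candidates = [word]
        for p in PREFIXES:
            if word.startswith(p):
                candidates.append(word[len(p):])
        more = []
        for c in candidates:
            for s in SUFFIXES:
                if c.endswith(s):
                    more.append(c[:-len(s)])
        for c in candidates + more:
            if c in ROOTS:      # accept only dictionary-confirmed roots
                return c
        return word             # no safe rule applied: avoid over-stemming

    for w in ["mengajar", "ajaran", "membaca", "makanan"]:
        print(w, "->", stem(w))
    ```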

  6. Effects of Presentation Mode and Computer Familiarity on Summarization of Extended Texts

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2010-01-01

    Comparability studies on computer- and paper-based reading tests have focused on short texts and selected-response items via almost exclusively statistical modeling of test performance. The psychological effects of presentation mode and computer familiarity on individual students are under-researched. In this study, 157 students read extended…

  7. Automatically generating extraction patterns from untagged text

    SciTech Connect

    Riloff, E.

    1996-12-31

    Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUC-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.
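
    A toy rendition of the AutoSlog-TS idea: candidate patterns are generated from documents labeled only as relevant or irrelevant, then ranked by how strongly they correlate with the relevant set. The pattern heuristic and the relevance-rate-times-log-frequency score below are simplifications assumed for illustration.

    ```python
    # Sketch: generate candidate extraction patterns from untagged text
    # (only document-level relevant/irrelevant labels), then rank them.
    import math
    from collections import Counter

    def candidate_patterns(text):
        """Naive stand-in for AutoSlog's syntactic heuristics: every
        verb-like token becomes a '<subj> <verb>' pattern."""
        return {"<subj> " + w for w in text.lower().split() if w.endswith("ed")}

    def rank_patterns(docs):
        """docs: list of (text, is_relevant). Returns patterns, best first."""
        rel, total = Counter(), Counter()
        for text, is_relevant in docs:
            for p in candidate_patterns(text):
                total[p] += 1
                rel[p] += int(is_relevant)
        scored = []
        for p, n in total.items():
            r = rel[p] / n                  # relevance rate
            if r > 0.5:                     # keep domain-correlated patterns
                scored.append((r * math.log2(n + 1), p))
        return [p for _, p in sorted(scored, reverse=True)]

    docs = [("terrorists bombed the embassy", True),
            ("guerrillas attacked the village", True),
            ("rebels bombed a bridge", True),
            ("the committee debated the budget", False)]
    print(rank_patterns(docs))
    ```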

  8. Information fusion for automatic text classification

    SciTech Connect

    Dasigi, V.; Mann, R.C.; Protopopescu, V.A.

    1996-08-01

    Analysis and classification of free text documents encompass decision-making processes that rely on several clues derived from text and other contextual information. When using multiple clues, it is generally not known a priori how these should be integrated into a decision. An algorithmic sensor based on Latent Semantic Indexing (LSI) (a recent successful method for text retrieval rather than classification) is the primary sensor used in our work, but its utility is limited by the reference library of documents. Thus, there is an important need to complement or at least supplement this sensor. We have developed a system that uses a neural network to integrate the LSI-based sensor with other clues derived from the text. This approach allows for systematic fusion of several information sources in order to determine a combined best decision about the category to which a document belongs.
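
    The LSI sensor itself can be sketched with a truncated SVD of the term-document matrix; the neural-network fusion step described above is omitted here, and the documents are invented.

    ```python
    # Minimal LSI sketch: factor the term-document matrix with a
    # truncated SVD and compare documents in the reduced latent space.
    import numpy as np

    docs = ["neural networks classify text",
            "networks integrate text clues",
            "latent semantic indexing retrieves text",
            "semantic indexing approximates the term document matrix"]
    vocab = sorted({w for d in docs for w in d.split()})
    A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                   # keep the top-k latent factors
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in latent space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity of document 2 to every document in the reduced space.
    for i, d in enumerate(docs):
        print(round(float(cosine(doc_vecs[2], doc_vecs[i])), 3), d)
    ```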

  9. Automatic extraction of angiogenesis bioprocess from text

    PubMed Central

    Wang, Xinglong; McKendrick, Iain; Barrett, Ian; Dix, Ian; French, Tim; Tsujii, Jun'ichi; Ananiadou, Sophia

    2011-01-01

    Motivation: Understanding key biological processes (bioprocesses) and their relationships with constituent biological entities and pharmaceutical agents is crucial for drug design and discovery. One way to harvest such information is searching the literature. However, bioprocesses are difficult to capture because they may occur in text in a variety of textual expressions. Moreover, a bioprocess is often composed of a series of bioevents, where a bioevent denotes changes to one or a group of cells involved in the bioprocess. Such bioevents are often used to refer to bioprocesses in text, which current techniques, relying solely on specialized lexicons, struggle to find. Results: This article presents a range of methods for finding bioprocess terms and events. To facilitate the study, we built a gold standard corpus in which terms and events related to angiogenesis, a key biological process of the growth of new blood vessels, were annotated. Statistics of the annotated corpus revealed that over 36% of the text expressions that referred to angiogenesis appeared as events. The proposed methods respectively employed domain-specific vocabularies, a manually annotated corpus and unstructured domain-specific documents. Evaluation results showed that, while a supervised machine-learning model yielded the best precision, recall and F1 scores, the other methods achieved reasonable performance and less cost to develop. Availability: The angiogenesis vocabularies, gold standard corpus, annotation guidelines and software described in this article are available at http://text0.mib.man.ac.uk/~mbassxw2/angiogenesis/ Contact: xinglong.wang@gmail.com PMID:21821664

  10. Text Structuration Leading to an Automatic Summary System: RAFI.

    ERIC Educational Resources Information Center

    Lehman, Abderrafih

    1999-01-01

    Describes the design and construction of Resume Automatique a Fragments Indicateurs (RAFI), a system of automatic text summary which sums up scientific and technical texts. The RAFI system transforms a long source text into several versions of more condensed texts, using discourse analysis, to make searching easier; it could be adapted to the…

  11. Usability evaluation of an experimental text summarization system and three search engines: implications for the reengineering of health care interfaces.

    PubMed Central

    Kushniruk, Andre W.; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L.

    2002-01-01

    This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser, and three conventional search engines: Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems. PMID:12463858

  12. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  13. Profiling School Shooters: Automatic Text-Based Analysis.

    PubMed

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  14. Automatic inpainting scheme for video text detection and removal.

    PubMed

    Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad

    2013-11-01

    We present a two-stage framework for automatic video text removal, which detects and removes embedded video text and fills in the remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects of each frame are analyzed to localize video texts. The detected video text regions are removed, and the video is then restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting the anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processing to satisfy visual consistency. The experimental results demonstrate the effectiveness of both our proposed video text detection approach and the video completion technique, and consequently the entire automatic video text removal and restoration process. PMID:24057006

  15. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

    Digital cameras are gradually replacing traditional flat-bed scanners as the main means of obtaining text information, owing to their usability, low cost and high resolution, and a large amount of research has been done on camera-based text understanding. Unfortunately, an arbitrary position of the camera lens relative to the text area can frequently cause perspective distortion, which most current OCR systems cannot manage, creating demand for automatic text rectification. Current rectification-related research has mainly focused on document images; distortion of natural scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as from their surroundings. In the first step, linear segments are extracted from the region of interest, and J-Linkage-based clustering is performed, followed by customized refinement, to estimate the primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage is performed that inspects the internal structure of characters, involving analysis of pixels and connected components of text lines. Finally, the VPs are verified and used to perform perspective rectification. Experiments demonstrate an increase in recognition rate and an improvement over some related algorithms.

  16. Automatic text extraction in news images using morphology

    NASA Astrophysics Data System (ADS)

    Jang, InYoung; Ko, ByoungChul; Byun, HyeRan; Choi, Yeongwoo

    2002-01-01

    In this paper we present a new method to extract both superimposed and embedded graphical text in a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, we convert a color image into a gray-level image and apply contrast stretching to enhance the contrast of the input image. Then, a modified local adaptive thresholding is applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose + CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose + CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, characteristics of each component, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box, are used. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. Our method also performs well on various kinds of images when the size of the structuring element is adjusted.

  17. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing (LSI) is a method, recently proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful for text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used in testing the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.

  18. Exploring Automaticity in Text Processing: Syntactic Ambiguity as a Test Case

    ERIC Educational Resources Information Center

    Rawson, Katherine A.

    2004-01-01

    A prevalent assumption in text comprehension research is that many aspects of text processing are automatic, with automaticity typically defined in terms of properties (e.g., speed and effort). The present research advocates conceptualization of automaticity in terms of underlying mechanisms and evaluates two such accounts, a…

  19. Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension

    ERIC Educational Resources Information Center

    Rawson, Katherine A.; Middleton, Erica L.

    2009-01-01

    A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the…

  20. Supervised and traditional term weighting methods for automatic text categorization.

    PubMed

    Lan, Man; Tan, Chew Lim; Su, Jian; Lu, Yue

    2009-04-01

    In the vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document can be recognized and classified by a computer or a classifier. Different terms (i.e., words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. Term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, tf.rf, to improve the terms' discriminating power for the text categorization task. In controlled experiments, the supervised term weighting methods show mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, performs consistently better than the other term weighting methods, while the other supervised term weighting methods, based on information theory or statistical metrics, perform the worst in all experiments. On the other hand, the popular tf.idf method does not show uniformly good performance across different data sets. PMID:19229086
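
    A minimal sketch of tf.rf next to classic tf.idf, assuming the relevance-frequency form log2(2 + a/max(1, c)) with a and c counting positive- and negative-class documents containing the term; the toy corpora are invented.

    ```python
    # tf.rf vs tf.idf: terms concentrated in the positive class get
    # boosted by rf, which is the key idea behind the proposed scheme.
    import math
    from collections import Counter

    def doc_freq(docs):
        df = Counter()
        for d in docs:
            df.update(set(d.split()))
        return df

    def tf_rf(doc, pos_docs, neg_docs):
        a, c = doc_freq(pos_docs), doc_freq(neg_docs)
        tf = Counter(doc.split())
        return {t: n * math.log2(2 + a[t] / max(1, c[t])) for t, n in tf.items()}

    def tf_idf(doc, all_docs):
        df, n = doc_freq(all_docs), len(all_docs)
        tf = Counter(doc.split())
        return {t: f * math.log(n / df[t]) for t, f in tf.items() if df[t]}

    pos = ["great match winning goal", "goal scored in the match"]
    neg = ["stock prices fell", "election results announced"]
    print(tf_rf("late goal won the match", pos, neg))
    print(tf_idf("late goal won the match", pos + neg))
    ```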

  1. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  2. Automatic theory generation from analyst text files using coherence networks

    NASA Astrophysics Data System (ADS)

    Shaffer, Steven C.

    2014-05-01

    This paper describes a three-phase process for extracting knowledge from analyst textual reports. Phase 1 involves performing natural language processing on the source text to extract subject-predicate-object triples. In phase 2, these triples are fed into a coherence network analysis process using a genetic algorithm optimization. Finally, the highest-value subnetworks are processed into a semantic network graph for display. Initial work on a well-known data set (a Wikipedia article on Abraham Lincoln) has shown excellent results without any specific tuning. Next, we ran the process on the SYNthetic Counter-INsurgency (SYNCOIN) data set, developed at Penn State, yielding interesting and potentially useful results.

  3. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733

  4. The Effects of Teaching a Text-Structure Based Reading Comprehension Strategy on Struggling Fifth Grade Students' Ability to Summarize and Analyze Written Arguments

    ERIC Educational Resources Information Center

    Haria, Priti; MacArthur, Charles; Santoro, Lana Edwards

    2010-01-01

    The purpose of this research was to examine the effectiveness of teaching fifth grade students with reading difficulties a genre-specific strategy for summarizing and critically analyzing written arguments. In addition, this research explored whether learning this particular reading strategy informed the students' ability to write effective and…

  5. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    ERIC Educational Resources Information Center

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  6. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    PubMed Central

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions, and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text and that in a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are taken to be the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing our proposed method with two other well-known methods of automatic keyword extraction. PMID:26091207
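
    The sketch below implements the spirit of the method with a crude box-counting estimator: compute a fractal dimension for each word's positions, repeat on a shuffled copy, and rank words by the difference. The estimator and toy text are stand-ins for the paper's exact definitions.

    ```python
    # Degree-of-fractality keyword ranking (crude box-counting stand-in).
    import math
    import random
    from collections import defaultdict

    def box_dimension(positions, n):
        """Box-counting dimension of word positions in a text of length n."""
        points, size = [], n
        while size >= 1:
            boxes = len({p // size for p in positions})
            points.append((math.log(n / size + 1), math.log(boxes)))
            size //= 2
        # Least-squares slope of log(#boxes) against log(1/box size).
        xs, ys = zip(*points)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in points)
                / sum((x - mx) ** 2 for x in xs))

    def degree_of_fractality(tokens):
        n = len(tokens)
        occ = defaultdict(list)
        for i, t in enumerate(tokens):
            occ[t].append(i)
        shuffled = tokens[:]
        random.shuffle(shuffled)
        occ_sh = defaultdict(list)
        for i, t in enumerate(shuffled):
            occ_sh[t].append(i)
        return {t: abs(box_dimension(occ[t], n) - box_dimension(occ_sh[t], n))
                for t in occ if len(occ[t]) > 2}

    random.seed(0)  # reproducible shuffle for the demo
    text = ("the cat sat on the mat the cat chased the dog " * 5).split()
    dof = degree_of_fractality(text)
    for w in sorted(dof, key=dof.get, reverse=True):
        print(w, round(dof[w], 3))
    ```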

  7. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for doctors to remove text from medical images. Many papers have been written on algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end users, it should be effective, accurate and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, considering that text has a remarkable contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system was implemented, showing that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.

  8. Interpretable Probabilistic Latent Variable Models for Automatic Annotation of Clinical Text

    PubMed Central

    Kotov, Alexander; Hasan, Mehedi; Carcone, April; Dong, Ming; Naar-King, Sylvie; BroganHartlieb, Kathryn

    2015-01-01

    We propose Latent Class Allocation (LCA) and Discriminative Labeled Latent Dirichlet Allocation (DL-LDA), two novel interpretable probabilistic latent variable models for automatic annotation of clinical text. Both models separate the terms that are highly characteristic of textual fragments annotated with a given set of labels from other non-discriminative terms, but rely on generative processes with different structure of latent variables. LCA directly learns class-specific multinomials, while DL-LDA breaks them down into topics (clusters of semantically related words). Extensive experimental evaluation indicates that the proposed models outperform Naïve Bayes, a standard probabilistic classifier, and Labeled LDA, a state-of-the-art topic model for labeled corpora, on the task of automatic annotation of transcripts of motivational interviews, while the output of the proposed models can be easily interpreted by clinical practitioners. PMID:26958214

  9. Extractive summarization using complex networks and syntactic dependency

    NASA Astrophysics Data System (ADS)

    Amancio, Diego R.; Nunes, Maria G. V.; Oliveira, Osvaldo N.; Costa, Luciano da F.

    2012-02-01

    The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, so far only a few metrics of complex networks have been used and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method the Rouge score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable to produce good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the Rouge score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and treating text in general.
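
    A minimal network-based extractive summarizer in this spirit, using betweenness centrality from networkx as one concrete choice of metric (the paper also studies vulnerability and diversity); the sentences are invented.

    ```python
    # Sentences become nodes, edges link sentences sharing words, and a
    # centrality metric ranks the nodes for extraction.
    import re
    import networkx as nx

    def summarize(text, k=2):
        sents = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
        words = [set(re.findall(r"\w+", s.lower())) for s in sents]
        g = nx.Graph()
        g.add_nodes_from(range(len(sents)))
        for i in range(len(sents)):
            for j in range(i + 1, len(sents)):
                if words[i] & words[j]:
                    g.add_edge(i, j)
        rank = nx.betweenness_centrality(g)
        top = sorted(rank, key=rank.get, reverse=True)[:k]
        return [sents[i] for i in sorted(top)]

    text = ("Complex networks model texts as graphs. Graphs of sentences "
            "support summarization. Summarization selects central sentences. "
            "Central sentences carry the main topics of texts.")
    print(summarize(text, k=2))
    ```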

  10. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach

    PubMed Central

    Ren, Xiang; El-Kishky, Ahmed; Wang, Chi; Han, Jiawei

    2015-01-01

    In today’s computerized and information-based society, we are soaked with vast amounts of text data, ranging from news articles, scientific publications, product reviews, to a wide range of textual information from social media. To unlock the value of these unstructured text data from various domains, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their types (e.g., people, product, food) in a scalable way. We demonstrate on real datasets including news articles and tweets how these typed entities aid in knowledge discovery and management. PMID:26705508

  12. Portable Automatic Text Classification for Adverse Drug Reaction Detection via Multi-corpus Training

    PubMed Central

    Gonzalez, Graciela

    2014-01-01

Objective Automatic detection of Adverse Drug Reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media — where enormous amounts of user-posted data are available, which have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing approaches for generating useful features from text, and utilizing them in optimized machine learning algorithms for automatic classification of ADR assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user-posted internet data; and (iii) to investigate if combining training data from distinct corpora can improve automatic classification accuracies. Methods One of our three data sets contains annotated sentences from clinical reports, and the other two data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracies. Results Our feature-rich classification approach performs significantly better than previously published approaches with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538 and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (improvement of 5.9 units) and 0.704 (improvement of 2.6 units) respectively. Conclusions Our research results indicate that using advanced NLP techniques for generating information-rich features from text can significantly improve classification accuracies over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. PMID:25451103
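
    A minimal sketch of the multi-corpus training idea follows, assuming two tiny invented corpora (one clinical, one social media) pooled into a single TF-IDF bag-of-words training set; the paper's richer semantic features (sentiment, polarity, topics) are omitted.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      # Hypothetical annotated corpora; label 1 = segment asserts an ADR.
      clinical = [("severe nausea after starting drug x", 1),
                  ("patient reports no complaints", 0)]
      social = [("this med gave me awful headaches", 1),
                ("loving my new phone", 0)]

      # Multi-corpus training: simply pool the compatible corpora.
      texts, labels = zip(*(clinical + social))
      vec = TfidfVectorizer(ngram_range=(1, 2))
      clf = LogisticRegression().fit(vec.fit_transform(texts), labels)
      print(clf.predict(vec.transform(["terrible dizziness since the dose"])))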

  13. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text

    PubMed Central

    Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-01-01

Background The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. Objective The primary objective of this study is to explore an alternative approach—using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Methods Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap’s commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. Results From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. Our automated methods flagged almost half of MetaMap’s 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. Conclusions We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text. PMID:26323337

  14. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813
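
    The human-machine modelling step can be sketched with scikit-learn's Support Vector Regression and SciPy's correlation coefficients. The feature matrix below is synthetic stand-in data (58 speakers, six prosodic features plus CFx), not actual Laryngograph measurements.

      import numpy as np
      from scipy.stats import pearsonr, spearmanr
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(58, 7))   # six prosodic features + CFx, 58 speakers
      y = X @ rng.normal(size=7) + rng.normal(scale=0.3, size=58)  # mean ratings

      model = SVR(kernel="rbf").fit(X, y)
      pred = model.predict(X)   # in practice: cross-validate; scoring the
                                # training data inflates the correlations
      print(pearsonr(y, pred)[0], spearmanr(y, pred)[0])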

  15. Automatic identification of ROI in figure images toward improving hybrid (text and image) biomedical document retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Rahman, Md Mahmudur; Govindaraju, Venu; Thoma, George R.

    2011-01-01

Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for the purpose of determining relevance to a clinical situation is quite challenging. An approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward achieving the goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROI) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with identified biomedical concepts in figure captions for improved hybrid (text and image) retrieval of biomedical articles. In this work we present methods that make our previous Markov random field (MRF)-based approach for pointer recognition and ROI extraction more robust. These include use of Active Shape Models (ASM) to overcome problems in recognizing distorted pointer shapes and a region segmentation method for ROI extraction. We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through use of extracted ROIs. Evaluation on three test sets shows 87% accuracy in the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.

  16. The Effects of Two Summarization Strategies Using Expository Text on the Reading Comprehension and Summary Writing of Fourth-and Fifth-Grade Students in an Urban, Title 1 School

    ERIC Educational Resources Information Center

    Braxton, Diane M.

    2009-01-01

    Using a quasi-experimental pretest/post test design, this study examined the effects of two summarization strategies on the reading comprehension and summary writing of fourth- and fifth- grade students in an urban, Title 1 school. The Strategies, "G"enerating "I"nteractions between "S"chemata and "T"ext (GIST) and Rule-based, were taught using…

  17. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports.

    PubMed

    Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available. PMID:26958232

  18. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports

    PubMed Central

    Nguyen, Anthony N.; Moore, Julie; O’Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available. PMID:26958232

  19. Texting

    ERIC Educational Resources Information Center

    Tilley, Carol L.

    2009-01-01

With the increasing ranks of cell phone ownership comes an increase in text messaging, or texting. During 2008, more than 2.5 trillion text messages were sent worldwide--that's an average of more than 400 messages for every person on the planet. Although many of the messages teenagers text each day are perhaps nothing more than "how r u?" or "c u…

  1. A framework and its empirical study of automatic diagnosis of traditional Chinese medicine utilizing raw free-text clinical records.

    PubMed

    Wang, Yaqiang; Yu, Zhonghua; Jiang, Yongguang; Liu, Yongchao; Chen, Li; Liu, Yiguang

    2012-04-01

Automatic diagnosis is one of the most important parts of an expert system for traditional Chinese medicine (TCM), and in recent years it has been studied widely. Most of the previous research is based on well-structured datasets which are manually collected, structured and normalized by TCM experts. However, the results of this earlier work could not be directly and effectively applied to clinical practice, because raw free-text clinical records differ a lot from well-structured datasets: they are unstructured and are written by TCM doctors in their routine diagnostic work, without the support of an authoritative editorial board. Therefore, in this paper, a novel framework for automatic diagnosis of TCM utilizing raw free-text clinical records in clinical practice is proposed and investigated for the first time. A series of appropriate methods are attempted to tackle several challenges in the framework, and the Naïve Bayes classifier and the Support Vector Machine classifier are employed for TCM automatic diagnosis. The framework is analyzed carefully. Its feasibility is validated through evaluating the performance of each module of the framework, and its effectiveness is demonstrated based on the precision, recall and F-Measure of the automatic diagnosis results. PMID:22101128
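
    A minimal sketch of the classification step, assuming a few invented records already reduced to space-separated symptom terms (real records would require Chinese word segmentation and the framework's preprocessing modules):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Invented records: symptom terms extracted from free-text notes.
      records = ["fatigue pale tongue weak pulse",
                 "thirst red face rapid pulse",
                 "fatigue shortness of breath weak pulse",
                 "red tongue rapid pulse thirst"]
      diagnoses = ["qi deficiency", "heat excess",
                   "qi deficiency", "heat excess"]

      model = make_pipeline(CountVectorizer(), MultinomialNB())
      model.fit(records, diagnoses)
      print(model.predict(["weak pulse and fatigue"]))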

  2. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    PubMed

    Brandt, C; Nadkarni, P

    2001-01-01

The Web is increasingly the medium of choice for multi-user application program delivery. Yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience converting a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages. PMID:11084231

  3. Experimenting with Automatic Text-to-Diagram Conversion: A Novel Teaching Aid for the Blind People

    ERIC Educational Resources Information Center

    Mukherjee, Anirban; Garain, Utpal; Biswas, Arindam

    2014-01-01

    Diagram describing texts are integral part of science and engineering subjects including geometry, physics, engineering drawing, etc. In order to understand such text, one, at first, tries to draw or perceive the underlying diagram. For perception of the blind students such diagrams need to be drawn in some non-visual accessible form like tactile…

  5. The Automatic Assessment of Free Text Answers Using a Modified BLEU Algorithm

    ERIC Educational Resources Information Center

    Noorbehbahani, F.; Kardan, A. A.

    2011-01-01

    e-Learning plays an undoubtedly important role in today's education and assessment is one of the most essential parts of any instruction-based learning process. Assessment is a common way to evaluate a student's knowledge regarding the concepts related to learning objectives. In this paper, a new method for assessing the free text answers of…

  6. Semi-Automatic Grading of Students' Answers Written in Free Text

    ERIC Educational Resources Information Center

    Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto

    2011-01-01

    The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional…

  7. BROWSER: An Automatic Indexing On-Line Text Retrieval System. Annual Progress Report.

    ERIC Educational Resources Information Center

    Williams, J. H., Jr.

    The development and testing of the Browsing On-line With Selective Retrieval (BROWSER) text retrieval system allowing a natural language query statement and providing on-line browsing capabilities through an IBM 2260 display terminal is described. The prototype system contains data bases of 25,000 German language patent abstracts, 9,000 English…

  8. Improved chemical text mining of patents with infinite dictionaries and automatic spelling correction.

    PubMed

    Sayle, Roger; Xie, Paul Hongxing; Muresan, Sorel

    2012-01-23

The text mining of patents of pharmaceutical interest poses a number of unique challenges not encountered in other fields of text mining. Unlike fields such as bioinformatics, where the number of terms of interest is enumerable and essentially static, systematic chemical nomenclature can describe an infinite number of molecules. Hence, the dictionary- and ontology-based techniques that are commonly used for gene names, diseases, species, etc., have limited utility when searching for novel therapeutic compounds in patents. Additionally, the length and the composition of IUPAC-like names make them more susceptible to typographic problems: OCR failures, human spelling errors, and hyphenation and line breaking issues. This work describes a novel technique, called CaffeineFix, designed to efficiently identify chemical names in free text, even in the presence of typographical errors. Corrected chemical names are generated as input for name-to-structure software. This forms a preprocessing pass, independent of the name-to-structure software used, and is shown to greatly improve the results of chemical text mining in our study. PMID:22148717
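
    The correction idea can be approximated in a few lines: match a suspect token against a lexicon of name fragments by string similarity. The snippet below uses difflib as a simplified stand-in; CaffeineFix itself uses finite-state dictionaries that cover systematic names no finite word list can enumerate.

      import difflib

      # Tiny illustrative lexicon of name fragments, not the real dictionary.
      lexicon = ["methyl", "ethyl", "propyl", "benzene",
                 "pyridine", "amine", "chloro"]

      def correct_fragment(token, cutoff=0.8):
          # Return the closest lexicon entry, or the token unchanged.
          match = difflib.get_close_matches(token.lower(), lexicon,
                                            n=1, cutoff=cutoff)
          return match[0] if match else token

      print(correct_fragment("benzne"))     # OCR-style deletion -> benzene
      print(correct_fragment("pyrid1ne"))   # digit-for-letter -> pyridine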

  9. ABNER: an open source tool for automatically tagging genes, proteins and other entity names in text.

    PubMed

    Settles, Burr

    2005-07-15

    ABNER (A Biomedical Named Entity Recognizer) is an open source software tool for molecular biology text mining. At its core is a machine learning system using conditional random fields with a variety of orthographic and contextual features. The latest version is 1.5, which has an intuitive graphical interface and includes two modules for tagging entities (e.g. protein and cell line) trained on standard corpora, for which performance is roughly state of the art. It also includes a Java application programming interface allowing users to incorporate ABNER into their own systems and train models on new corpora. PMID:15860559
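
    The kind of orthographic and contextual features such a conditional random field consumes can be sketched as a plain feature-extraction function; the feature names and the example sentence are illustrative, not ABNER's actual feature set.

      def token_features(tokens, i):
          # Orthographic and contextual clues of the kind a CRF tagger uses.
          w = tokens[i]
          return {
              "lower": w.lower(),
              "is_capitalized": w[:1].isupper(),
              "has_digit": any(c.isdigit() for c in w),
              "has_hyphen": "-" in w,
              "suffix3": w[-3:],
              "prev": tokens[i - 1].lower() if i > 0 else "<s>",
              "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
          }

      sentence = "The p53 protein regulates apoptosis".split()
      print(token_features(sentence, 1))   # features for the token 'p53'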

  10. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    PubMed Central

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  11. An automatic system to identify heart disease risk factors in clinical texts over time.

    PubMed

    Chen, Qingcai; Li, Haodi; Tang, Buzhou; Wang, Xiaolong; Liu, Xin; Liu, Zengjian; Liu, Shu; Wang, Weida; Deng, Qiwen; Zhu, Suisong; Chen, Yangxin; Wang, Jingfeng

    2015-12-01

Despite recent progress in prediction and prevention, heart disease remains a leading cause of death. One preliminary step in heart disease prediction and prevention is risk factor identification. Many studies have been proposed to identify risk factors associated with heart disease; however, none have attempted to identify all risk factors. In 2014, the National Center for Informatics for Integrating Biology and the Bedside (i2b2) issued a clinical natural language processing (NLP) challenge that involved a track (track 2) for identifying heart disease risk factors in clinical texts over time. This track aimed to identify medically relevant information related to heart disease risk and track its progression over sets of longitudinal patient medical records. Identification of tags and attributes associated with disease presence and progression, risk factors, and medications in patient medical history were required. Our participation led to the development of a hybrid pipeline system based on both machine learning-based and rule-based approaches. Evaluation using the challenge corpus revealed that our system achieved an F1-score of 92.68%, making it the top-ranked system (without additional annotations) of the 2014 i2b2 clinical NLP challenge. PMID:26362344

  12. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  13. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    NASA Astrophysics Data System (ADS)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of database peaks in its wavelength neighborhood. We assume a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.
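
    A toy version of the peak-vector comparison, with an invented two-element database and the inverse-crowding weight omitted:

      import numpy as np

      # Invented database: element -> {wavelength in nm: relative intensity}.
      db = {"Fe": {371.9: 1.0, 404.6: 0.4},
            "Cu": {324.7: 1.0, 327.4: 0.8}}
      axes = sorted({w for peaks in db.values() for w in peaks})

      def to_vector(peaks, tol=0.2):
          # Project a peak list onto the database wavelength axes.
          v = np.zeros(len(axes))
          for w, intensity in peaks.items():
              for k, ref in enumerate(axes):
                  if abs(w - ref) <= tol:
                      v[k] += intensity
          norm = np.linalg.norm(v)
          return v / norm if norm else v

      sample = to_vector({324.8: 0.9, 327.3: 0.7, 500.0: 0.1})
      for element, peaks in db.items():
          print(element, float(sample @ to_vector(peaks)))  # cosine ranking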

  14. Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums

    PubMed Central

    Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-01-01

Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and, on the basis of the best regression models, determined for each request the probability of belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision of a test sample were calculated as a measure of quality for the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other approaches, 100% precision and 100% recall were realized in 18 (47%) out of the 38 categories in the test sample. For 35 (92%) categories, precision and recall were better than 80%. For some categories, the input variables (ie, “words”) also included variables from other categories, most often with a negative sign. For example, absence of words predictive for “menstruation” was a strong indicator for the category “pregnancy test.” Conclusions Our approach suggests a way of automatically classifying and analyzing unstructured information in Internet expert forums. The technique can perform a preliminary categorization of new requests and help Internet medical experts to better handle the mass of information and to give professional feedback. PMID:19632978
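
    The word-category association measure is standard: Cramer's V derived from a chi-square test on a word-presence-by-category contingency table. A self-contained sketch with made-up counts:

      import numpy as np
      from scipy.stats import chi2_contingency

      def cramers_v(table):
          # Cramer's V from the chi-square statistic of a contingency table.
          chi2 = chi2_contingency(table)[0]
          n = table.sum()
          k = min(table.shape) - 1
          return float(np.sqrt(chi2 / (n * k)))

      # Rows: word present / absent; columns: request in category / not.
      counts = np.array([[40, 10],
                         [60, 890]])
      print(round(cramers_v(counts), 3))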

  15. Large-scale automatic extraction of side effects associated with targeted anticancer drugs from full-text oncological articles.

    PubMed

    Xu, Rong; Wang, QuanQiu

    2015-06-01

Targeted anticancer drugs such as imatinib, trastuzumab and erlotinib dramatically improved treatment outcomes in cancer patients; however, these innovative agents are often associated with unexpected side effects. The pathophysiological mechanisms underlying these side effects are not well understood. The availability of a comprehensive knowledge base of side effects associated with targeted anticancer drugs has the potential to illuminate complex pathways underlying toxicities induced by these innovative drugs. While side effect association knowledge for targeted drugs exists in multiple heterogeneous data sources, published full-text oncological articles represent an important source of pivotal, investigational, and even failed trials in a variety of patient populations. In this study, we present an automatic process to extract targeted anticancer drug-associated side effects (drug-SE pairs) from a large number of high profile full-text oncological articles. We downloaded 13,855 full-text articles from the Journal of Oncology (JCO) published between 1983 and 2013. We developed text classification, relationship extraction, signal filtering, and signal prioritization algorithms to extract drug-SE pairs from downloaded articles. We extracted a total of 26,264 drug-SE pairs with an average precision of 0.405, a recall of 0.899, and an F1 score of 0.465. We show that side effect knowledge from JCO articles is largely complementary to that from the US Food and Drug Administration (FDA) drug labels. Through integrative correlation analysis, we show that targeted drug-associated side effects positively correlate with their gene targets and disease indications. In conclusion, this unique database that we built from a large number of high-profile oncological articles could facilitate the development of computational models to understand toxic effects associated with targeted anticancer drugs. PMID:25817969

  16. Automatic recognition of disorders, findings, pharmaceuticals and body structures from clinical text: an annotation and machine learning study.

    PubMed

    Skeppstedt, Maria; Kvist, Maria; Nilsson, Gunnar H; Dalianis, Hercules

    2014-06-01

    Automatic recognition of clinical entities in the narrative text of health records is useful for constructing applications for documentation of patient care, as well as for secondary usage in the form of medical knowledge extraction. There are a number of named entity recognition studies on English clinical text, but less work has been carried out on clinical text in other languages. This study was performed on Swedish health records, and focused on four entities that are highly relevant for constructing a patient overview and for medical hypothesis generation, namely the entities: Disorder, Finding, Pharmaceutical Drug and Body Structure. The study had two aims: to explore how well named entity recognition methods previously applied to English clinical text perform on similar texts written in Swedish; and to evaluate whether it is meaningful to divide the more general category Medical Problem, which has been used in a number of previous studies, into the two more granular entities, Disorder and Finding. Clinical notes from a Swedish internal medicine emergency unit were annotated for the four selected entity categories, and the inter-annotator agreement between two pairs of annotators was measured, resulting in an average F-score of 0.79 for Disorder, 0.66 for Finding, 0.90 for Pharmaceutical Drug and 0.80 for Body Structure. A subset of the developed corpus was thereafter used for finding suitable features for training a conditional random fields model. Finally, a new model was trained on this subset, using the best features and settings, and its ability to generalise to held-out data was evaluated. This final model obtained an F-score of 0.81 for Disorder, 0.69 for Finding, 0.88 for Pharmaceutical Drug, 0.85 for Body Structure and 0.78 for the combined category Disorder+Finding. The obtained results, which are in line with or slightly lower than those for similar studies on English clinical text, many of them conducted using a larger training data set, show that the approaches used for English are also suitable for Swedish clinical text. However, a small proportion of the errors made by the model are less likely to occur in English text, showing that results might be improved by further tailoring the system to clinical Swedish. The entity recognition results for the individual entities Disorder and Finding show that it is meaningful to separate the general category Medical Problem into these two more granular entity types, e.g. for knowledge mining of co-morbidity relations and disorder-finding relations. PMID:24508177

  17. Text Classification for Automatic Detection of E-Cigarette Use and Use for Smoking Cessation from Twitter: A Feasibility Pilot

    PubMed Central

    Aphinyanaphongs, Yin; Lulejian, Armine; Brown, Duncan Penfold; Bonneau, Richard; Krebs, Paul

    2015-01-01

Rapid increases in e-cigarette use and potential exposure to harmful byproducts have shifted public health focus to e-cigarettes as a possible drug of abuse. Effective surveillance of use and prevalence would allow appropriate regulatory responses. An ideal surveillance system would collect usage data in real time, focus on populations of interest, include populations unable to take the survey, allow a breadth of questions to answer, and enable geo-location analysis. Social media streams may provide this ideal system. To realize this use case, a foundational question is whether we can detect e-cigarette use at all. This work reports two pilot tasks using text classification to automatically identify Tweets that indicate e-cigarette use and/or e-cigarette use for smoking cessation. We build and define both datasets and compare performance of four state-of-the-art classifiers and a keyword search for each task. Our results demonstrate excellent classifier performance of up to 0.90 and 0.94 area under the curve in each category. These promising initial results form the foundation for further studies to realize the ideal surveillance solution. PMID:26776211

  18. TEXT CLASSIFICATION FOR AUTOMATIC DETECTION OF E-CIGARETTE USE AND USE FOR SMOKING CESSATION FROM TWITTER: A FEASIBILITY PILOT.

    PubMed

    Aphinyanaphongs, Yin; Lulejian, Armine; Brown, Duncan Penfold; Bonneau, Richard; Krebs, Paul

    2016-01-01

    Rapid increases in e-cigarette use and potential exposure to harmful byproducts have shifted public health focus to e-cigarettes as a possible drug of abuse. Effective surveillance of use and prevalence would allow appropriate regulatory responses. An ideal surveillance system would collect usage data in real time, focus on populations of interest, include populations unable to take the survey, allow a breadth of questions to answer, and enable geo-location analysis. Social media streams may provide this ideal system. To realize this use case, a foundational question is whether we can detect e-cigarette use at all. This work reports two pilot tasks using text classification to identify automatically Tweets that indicate e-cigarette use and/or e-cigarette use for smoking cessation. We build and define both datasets and compare performance of 4 state of the art classifiers and a keyword search for each task. Our results demonstrate excellent classifier performance of up to 0.90 and 0.94 area under the curve in each category. These promising initial results form the foundation for further studies to realize the ideal surveillance solution. PMID:26776211

  19. QCS: a system for querying, clustering, and summarizing documents.

    SciTech Connect

    Dunlavy, Daniel M.

    2006-08-01

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
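
    Spherical k-means can be approximated by L2-normalizing TF-IDF vectors and running ordinary k-means, since Euclidean distance between unit vectors is monotone in cosine similarity. A toy sketch of the clustering stage (not the QCS implementation):

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.preprocessing import normalize

      docs = ["stock markets fell sharply",
              "shares slid on profit fears",
              "rain flooded the coastal town",
              "storm damage closed several roads"]

      # Unit-length rows make Euclidean k-means behave like cosine k-means.
      X = normalize(TfidfVectorizer().fit_transform(docs))
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(labels)   # two topic clusters: finance vs. weather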

  20. QCS: a system for querying, clustering and summarizing documents.

    SciTech Connect

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O'Leary, Dianne P.; Conroy, John M.

    2006-10-01

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.

  1. Degree centrality for semantic abstraction summarization of therapeutic studies

    PubMed Central

    Zhang, Han; Fiszman, Marcelo; Shin, Dongwook; Miller, Christopher M.; Rosemblat, Graciela; Rindflesch, Thomas C.

    2011-01-01

    Automatic summarization has been proposed to help manage the results of biomedical information retrieval systems. Semantic MEDLINE, for example, summarizes semantic predications representing assertions in MEDLINE citations. Results are presented as a graph which maintains links to the original citations. Graphs summarizing more than 500 citations are hard to read and navigate, however. We exploit graph theory for focusing these large graphs. The method is based on degree centrality, which measures connectedness in a graph. Four categories of clinical concepts related to treatment of disease were identified and presented as a summary of input text. A baseline was created using term frequency of occurrence. The system was evaluated on summaries for treatment of five diseases compared to a reference standard produced manually by two physicians. The results showed that recall for system results was 72%, precision was 73%, and F-score was 0.72. The system F-score was considerably higher than that for the baseline (0.47). PMID:21575741
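
    A schematic of the degree-centrality step, using networkx over a handful of invented subject-relation-object predications rather than real Semantic MEDLINE output:

      import networkx as nx

      # Invented semantic predications (subject, relation, object).
      predications = [("aspirin", "TREATS", "headache"),
                      ("aspirin", "CAUSES", "gastritis"),
                      ("ibuprofen", "TREATS", "headache"),
                      ("migraine", "ISA", "headache")]

      g = nx.Graph()
      for subj, rel, obj in predications:
          g.add_edge(subj, obj, relation=rel)

      # Keep the most connected concepts as the condensed summary graph.
      centrality = nx.degree_centrality(g)
      for concept in sorted(centrality, key=centrality.get, reverse=True)[:2]:
          print(concept, round(centrality[concept], 2))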

  2. User and Device Adaptation in Summarizing Sports Videos

    NASA Astrophysics Data System (ADS)

    Nitta, Naoko; Babaguchi, Noboru

    Video summarization is defined as creating a video summary which includes only important scenes in the original video streams. In order to realize automatic video summarization, the significance of each scene needs to be determined. When targeted especially on broadcast sports videos, a play scene, which corresponds to a play, can be considered as a scene unit. The significance of every play scene can generally be determined based on the importance of the play in the game. Furthermore, the following two issues should be considered: 1) what is important depends on each user's preferences, and 2) the summaries should be tailored for media devices that each user has. Considering the above issues, this paper proposes a unified framework for user and device adaptation in summarizing broadcast sports videos. The proposed framework summarizes sports videos by selecting play scenes based on not only the importance of each play itself but also the users' preferences by using the metadata, which describes the semantic content of videos with keywords, and user profiles, which describe users' preference degrees for the keywords. The selected scenes are then presented in a proper way using various types of media such as video, image, or text according to device profiles which describe the device type. We experimentally verified the effectiveness of user adaptation by examining how the generated summaries are changed by different preference degrees and by comparing our results with/without using user profiles. The validity of device adaptation is also evaluated by conducting questionnaires using PCs and mobile phones as the media devices.
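
    The scene-selection idea reduces to scoring each play by combining its intrinsic importance with the user's preference degrees for its metadata keywords. A minimal sketch with invented keywords and weights:

      def scene_score(keywords, importance, profile):
          # Blend intrinsic play importance with the user's keyword preferences.
          pref = sum(profile.get(k, 0.0) for k in keywords) / max(len(keywords), 1)
          return 0.5 * importance + 0.5 * pref

      profile = {"home run": 1.0, "strikeout": 0.4}   # user preference degrees
      scenes = [(["home run"], 0.9), (["bunt"], 0.3), (["strikeout"], 0.6)]
      ranked = sorted(scenes, key=lambda s: scene_score(*s, profile), reverse=True)
      print([s[0] for s in ranked])   # scenes in summary order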

  3. Summarizing Social Disparities in Health

    PubMed Central

    Asada, Yukiko; Yoshida, Yoko; Whipp, Alyce M

    2013-01-01

    Context Reporting on health disparities is fundamental for meeting the goal of reducing health disparities. One often overlooked challenge is determining the best way to report those disparities associated with multiple attributes such as income, education, sex, and race/ethnicity. This article proposes an analytical approach to summarizing social disparities in health, and we demonstrate its empirical application by comparing the degrees and patterns of health disparities in all fifty states and the District of Columbia (DC). Methods We used the 2009 American Community Survey, and our measure of health was functional limitation. For each state and DC, we calculated the overall disparity and attribute-specific disparities for income, education, sex, and race/ethnicity in functional limitation. Along with the state rankings of these health disparities, we developed health disparity profiles according to the attribute making the largest contribution to overall disparity in each state. Findings Our results show a general lack of consistency in the rankings of overall and attribute-specific disparities in functional limitation across the states. Wyoming has the smallest overall disparity and West Virginia the largest. In each of the four attribute-specific health disparity rankings, however, most of the best- and worst-performing states in regard to overall health disparity are not consistently good or bad. Our analysis suggests the following three disparity profiles across states: (1) the largest contribution from race/ethnicity (thirty-four states), (2) roughly equal contributions of race/ethnicity and socioeconomic factor(s) (ten states), and (3) the largest contribution from socioeconomic factor(s) (seven states). Conclusions Our proposed approach offers policy-relevant health disparity information in a comparable and interpretable manner, and currently publicly available data support its application. We hope this approach will spark discussion regarding how best to systematically track health disparities across communities or within a community over time in relation to the health disparity goal of Healthy People 2020. PMID:23488710

  4. Summarize to Get the Gist

    ERIC Educational Resources Information Center

    Collins, John

    2012-01-01

    As schools prepare for the common core state standards in literacy, they'll be confronted with two challenges: first, helping students comprehend complex texts, and, second, training students to write arguments supported by factual evidence. A teacher's response to these challenges might be to lead class discussions about complex reading or assign…

  5. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or education value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344

  6. Combining automatic table classification and relationship extraction in extracting anticancer drug-side effect pairs from full-text articles.

    PubMed

    Xu, Rong; Wang, QuanQiu

    2015-02-01

Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug-side effect (drug-SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach that combines table classification and relationship extraction to extract drug-SE pairs from a large number of high-profile oncological full-text articles. The data consist of 31,255 tables downloaded from the Journal of Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and -unrelated categories. We then extracted drug-SE pairs from SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anticancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables into SE-related and -unrelated (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug-SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug-SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables have not been included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications. PMID:25445920

  7. Macroprocesses and Microprocesses in the Development of Summarization Skill.

    ERIC Educational Resources Information Center

    Kintsch, Eileen

    A study investigated how students' mental representation of an expository text and the inferences they used in summarizing varied as a function of text difficulty and of differences in the task. Subjects, 96 college students and students from grades 6 and 10, wrote summaries of expository texts and answered orally several probe questions about the…

  8. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is carried out based on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by means of impulse onset detection. Swing and applause sounds together form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games, indicating that the proposed system is effective and computationally efficient enough to be deployed on embedded consumer electronic devices.
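
    The onset-detection step can be approximated with a short-time energy detector: frames whose energy jumps far above the average are flagged as impulsive events. The signal, window length, and threshold below are invented for illustration, not the paper's detector.

      import numpy as np

      def detect_onsets(signal, rate, win=0.05, threshold=4.0):
          # Flag frames whose short-time energy far exceeds the mean energy.
          n = int(win * rate)
          frames = signal[:len(signal) // n * n].reshape(-1, n)
          energy = (frames ** 2).mean(axis=1)
          return np.where(energy > threshold * energy.mean())[0] * win

      rate = 16000
      t = np.linspace(0, 2, 2 * rate, endpoint=False)
      sig = 0.01 * np.sin(2 * np.pi * 440 * t)   # quiet background tone
      sig[rate:rate + 800] += 0.8                # impulsive "swing" at t = 1 s
      print(detect_onsets(sig, rate))            # onset time near 1.0 s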

  9. On the Application of Generic Summarization Algorithms to Music

    NASA Astrophysics Data System (ADS)

    Raposo, Francisco; Ribeiro, Ricardo; de Matos, David Martins

    2015-01-01

    Several generic summarization algorithms were developed in the past and successfully applied in fields such as text and speech summarization. In this paper, we review and apply these algorithms to music. To evaluate this summarization's performance, we adopt an extrinsic approach: we compare a Fado Genre Classifier's performance using truncated contiguous clips against the summaries extracted with those algorithms on 2 different datasets. We show that Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA) all improve classification performance in both datasets used for testing.
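
    MMR is compact enough to sketch directly: it greedily selects the segment most similar to the query (for music, the piece as a whole) while penalizing similarity to segments already chosen. The similarity values below are invented.

      def mmr(relevance, similarity, k, lam=0.7):
          # Greedy Maximal Marginal Relevance: relevance minus redundancy.
          selected = []
          candidates = list(range(len(relevance)))
          while candidates and len(selected) < k:
              def score(i):
                  redundancy = max((similarity[i][j] for j in selected),
                                   default=0.0)
                  return lam * relevance[i] - (1 - lam) * redundancy
              best = max(candidates, key=score)
              selected.append(best)
              candidates.remove(best)
          return selected

      relevance = [0.9, 0.85, 0.3, 0.6]        # similarity to the whole piece
      similarity = [[1.0, 0.95, 0.1, 0.2],     # pairwise segment similarity
                    [0.95, 1.0, 0.1, 0.2],
                    [0.1, 0.1, 1.0, 0.3],
                    [0.2, 0.2, 0.3, 1.0]]
      print(mmr(relevance, similarity, k=2))   # [0, 3]: 1 near-duplicates 0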

  10. MPEG content summarization based on compressed domain feature analysis

    NASA Astrophysics Data System (ADS)

    Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa

    2003-11-01

This paper addresses automatic summarization of MPEG audiovisual content in the compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes MPEG-1/-2 content in the form of a digest or a highlight. The former is a shortened version of the original, while the latter is an aggregation of important or interesting events. In our proposal, first, the incoming MPEG stream is segmented into shots and the above features are derived from each shot. Then the features are adaptively evaluated in an integrated manner, and finally the qualified shots are aggregated into a summary. Since all the processes are performed completely in the compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation proves that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description such as summary and audiovisual segments in the course of summarization.

  11. Summarization of Multiple Documents with Rhetorical Annotation

    NASA Astrophysics Data System (ADS)

    Aya, Sohei; Matsuo, Yutaka; Okazaki, Naoaki; Hasida, Kôiti; Ishizuka, Mitsuru

In this paper, we propose a new summarization algorithm that targets a new kind of structured content. The structured content, which is to be created by semantic authoring, consists of sentences and the rhetorical relations among them: it is represented by a graph, where a node is a sentence and an edge is a rhetorical relation. We simulate the creation of this content graph using newspaper articles annotated with rhetorical relations in the GDA tagset. Our summarization method basically uses spreading activation over the content graph, followed by particular postprocesses to increase the readability of the resultant summary. Experimental evaluation shows our method is at least equal to or better than the Lead method for summarizing newspaper articles.
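
    The core ranking step can be sketched as a synchronous spreading-activation loop over an adjacency list; the decay constant, iteration count, and graph are illustrative, and the readability postprocessing is omitted.

      def spread_activation(graph, seeds, decay=0.5, rounds=3):
          # Propagate importance from seed sentences along rhetorical links.
          activation = {node: 0.0 for node in graph}
          for s in seeds:
              activation[s] = 1.0
          for _ in range(rounds):
              incoming = dict(activation)
              for node, neighbours in graph.items():
                  for nb in neighbours:
                      incoming[nb] += decay * activation[node] / len(neighbours)
              activation = incoming
          return activation

      # Sentence ids linked by rhetorical relations (adjacency list).
      graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
      print(spread_activation(graph, seeds=[0]))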

  12. Tracking Visible Targets Automatically

    NASA Technical Reports Server (NTRS)

    Armstrong, R. W.

    1984-01-01

    Report summarizes techniques for automatic pointing of scientific instruments by reference to visible targets. Applications foreseen in industrial robotics. Measurement done by image analysis based on gradient edge location, image-centroid location and/or outline matching.

  13. Dynamic video summarization of home video

    NASA Astrophysics Data System (ADS)

    Lienhart, Rainer W.

    1999-12-01

    An increasing number of people own and use camcorders to make videos that capture their experiences and document their lives. These videos easily add up to many hours of material. Oddly, most of them are put into a storage box and never touched or watched again. The reasons for this are manifold. Firstly, the raw video material is unedited, and is therefore long-winded and lacking visually appealing effects. Video editing would help, but it is still too time-consuming; people rarely find the time to do it. Secondly, watching the same tape more than a few times can be boring, since the video lacks any variation or surprise during playback. Automatic video abstracting algorithms can provide a method for processing videos so that users will want to play the material more often. However, existing automatic abstracting algorithms have been designed for feature films, newscasts or documentaries, and thus are inappropriate for home video material and raw video footage in general. In this paper, we present new algorithms for generating amusing, visually appealing and variable video abstracts of home video material automatically. They make use of a new, empirically motivated approach, also presented in the paper, to cluster time-stamped shots hierarchically into meaningful units. Last but not least, we propose a simple and natural extension of the way people acquire video - so-called on-the-fly annotations - which will allow a completely new set of applications on raw video footage as well as enable better and more selective automatic video abstracts. Moreover, our algorithms are not restricted to home video but can also be applied to raw video footage in general.
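
    The paper's empirically motivated clustering is not reproduced in this record, but the flavor of grouping time-stamped shots hierarchically can be sketched as follows; the two gap thresholds are invented placeholders, not the authors' values.

        from datetime import timedelta

        def cluster_shots(shot_times, event_gap=timedelta(hours=12),
                          scene_gap=timedelta(minutes=30)):
            # shot_times: sorted recording datetimes, one per shot.
            def split(times, gap):
                groups, current = [], [times[0]]
                for prev, cur in zip(times, times[1:]):
                    if cur - prev > gap:
                        groups.append(current)
                        current = []
                    current.append(cur)
                groups.append(current)
                return groups
            events = split(shot_times, event_gap)                       # coarse level
            scenes = [s for e in events for s in split(e, scene_gap)]   # finer level
            return events, scenes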

  14. 29 CFR 779.313 - Requirements summarized.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RETAILERS OF GOODS OR SERVICES Exemptions for Certain Retail or Service Establishments Statutory Meaning of Retail Or Service Establishment § 779.313 Requirements summarized. The statutory definition of the term “retail or service establishment” found in section 13(a)(2), clearly provides that an establishment to...

  15. REGIONAL AIR POLLUTION STUDY, EMISSION INVENTORY SUMMARIZATION

    EPA Science Inventory

    As part of the Regional Air Pollution Study (RAPS), data for an air pollution emission inventory are summarized for point and area sources in the St. Louis Air Quality Control Region. Data for point sources were collected for criteria and noncriteria pollutants, hydrocarbons, sul...

  16. HARVEST, a longitudinal patient record summarizer

    PubMed Central

    Hirsch, Jamie S; Tanenbaum, Jessica S; Lipsky Gorman, Sharon; Liu, Connie; Schmitz, Eric; Hashorva, Dritan; Ervits, Artem; Vawdrey, David; Sturm, Marc; Elhadad, Noémie

    2015-01-01

    Objective To describe HARVEST, a novel point-of-care patient summarization and visualization tool, and to conduct a formative evaluation study to assess its effectiveness and gather feedback for iterative improvements. Materials and methods HARVEST is a problem-based, interactive, temporal visualization of longitudinal patient records. Using scalable, distributed natural language processing and problem salience computation, the system extracts content from patient notes and aggregates and presents information from multiple care settings. Clinical usability was assessed with physician participants using a timed, task-based chart review and questionnaire, with performance differences recorded between conditions (standard data review system and HARVEST). Results HARVEST displays patient information longitudinally using a timeline, a problem cloud extracted from notes, and focused access to clinical documentation. Despite participants' lack of familiarity with HARVEST, performance and time-to-task completion were maintained in task-based patient review scenarios using either HARVEST alone or the standard clinical information system at our institution. Subjects reported very high satisfaction with HARVEST and interest in using the system in their daily practice. Discussion HARVEST is available for wide deployment at our institution. The evaluation provided informative feedback and directions for future improvements. Conclusions HARVEST was designed to address an unmet need for clinicians at the point of care, facilitating review of essential patient information. The deployment of HARVEST at our institution allows us to study patient record summarization as an informatics intervention in a real-world setting. It also provides an opportunity to learn how clinicians use the summarizer, enabling informed interface and content iteration and optimization to improve patient care. PMID:25352564

  17. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  18. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  19. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  20. The Role of Instructions in Testing Summarizing Ability.

    ERIC Educational Resources Information Center

    Cohen, Andrew D.

    The effects of specific guidelines in the taking and rating of tests of language summarizing ability were investigated, as well as interrater agreement regarding the rating of specific ideas within the summaries. The tests involved respondents reading source texts and providing written summaries as a measure of their reading comprehension and…

  1. Effective Replays and Summarization of Virtual Experiences

    PubMed Central

    Ponto, Kevin; Kohlmann, Joe; Gleicher, Michael

    2012-01-01

    Direct replays of the experience of a user in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods. PMID:22402688

  2. Disease Related Knowledge Summarization Based on Deep Graph Search

    PubMed Central

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease-related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summaries from the biomedical literature. In this approach, Kullback-Leibler divergence combined with a mutual information metric is first used to extract disease-salient information. A deep search based on depth-first search (DFS) is then applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method. PMID:26413521
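
    As a minimal sketch of the first step, the fragment below scores each term by its contribution to the KL divergence between a disease-specific corpus and a background corpus. The smoothing scheme and all names are our own simplifications, not the authors' implementation.

        import math

        def kl_salient_terms(disease_counts, background_counts):
            # Score each term by its contribution to KL(P_disease || P_background).
            total_d = sum(disease_counts.values())
            total_b = sum(background_counts.values())
            vocab = len(disease_counts)
            scores = {}
            for term, count in disease_counts.items():
                p = count / total_d
                # Add-one smoothing so terms absent from the background
                # corpus do not cause division by zero.
                q = (background_counts.get(term, 0) + 1) / (total_b + vocab)
                scores[term] = p * math.log(p / q)
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)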

  3. Summarizing X-ray Stellar Spectra

    NASA Astrophysics Data System (ADS)

    Lee, Hyunsook; Kashyap, V.; XAtlas Collaboration

    2008-05-01

    XAtlas is a spectrum database built from observations with the High Resolution Transmission Grating on the Chandra X-ray Observatory, after painstaking, detailed emission measure analysis to extract quantified information. Here, we explore the possibility of summarizing this spectral information into relatively convenient measurable quantities via dimension reduction methods. Principal component analysis, simple component analysis, projection pursuit, independent component analysis, and parallel coordinates are employed to enhance any patterned structures embedded in the high-dimensional space. We discuss the pros and cons of each dimension reduction method as part of developing clustering algorithms for XAtlas. The biggest challenge in analyzing XAtlas was handling missing values of astrophysical importance. This research was supported by NASA/AISRP grant NNG06GF17G and NASA contract NAS8-39073.
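
    Of the dimension reduction methods listed, principal component analysis is the simplest to sketch. The following fragment is a generic illustration, with column-mean imputation standing in for whatever missing-value treatment the authors ultimately adopted.

        import numpy as np

        def pca_summaries(spectra, n_components=5):
            # spectra: rows are stars, columns are spectral features; may hold NaNs.
            spectra = np.asarray(spectra, dtype=float)
            col_means = np.nanmean(spectra, axis=0)
            spectra = np.where(np.isnan(spectra), col_means, spectra)  # crude imputation
            centered = spectra - spectra.mean(axis=0)
            # Right singular vectors of the centered data are the principal axes.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[:n_components].T  # low-dimensional summaries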

  4. Summarizing cellular responses as biological process networks

    PubMed Central

    2013-01-01

    Background Microarray experiments can simultaneously identify thousands of genes that show significant perturbation in expression between two experimental conditions. Response networks, computed through the integration of gene interaction networks with expression perturbation data, may themselves contain tens of thousands of interactions. Gene set enrichment has become standard for summarizing the results of these analyses in terms of functionally coherent collections of genes such as biological processes. However, even these methods can yield hundreds of enriched functions that may overlap considerably. Results We describe a new technique called Markov chain Monte Carlo Biological Process Networks (MCMC-BPN) capable of reporting a highly non-redundant set of links between processes that describe the molecular interactions perturbed under a specific biological context. Each link in the BPN represents the perturbed interactions that serve as the interface between the two processes connected by the link. We apply MCMC-BPN to publicly available liver-related datasets to demonstrate that the networks formed by the most probable inter-process links reported by MCMC-BPN show high relevance to each biological condition. We demonstrate MCMC-BPN's ability to discern the few key links in a very large solution space by comparing its results with those of two other methods for detecting inter-process links. Conclusions MCMC-BPN succeeds in using few inter-process links to explain as many of the perturbed gene-gene interactions as possible. BPNs thereby summarize the important biological trends within a response network by reporting a digestible number of inter-process links that can be explored in greater detail. PMID:23895181

  5. An unsupervised method for summarizing egocentric sport videos

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are increasingly interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has motion and appearance patterns that differ from those of life-logging videos. While a life-logging video can be described in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, users choose the proposed method as their first video summary choice. In addition, our method is within the top two choices of the users in 99% of the studies.
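
    The record does not include the authors' method, so the fragment below only sketches one common unsupervised key-frame scheme: cluster per-frame feature vectors and keep each cluster's medoid. Unlike the method described above, this sketch fixes the number of key-frames in advance.

        import numpy as np

        def select_keyframes(features, n_keyframes, iters=50, seed=0):
            # features: one appearance/motion feature vector per frame.
            features = np.asarray(features, dtype=float)
            rng = np.random.default_rng(seed)
            centroids = features[rng.choice(len(features), n_keyframes, replace=False)]
            for _ in range(iters):
                dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
                labels = dists.argmin(axis=1)  # nearest-centroid assignment
                for k in range(n_keyframes):
                    if np.any(labels == k):
                        centroids[k] = features[labels == k].mean(axis=0)
            # Report each cluster's medoid frame index as a key-frame.
            return [int(np.argmin(np.linalg.norm(features - c, axis=1)))
                    for c in centroids]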

  6. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF*) has launched many mass digitization projects in order to give access to its collections. The indexing of digital documents on Gallica (the digital library of the BnF) relies on their textual content, obtained from service providers that use Optical Character Recognition (OCR) software. OCR systems have become increasingly complex, composed of several subsystems dedicated to the analysis and recognition of the elements on a page. However, the reliability of these systems remains an issue: in some cases, errors in OCR outputs arise from an accumulation of errors at different levels of the OCR process. One frequent error in OCR outputs is missed text components, and the presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results for the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Pattern (LBP) descriptors coupled with the OCR results, to check their consistency. The experimental results show that our method detects 84.15% of the missed textual components when comparing the OCR ALTO output files (produced by the service providers) to the images of the documents.

  7. Contextual Text Mining

    ERIC Educational Resources Information Center

    Mei, Qiaozhu

    2009-01-01

    With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written, and the…

  9. Medical textbook summarization and guided navigation using statistical sentence extraction.

    PubMed

    Whalen, Gregory

    2005-01-01

    We present a method for automated medical textbook and encyclopedia summarization. Using statistical sentence extraction and semantic relationships, we extract sentences from text returned as part of an existing textbook search (similar to a book index). Our system guides users to the information they desire by summarizing the content of each relevant chapter or section returned in the search. The summary is tailored to contain sentences that specifically address the user's search terms. Our clustering method selects sentences that contain concepts specifically addressing the context of the query term in each of the returned sections. Our method examines conceptual relationships from the UMLS and selects clusters of concepts using Expectation Maximization (EM). Sentences associated with the concept clusters are shown to the user. We evaluated whether our extracted summary provides a suitable answer to the user's question. PMID:16779153

  10. Blind summarization: content-adaptive video summarization using time-series analysis

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Radhakrishnan, Regunathan; Peker, Kadir A.

    2006-01-01

    Severe complexity constraints on consumer electronic devices motivate us to investigate general-purpose video summarization techniques that are able to apply a common hardware setup to multiple content genres. On the other hand, we know that high-quality summaries can only be produced with domain-specific processing. In this paper, we present a time-series analysis based video summarization technique that provides a general core to which we are able to add small content-specific extensions for each genre. The proposed time-series analysis technique consists of unsupervised clustering of samples taken through sliding windows from the time series of features obtained from the content. We classify content into two broad categories: scripted content such as news and drama, and unscripted content such as sports and surveillance. The summarization problem then reduces to either finding semantic boundaries in the scripted content or detecting highlights in the unscripted content. The proposed technique is essentially an event detection technique and is thus best suited to unscripted content; however, we also find applications to scripted content. We thoroughly examine the trade-off between content-neutral and content-specific processing for effective summarization across a number of genres, and find that our core technique enables us to minimize the complexity of the content-specific processing and to postpone it to the final stage. We achieve the best results with unscripted content such as sports and surveillance video, in terms of summary quality and minimal content-specific processing. For other genres such as drama, we find that more content-specific processing is required. We also find that a judicious choice of key audio-visual object detectors enables us to minimize the complexity of the content-specific processing while maintaining its applicability to a broad range of genres. We will present a demonstration of our proposed technique at the conference.
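
    A rough sketch of the sampling step described above: cut the feature time series into overlapping windows, then flag windows that sit far from the dominant pattern as candidate events. The one-cluster outlier test here stands in for the paper's unsupervised clustering and is our own simplification.

        import numpy as np

        def sliding_windows(series, width, hop):
            # Cut a per-frame feature series into overlapping windows.
            starts = range(0, len(series) - width + 1, hop)
            return np.stack([series[s:s + width] for s in starts])

        def candidate_events(windows, k=2.0):
            # Windows far from the dominant pattern are flagged as events.
            center = windows.mean(axis=0)
            dists = np.linalg.norm(windows - center, axis=1)
            return np.where(dists > dists.mean() + k * dists.std())[0]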

  11. Reorganized text.

    PubMed

    2015-05-01

    Reorganized Text: In the Original Investigation titled “Patterns of Hospital Utilization for Head and Neck Cancer Care: Changing Demographics” posted online in the January 29, 2015, issue of JAMA Otolaryngology–Head & Neck Surgery (doi:10.1001/jamaoto.2014.3603), information was copied within sections and text rearranged to accommodate Continuing Medical Education quiz formatting. The information from the topic statements of each paragraph in the Hypothesis Testing subsection of the Methods section was collected in a new first paragraph for that subsection, which reads as follows: “Several hypotheses regarding the causes of regionalization of HNCA care were tested using the NIS data: (1) increasing patient comorbidities over time, causing a shift in care to teaching institutions that would theoretically be better equipped to handle such increased comorbidities; (2) shifting of payer status; (3) increased proportion of prior radiation therapy; and (4) a higher fraction of more complex procedures being referred and performed at teaching institutions.” In addition, the phrase "As summarized in Table 3," was added to the beginning of paragraph 6 of the Discussion section, and the call-out to Table 3 in the middle of that paragraph was deleted. Finally, paragraphs 6 and 7 of the Discussion section were combined. PMID:25996397

  12. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  13. More than a "Basic Skill": Breaking down the Complexities of Summarizing for ABE/ESL Learners

    ERIC Educational Resources Information Center

    Ouellette-Schramm, Jennifer

    2015-01-01

    This article describes the complex cognitive and linguistic challenges of summarizing expository text at vocabulary, syntactic, and rhetorical levels. It then outlines activities to help ABE/ESL learners develop corresponding skills.

  14. Machine Translation from Text

    NASA Astrophysics Data System (ADS)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  15. Simplifying access to a Clinical Data Repository using schema summarization.

    PubMed

    Yu, Cong; Hanauer, David A; Athey, Brian D; Jagadish, Hosagrahar V; States, David J

    2007-01-01

    The University of Michigan Clinical Data Repository (CDR) integrates over 25 data sources, and as a result has a schema that is too complex to be directly queried by clinical researchers. Schema summarization uses abstract elements and links to summarize a complex schema and allows users with limited knowledge of the underlying database structure to effectively issue queries to the CDR for clinical and translational research. PMID:18694259

  16. DeTEXT: A Database for Evaluating Text Extraction from Biomedical Literature Figures

    PubMed Central

    Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong

    2015-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/. PMID:25951377

  17. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936

  18. Text Mining.

    ERIC Educational Resources Information Center

    Trybula, Walter J.

    1999-01-01

    Reviews the state of research in text mining, focusing on newer developments. The intent is to describe the disparate investigations currently included under the term text mining and provide a cohesive structure for these efforts. A summary of research identifies key organizations responsible for pushing the development of text mining. A section…

  19. Text Superstructures.

    ERIC Educational Resources Information Center

    Hoskins, Suzanne Bratcher

    1986-01-01

    Draws from the work of J. Kinneavy to identify text superstructures that are considered organizational patterns within larger structures: literary, expository, persuasive, and expressive writing. (HOD)

  20. Video Analytics for Indexing, Summarization and Searching of Video Archives

    SciTech Connect

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

    This paper will be submitted to the proceedings of the Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and of keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  1. Text Sets.

    ERIC Educational Resources Information Center

    Giorgis, Cyndi; Johnson, Nancy J.

    2002-01-01

    Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)

  2. Automatic transmission

    SciTech Connect

    Miura, M.; Aoki, H.

    1988-02-02

    An automatic transmission is described comprising: an automatic transmission mechanism portion comprising a single planetary gear unit and a dual planetary gear unit; carriers of both of the planetary gear units that are integral with one another; an input means for inputting torque to the automatic transmission mechanism, clutches for operatively connecting predetermined ones of planetary gear elements of both of the planetary gear units to the input means and braking means for restricting the rotation of predetermined ones of planetary gear elements of both of the planetary gear units. The clutches are disposed adjacent one another at an end portion of the transmission for defining a clutch portion of the transmission; a first clutch portion which is attachable to the automatic transmission mechanism portion for comprising the clutch portion when attached thereto; a second clutch portion that is attachable to the automatic transmission mechanism portion in place of the first clutch portion for comprising the clutch portion when so attached. The first clutch portion comprises a first clutch for operatively connecting the input means to a ring gear of the single planetary gear unit and a second clutch for operatively connecting the input means to a single gear of the automatic transmission mechanism portion. The second clutch portion comprises the first clutch, the second clutch, and a third clutch for operatively connecting the input member to a ring gear of the dual planetary gear unit.

  3. A fuzzy ontology and its application to news summarization.

    PubMed

    Lee, Chang-Shing; Jian, Zhi-Wei; Huang, Lin-Kai

    2005-10-01

    In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology, with fuzzy concepts, is an extension of the domain ontology with crisp concepts; it is more suitable than the domain ontology for describing domain knowledge when solving uncertainty reasoning problems. First, the domain ontology, with the various events of the news, is predefined by domain experts. The document preprocessing mechanism generates meaningful terms based on the news corpus and a Chinese news dictionary defined by the domain expert. The meaningful terms are then classified according to the news events by the term classifier. The fuzzy inference mechanism generates the membership degrees for each fuzzy concept of the fuzzy ontology; every fuzzy concept has a set of membership degrees associated with the various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter, to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can operate effectively for news summarization. PMID:16240764

  4. Gaze-enabled Egocentric Video Summarization via Constrained Submodular Maximization

    PubMed Central

    Xu, Jia; Mukherjee, Lopamudra; Li, Yin; Warner, Jamieson; Rehg, James M.; Singh, Vikas

    2016-01-01

    With the proliferation of wearable cameras, the number of videos of users documenting their personal lives using such devices is rapidly increasing. Since such videos may span hours, there is an important need for mechanisms that represent the information content in a compact form (i.e., shorter videos which are more easily browsable/sharable). Motivated by these applications, this paper focuses on the problem of egocentric video summarization. Such videos are usually continuous with significant camera shake and other quality issues. Because of these reasons, there is growing consensus that direct application of standard video summarization tools to such data yields unsatisfactory performance. In this paper, we demonstrate that using gaze tracking information (such as fixation and saccade) significantly helps the summarization task. It allows meaningful comparison of different image frames and enables deriving personalized summaries (gaze provides a sense of the camera wearer's intent). We formulate a summarization model which captures common-sense properties of a good summary, and show that it can be solved as a submodular function maximization with partition matroid constraints, opening the door to a rich body of work from combinatorial optimization. We evaluate our approach on a new gaze-enabled egocentric video dataset (over 15 hours), which will be a valuable standalone resource. PMID:26973428
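
    The submodular-maximization step can be sketched generically. The fragment below implements the standard greedy algorithm under a partition matroid constraint (at most a fixed number of frames per partition, e.g., per video segment); the function names and stopping rule are our own, and the objective is left abstract rather than reproducing the paper's model.

        def greedy_partition_matroid(candidates, part_of, budget, gain):
            # gain(i, selected) -> marginal value of adding frame i to the summary.
            selected, used = [], {}
            remaining = set(candidates)
            while True:
                feasible = [i for i in remaining
                            if used.get(part_of(i), 0) < budget[part_of(i)]]
                if not feasible:
                    return selected
                best = max(feasible, key=lambda i: gain(i, selected))
                if gain(best, selected) <= 0:  # no positive marginal gain remains
                    return selected
                remaining.discard(best)
                selected.append(best)
                used[part_of(best)] = used.get(part_of(best), 0) + 1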

  5. Investigation of Learners' Perceptions for Video Summarization and Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Chen, Sherry Y.

    2012-01-01

    Recently, multimedia-based learning is widespread in educational settings. A number of studies investigate how to develop effective techniques to manage a huge volume of video sources, such as summarization and recommendation. However, few studies examine how these techniques affect learners' perceptions in multimedia learning systems. This…

  6. A Summarization System for Chinese News from Multiple Sources.

    ERIC Educational Resources Information Center

    Chen, Hsin-Hsi; Kuo, June-Jei; Huang, Sheng-Jie; Lin, Chuan-Jie; Wung, Hung-Chia

    2003-01-01

    Proposes a summarization system for multiple documents that employs named entities and other signatures to cluster news from different sources, as well as punctuation marks, linking elements, and topic chains to identify the meaningful units (MUs). Using nouns and verbs to identify similar MUs, focusing and browsing models are applied to represent…

  7. Abstractive Summarization of Drug Dosage Regimens for Supporting Drug Comparison.

    PubMed

    Ugon, Adrien; Berthelot, Hélène; Venot, Alain; Favre, Madeleine; Duclos, Catherine; Lamy, Jean-Baptiste

    2015-01-01

    Complicated dosage regimens often reduce adherence to drug treatments, so ease of administration must be taken into account when prescribing. For a given drug, there often exist several dosage regimens; hence, comparison with similar drugs is difficult. Simplifying and summarizing these regimens appears to be a necessary task for helping general practitioners find the drug with the simplest regimen for the patient. We propose a summarization in two steps: the first prunes out all low-importance information, and the second fuses the remaining information. The pruning rules and fusion strategies were designed by an expert in drug models. Evaluation was conducted on a dataset of 169 drugs; the agreement rate was 27.2%. We demonstrate that applying the rules leads to a result that is correct from a computational point of view but often meaningless to the GP. We conclude with recommendations for further work. PMID:26152958

  8. Improving Web Search and Navigation Using Summarization Process

    NASA Astrophysics Data System (ADS)

    Carbonaro, Antonella

    The paper presents a summarization process that enables a personalized search framework, facilitating user access to and navigation through desired contents. The system expresses key concepts and relationships describing resources in a formal, machine-processable representation. A WordNet-based knowledge representation can be used for content analysis and concept recognition, for reasoning processes, and for enabling user-friendly and intelligent content exploration.

  9. Summarization-Based Image Resizing by Intelligent Object Carving.

    PubMed

    Dong, Weiming; Zhou, Ning; Lee, Tong-Yee; Wu, Fuzhang; Kong, Yan; Zhang, Xiaopeng

    2013-07-22

    Image resizing can be more effectively achieved with a better understanding of image semantics. In this paper, similar patterns that exist in many real-world images are analyzed. By interactively detecting similar objects in an image, the image content can be summarized rather than simply distorted or cropped. This method enables the manipulation of image pixels or patches as well as semantic objects in the scene during the image resizing process. Given the special nature of similar objects in a general image, the integration of a novel object carving operator with the multi-operator framework is proposed for summarizing similar objects. The object removal sequence in the summarization strategy directly affects resizing quality. The method by which to evaluate the visual importance of the object as well as to optimally select the candidates for object carving is demonstrated. To achieve practical resizing applications for general images, a template matching-based method is developed. This method can detect similar objects even when they are of various colors, transformed in terms of perspective, or partially occluded. To validate the proposed method, comparisons with state-of-the-art resizing techniques and a user study were conducted. Convincing visual results are shown to demonstrate the effectiveness of the proposed method. PMID:23898014

  10. Summarization-based image resizing by intelligent object carving.

    PubMed

    Dong, Weiming; Zhou, Ning; Lee, Tong-Yee; Wu, Fuzhang; Kong, Yan; Zhang, Xiaopeng

    2014-01-01

    Image resizing can be more effectively achieved with a better understanding of image semantics. In this paper, similar patterns that exist in many real-world images are analyzed. By interactively detecting similar objects in an image, the image content can be summarized rather than simply distorted or cropped. This method enables the manipulation of image pixels or patches as well as semantic objects in the scene during the image resizing process. Given the special nature of similar objects in a general image, the integration of a novel object carving (OC) operator with the multi-operator framework is proposed for summarizing similar objects. The object removal sequence in the summarization strategy directly affects resizing quality. The method by which to evaluate the visual importance of the object as well as to optimally select the candidates for object carving is demonstrated. To achieve practical resizing applications for general images, a template matching-based method is developed. This method can detect similar objects even when they are of various colors, transformed in terms of perspective, or partially occluded. To validate the proposed method, comparisons with state-of-the-art resizing techniques and a user study were conducted. Convincing visual results are shown to demonstrate the effectiveness of the proposed method. PMID:24201330

  11. Personalized summarization using user preference for m-learning

    NASA Astrophysics Data System (ADS)

    Lee, Sihyoung; Yang, Seungji; Ro, Yong Man; Kim, Hyoung Joong

    2008-02-01

    As the Internet and multimedia technologies advance, digital multimedia content for learning is also becoming abundant. In order to facilitate access to digital knowledge and to meet the need for lifelong learning, e-learning can be a helpful alternative to conventional learning paradigms. E-learning is a unifying term for online, web-based, and technology-delivered learning, and mobile learning (m-learning) is e-learning delivered through mobile devices using wireless transmission. In one survey, more than half of the respondents remarked that re-consumption was one of the most convenient features of e-learning. However, it is not easy to find a user's preferred segment within a full-length, lengthy e-learning content item. Especially in m-learning, a content-summarization method is strongly required because mobile devices are limited in processing power and battery capacity. In this paper, we propose a new user preference model for re-consumption, used to construct personalized summaries. The user preference for re-consumption is modeled statistically from user actions. Based on this model of personalized user actions, our method discriminates preferred parts over the entire content. Experimental results demonstrated successful personalized summarization.

  12. Capturing User Reading Behaviors for Personalized Document Summarization

    SciTech Connect

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations captured during the user's past reading activities. We compare the performance of our algorithm with that of several peer algorithms and software packages. The results of our comparative study show that our algorithm produces superior personalized document summaries, in that its summaries better satisfy a user's personal preferences than those of all the other methods.

  13. Reading to Summarize in English and Chinese: A Tale of Two Languages?

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2008-01-01

    The cognitive demands of summary writing are dependent upon the type of summary to be produced. This paper reports part of a larger study in which 157 Chinese undergraduates were asked to write summaries of extended English texts in both English and Chinese. It examines the differential effects of the use of the two languages on summarization as a…

  14. A Qualitative Study on the Use of Summarizing Strategies in Elementary Education

    ERIC Educational Resources Information Center

    Susar Kirmizi, Fatma; Akkaya, Nevin

    2011-01-01

    The objective of this study is to reveal how well summarizing strategies are used by Grade 4 and Grade 5 students as a reading comprehension strategy. This study was conducted in Buca, Izmir and the document analysis method, a qualitative research strategy, was employed. The study used a text titled "Environmental Pollution" and an "Evaluation…

  15. AUTOMATIC COUNTER

    DOEpatents

    Robinson, H.P.

    1960-06-01

    An automatic counter of alpha particle tracks recorded by a sensitive emulsion of a photographic plate is described. The counter includes a source of modulated dark-field illumination for developing light flashes from the recorded particle tracks as the photographic plate is automatically scanned in narrow strips. Photoelectric means convert the light flashes to proportional current pulses for application to an electronic counting circuit. Photoelectric means are further provided for developing a phase reference signal from the photographic plate in such a manner that signals arising from particle tracks not parallel to the edge of the plate are out of phase with the reference signal. The counting circuit includes provision for rejecting the out-of-phase signals resulting from unoriented tracks as well as signals resulting from spurious marks on the plate such as scratches, dust or grain clumpings, etc. The output of the circuit is hence indicative only of the tracks that would be counted by a human operator.

  16. A Graph Summarization Algorithm Based on RFID Logistics

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Hu, Kongfa; Lu, Zhipeng; Zhao, Li; Chen, Ling

    Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. The volume of data generated by a typical RFID application will be enormous, as each item generates a complete history of all the individual locations that it occupied at every point in time. The movement trails in such RFID data form a gigantic commodity flow graph representing the locations and durations of the path stages traversed by each item. In this paper, we use a graph to construct a warehouse of RFID commodity flows, and introduce a database-style operation to summarize graphs, which produces a summary graph by grouping nodes based on user-selected node attributes and further allows users to control the hierarchy of summaries. It cuts down the size of the graphs and makes it convenient for users to study just the shrunk graph they are interested in. Through extensive experiments, we demonstrate the effectiveness and efficiency of the proposed method.
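
    A minimal sketch of the database-style grouping operation described above: nodes sharing a user-selected attribute collapse into one super-node, and parallel edges merge with counts. The data layout is our own assumption, not the paper's schema.

        from collections import defaultdict

        def summarize_graph(nodes, edges, attr):
            # nodes: {node_id: {attribute_name: value}}; edges: [(u, v), ...].
            group = {n: props[attr] for n, props in nodes.items()}
            merged = defaultdict(int)
            for u, v in edges:
                if group[u] != group[v]:  # drop edges internal to a group
                    merged[(group[u], group[v])] += 1
            return set(group.values()), dict(merged)  # super-nodes, weighted edges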

  17. Improving text recognition by distinguishing scene and overlay text

    NASA Astrophysics Data System (ADS)

    Quehl, Bernhard; Yang, Haojin; Sack, Harald

    2015-02-01

    Video texts are closely related to the content of a video. They provide a valuable source for indexing and interpreting video data. Text detection and recognition tasks in images or videos typically distinguish between overlay and scene text. Overlay text is artificially superimposed on the image at the time of editing, while scene text is text captured by the recording system. Typically, OCR systems are specialized for one kind of text; in video images, however, both types can be found. In this paper, we propose a method to automatically distinguish between overlay and scene text in order to dynamically control and optimize the post-processing steps that follow text detection. Based on a feature combination, a Support Vector Machine (SVM) is trained to classify scene and overlay text. We show how this distinction between overlay and scene text improves the word recognition rate. The accuracy of the proposed methods has been evaluated using publicly available test data sets.
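
    The classification step can be sketched with a generic scikit-learn pipeline. The feature vectors below are random placeholders for the paper's feature combination, so this is only a shape-level illustration, not the authors' feature set.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder region features; the paper combines its own descriptors.
        X_train = np.random.rand(200, 16)
        y_train = np.random.randint(0, 2, 200)  # 0 = scene text, 1 = overlay text

        classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        classifier.fit(X_train, y_train)

        # Route each newly detected region to the matching post-processing chain.
        X_new = np.random.rand(5, 16)
        print(classifier.predict(X_new))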

  18. Summarization and visualization of target trajectories from massive video archives

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Narasimha, Pramod L.; Topiwala, Pankaj

    2009-05-01

    Video, especially in massive video archives, is by nature a dense information medium. Compactly presenting the activities of targets of interest provides an efficient and cost-saving way to analyze the content of the video. In this paper, we propose a video content analysis system to summarize and visualize the trajectories of targets from massive video archives. We first present an adaptive appearance-based algorithm to robustly track the targets in a particle filtering framework. It provides high performance while facilitating implementation in hardware with parallel processing. A phase correlation algorithm is used to estimate the motion of the observation platform, which is then compensated in order to extract the independent trajectories of the targets. Based on the trajectory information, we develop an interface for browsing the videos that enables direct manipulation of the video: the user can scroll over objects to view their trajectories and, if interested, click on an object and drag it along the displayed path, with the actual video played in sync with the mouse movement.
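
    Phase correlation itself is standard and compact enough to sketch. The fragment below estimates the global integer-pixel shift between two frames from the normalized cross-power spectrum; subpixel refinement and the paper's compensation step are omitted, and the function is our own illustration.

        import numpy as np

        def phase_correlation(frame_a, frame_b):
            # Normalized cross-power spectrum keeps only phase information.
            fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
            cross_power = fa * np.conj(fb)
            cross_power /= np.abs(cross_power) + 1e-12
            correlation = np.abs(np.fft.ifft2(cross_power))
            peak = np.unravel_index(np.argmax(correlation), correlation.shape)
            # Peaks beyond the midpoint wrap around to negative shifts.
            return tuple(p if p <= s // 2 else p - s
                         for p, s in zip(peak, frame_a.shape))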

  19. Query-biased Summarization Considering Difference of Paragraphs

    NASA Astrophysics Data System (ADS)

    Otani, Chikara; Hoo, Moon Kyeng; Oda, Yasushi; Furue, Toshihiko; Uchida, Yoshitaka; Yoshie, Osamu

    Most existing query-biased summarization methods generate the summary from sentences extracted on the basis of a similarity measure between the query and all sentences in the documents. If several sentences in the documents have high similarity to the query, however, these methods cannot decide from which sentence the summary should be made. This paper proposes an algorithm that considers the differences between paragraphs, adopting a new indicator that expresses the difference between one paragraph and the others. In a word space composed of all words in the target document, the algorithm determines the axis that maximizes the difference when a paragraph and the remaining paragraphs are projected onto it. There are many combinations of a paragraph and a set of other paragraphs; for each combination, the algorithm calculates the axis that maximizes the difference, which yields a conformity degree with respect to the given query. With these conformity degrees, the algorithm selects one paragraph from which to generate the summary. To obtain the axes, topic distinctiveness factor analysis is applied. The final summary is made by concatenating the sentences extracted from the selected paragraph. The resulting summaries are evaluated in terms of readability, understandability, and the ease of judging whether the link works well or not.

  20. Automatic stabilization

    NASA Technical Reports Server (NTRS)

    Haus, FR

    1936-01-01

    This report concerns the study of automatic stabilizers and extends it to include the control of the three-control system of the airplane instead of just altitude control. Some of the topics discussed include lateral disturbed motion, static stability, the mathematical theory of lateral motion, and large angles of incidence. Various mechanisms and stabilizers are also discussed.

  1. Automatic transmission

    SciTech Connect

    Miki, N.

    1988-10-11

    This patent describes an automatic transmission including a fluid torque converter, a first gear unit having three forward-speed gears and a single reverse gear, a second gear unit having a low-speed gear and a high-speed gear, and a hydraulic control system, the hydraulic control system comprising: a source of pressurized fluid; a first shift valve for controlling the shifting between the first-speed gear and the second-speed gear of the first gear unit; a second shift valve for controlling the shifting between the second-speed gear and the third-speed gear of the first gear unit; a third shift valve equipped with a spool having two positions for controlling the shifting between the low-speed gear and the high-speed gear of the second gear unit; a manual selector valve having a plurality of shift positions for distributing the pressurized fluid supply from the source of pressurized fluid to the first, second and third shift valves respectively; first, second and third solenoid valves corresponding to the first, second and third shift valves, respectively for independently controlling the operation of the respective shift valves, thereby establishing a six forward-speed automatic transmission by combining the low-speed gear and the high-speed gear of the second gear unit with each of the first-speed gear, the second speed gear and the third-speed gear of the first gear unit; and means to fixedly position the spool of the third shift valve at one of the two positions by supplying the pressurized fluid to the third shift valve when the manual selector valve is shifted to a particular shift position, thereby locking the second gear unit in one of low-speed gear and the high-speed gear, whereby the six forward-speed automatic transmission is converted to a three forward-speed automatic transmission when the manual selector valve is shifted to the particular shift position.

  2. Automatic transmission

    SciTech Connect

    Ohkubo, M.

    1988-02-16

    An automatic transmission is described combining a stator reversing type torque converter and speed changer having first and second sun gears comprising: (a) a planetary gear train composed of first and second planetary gears sharing one planetary carrier in common; (b) a clutch and requisite brakes to control the planetary gear train; and (c) a speed-increasing or speed-decreasing mechanism is installed both in between a turbine shaft coupled to a turbine of the stator reversing type torque converter and the first sun gear of the speed changer, and in between a stator shaft coupled to a reversing stator and the second sun gear of the speed changer.

  3. Invite, listen, and summarize: a patient-centered communication technique.

    PubMed

    Boyle, Dennis; Dwinnell, Brian; Platt, Frederic

    2005-01-01

    The need for physicians to have patient-centered communication skills is reflected in the educational objectives of numerous medical schools' curricula and in the competencies required by groups such as the Accreditation Council for Graduate Medical Education. An innovative method for teaching communications skills has been developed at the University of Colorado School of Medicine as part of its three-year, longitudinal course focusing on basic clinical skills required of all physicians. The method emphasizes techniques of open-ended inquiry, empathy, and engagement to gather data. Students refer to the method as ILS, or Invite, Listen, and Summarize. ILS was developed to combat the high-physician-control interview techniques, characterized by a series of "yes" or "no" questions. The authors began teaching the ILS approach in 2001 as one basic exercise and have since developed a two-year longitudinal communications curriculum. ILS is easy to use and remember, and it emphasizes techniques that have been shown in other studies to achieve the three basic functions of the medical interview: creating rapport, collecting good data, and improving compliance. The skills are taught using standardized patients in a series of four small-group exercises. Videotaped standardized patient encounters are used to evaluate the students. Tutors come from a variety of disciplines and receive standardized training. The curriculum has been well received. Despite the fact that the formal curriculum only occurs in the first two years, there is some evidence that it is improving students' interviewing skills at the end of their third year. PMID:15618088

  4. Automatic transmission

    SciTech Connect

    Aoki, H.

    1989-03-21

    An automatic transmission is described, comprising: a torque converter including an impeller having a connected member, a turbine having an input member and a reactor; and an automatic transmission mechanism having first to third clutches and plural gear units including a single planetary gear unit with a ring gear and a dual planetary gear unit with a ring gear. The single and dual planetary gear units have respective carriers integrally coupled with each other and respective sun gears integrally coupled with each other, the input member of the turbine being coupled with the ring gear of the single planetary gear unit through the first clutch, and being coupled with the sun gear through the second clutch. The connected member of the impeller is coupled with the ring gear of the dual planetary gear unit, the ring gear of the dual planetary gear unit is made to be restrained as required, and the carrier is coupled with an output member.

  5. Automatic transmission

    SciTech Connect

    Hamane, M.; Ohri, H.

    1989-03-21

    This patent describes an automatic transmission connected between a drive shaft and a driven shaft and comprising: a planetary gear mechanism including a first gear driven by the drive shaft, a second gear operatively engaged with the first gear to transmit speed change output to the driven shaft, and a third gear operatively engaged with the second gear to control the operation thereof; centrifugally operated clutch means for driving the first gear and the second gear. It also includes a ratchet type one-way clutch for permitting rotation of the third gear in the same direction as that of the drive shaft but preventing rotation in the reverse direction; the clutch means comprising a ratchet pawl supporting plate coaxially disposed relative to the drive shaft and integrally connected to the third gear, the ratchet pawl supporting plate including outwardly projecting radial projections united with one another at base portions thereof.

  6. Recent progress in automatically extracting information from the pharmacogenomic literature

    PubMed Central

    Garten, Yael; Coulet, Adrien; Altman, Russ B

    2011-01-01

    The biomedical literature holds our understanding of pharmacogenomics, but it is dispersed across many journals. In order to integrate our knowledge, connect important facts across publications and generate new hypotheses we must organize and encode the contents of the literature. By creating databases of structured pharmacogenomic knowledge, we can make the value of the literature much greater than the sum of the individual reports. We can, for example, generate candidate gene lists or interpret surprising hits in genome-wide association studies. Text mining automatically adds structure to the unstructured knowledge embedded in millions of publications, and recent years have seen a surge in work on biomedical text mining, some specific to pharmacogenomics literature. These methods enable extraction of specific types of information and can also provide answers to general, systemic queries. In this article, we describe the main tasks of text mining in the context of pharmacogenomics, summarize recent applications and anticipate the next phase of text mining applications. PMID:21047206

  7. Effects of Teacher-Directed and Student-Interactive Summarization Instruction on Reading Comprehension and Written Summarization of Korean Fourth Graders

    ERIC Educational Resources Information Center

    Jeong, Jongseong

    2009-01-01

    The purpose of this study was to investigate how Korean fourth graders' performance on reading comprehension and written summarization changes as a function of instruction in summarization across test times. Seventy-five Korean fourth graders from three classes were randomly assigned to the collaborative summarization, direct instruction, and…

  8. Automatic transmission

    SciTech Connect

    Miura, M.; Inuzuka, T.

    1986-08-26

    1. An automatic transmission with four forward speeds and one reverse position is described which consists of: an input shaft; an output member; first and second planetary gear sets each having a sun gear, a ring gear and a carrier supporting a pinion in mesh with the sun gear and ring gear; the carrier of the first gear set, the ring gear of the second gear set and the output member all being connected; the ring gear of the first gear set connected to the carrier of the second gear set; a first clutch means for selectively connecting the input shaft to the sun gear of the first gear set, including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a second clutch means for selectively connecting the input shaft to the sun gear of the second gear set; a third clutch means for selectively connecting the input shaft to the carrier of the second gear set including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a first drive-establishing means for selectively preventing rotation of the ring gear of the first gear set and the carrier of the second gear set in only one direction and, alternatively, in any direction; a second drive-establishing means for selectively preventing rotation of the sun gear of the second gear set; and a drum being open to the first planetary gear set, with a cylindrical intermediate wall, an inner peripheral wall and outer peripheral wall and forming the hydraulic servos of the first and third clutch means between the intermediate wall and the inner peripheral wall and between the intermediate wall and the outer peripheral wall respectively.

  9. Automatic Informative Abstracting and Extracting. Annual Report.

    ERIC Educational Resources Information Center

    Earl, L.L.; Robison, H.R.

    This fourth annual report summarizes the investigation of (1) a "sentence dictionary" and (2) a "word government dictionary" for use in automatic abstracting and extracting systems. The theory behind the sentence dictionary and its compilation is that a separation of significant from nonsignificant sentences can be accomplished on the basis of…

  10. Text Mining for Neuroscience

    NASA Astrophysics Data System (ADS)

    Tirupattur, Naveen; Lapish, Christopher C.; Mukhopadhyay, Snehasis

    2011-06-01

    Text mining, sometimes alternately referred to as text analytics, refers to the process of extracting high-quality knowledge from the analysis of textual data. Text mining has a wide variety of applications in areas such as biomedical science, news analysis, and homeland security. In this paper, we describe an approach and some relatively small-scale experiments which apply text mining to neuroscience research literature to find novel associations among a diverse set of entities. Neuroscience is a discipline which encompasses an exceptionally wide range of experimental approaches and rapidly growing interest. This combination results in an overwhelmingly large and often diffuse literature which makes a comprehensive synthesis difficult. Understanding the relations or associations among the entities appearing in the literature not only improves researchers' current understanding of recent advances in their field, but also provides an important computational tool to formulate novel hypotheses and thereby assist in scientific discoveries. We describe a methodology to automatically mine the literature and form novel associations through direct analysis of published texts. The method first retrieves a set of documents from databases such as PubMed using a set of relevant domain terms. In the current study these terms yielded a set of documents ranging from 160,909 to 367,214 documents. Each document is then represented in a numerical vector form from which an Association Graph is computed which represents relationships between all pairs of domain terms, based on co-occurrence. Association graphs can then be subjected to various graph theoretic algorithms such as transitive closure and cycle (circuit) detection to derive additional information, and can also be visually presented to a human researcher for understanding. In this paper, we present three relatively small-scale problem-specific case studies to demonstrate that such an approach is very successful in replicating a neuroscience expert's mental model of object-object associations entirely by means of text mining. These preliminary results provide the confidence that this type of text mining based research approach provides an extremely powerful tool to better understand the literature and drive novel discovery for the neuroscience community.
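
    The association-graph step lends itself to a compact illustration. The sketch below is an assumption-level stand-in, not the authors' code; the sample documents and term list are invented. It links any two domain terms that co-occur in a document and weights edges by co-occurrence counts:

```python
# Minimal co-occurrence association graph (illustrative stand-in).
from itertools import combinations
from collections import Counter

documents = [
    "dopamine receptor binding in prefrontal cortex",
    "prefrontal cortex activity and working memory",
    "dopamine modulates working memory performance",
]
terms = {"dopamine", "prefrontal", "memory"}   # hypothetical domain terms

edges = Counter()
for doc in documents:
    present = sorted(terms & set(doc.split()))
    for a, b in combinations(present, 2):
        edges[(a, b)] += 1          # edge weight = co-occurrence count

print(dict(edges))
# {('dopamine', 'prefrontal'): 1, ('memory', 'prefrontal'): 1,
#  ('dopamine', 'memory'): 1}
```

    Graph-theoretic post-processing such as transitive closure would then operate on this weighted edge set.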

  11. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights some evaluation methods to assess the quality characteristics of the text entities. The main concepts used in building and evaluation processes of the text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The evaluation process for automatic evaluation of the text entities is…

  12. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology

    ERIC Educational Resources Information Center

    Dansereau, Jules

    1978-01-01

    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  13. ETAT: Expository Text Analysis Tool.

    PubMed

    Vidal-Abarca, Eduardo; Reyes, Héctor; Gilabert, Ramiro; Calpe, Javier; Soria, Emilio; Graesser, Arthur C

    2002-02-01

    Qualitative methods that analyze the coherence of expository texts not only are time consuming, but also present challenges in collecting data on coding reliability. We describe software that analyzes expository texts more rapidly and produces a notable level of objectivity. ETAT (Expository Text Analysis Tool) analyzes the coherence of expository texts. ETAT adopts a symbolic representational system, known as conceptual graph structures. ETAT follows three steps: segmentation of a text into nodes, classification of the unidentified nodes, and linking the nodes with relational arcs. ETAT automatically constructs a graph in the form of nodes and their interrelationships, along with various attendant statistics and information about noninterrelated, isolated nodes. ETAT was developed in Java, so it is compatible with virtually all computer systems. PMID:12060996

  14. Automatic transmission adapter kit

    SciTech Connect

    Stich, R.L.; Neal, W.D.

    1987-02-10

    This patent describes, in a four-wheel-drive vehicle apparatus having a power train including an automatic transmission and a transfer case, an automatic transmission adapter kit for installation of a replacement automatic transmission of shorter length than an original automatic transmission in the four-wheel-drive vehicle. The adapter kit comprises: an extension housing interposed between the replacement automatic transmission and the transfer case; an output shaft, having a first end which engages the replacement automatic transmission and a second end which engages the transfer case; first sealing means for sealing between the extension housing and the replacement automatic transmission; second sealing means for sealing between the extension housing and the transfer case; and fastening means for connecting the extension housing between the replacement automatic transmission and the transfer case.

  15. Automatic fluid dispenser

    NASA Technical Reports Server (NTRS)

    Sakellaris, P. C. (Inventor)

    1977-01-01

    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  16. An anatomy of automatism.

    PubMed

    Mackay, R D

    2015-07-01

    The automatism defence has been described as a quagmire of law and as presenting an intractable problem. Why is this so? This paper will analyse and explore the current legal position on automatism. In so doing, it will identify the problems which the case law has created, including the distinction between sane and insane automatism and the status of the 'external factor doctrine', and comment briefly on recent reform proposals. PMID:26378105

  17. Automatic crack propagation tracking

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Weidner, T. J.; Yehia, N. A. B.; Burd, G. S.

    1985-01-01

    A finite element based approach to fully automatic crack propagation tracking is presented. The procedure presented combines fully automatic mesh generation with linear fracture mechanics techniques in a geometrically based finite element code capable of automatically tracking cracks in two-dimensional domains. The automatic mesh generator employs the modified-quadtree technique. Crack propagation increment and direction are predicted using a modified maximum dilatational strain energy density criterion employing the numerical results obtained by meshes of quadratic displacement and singular crack tip finite elements. Example problems are included to demonstrate the procedure.

  18. Automatic differentiation bibliography

    SciTech Connect

    Corliss, G.F.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
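
    As a concrete illustration of the chain-rule propagation that automatic differentiation performs, here is a minimal forward-mode sketch using dual numbers. It is a generic textbook construction, not any particular package from the bibliography:

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, and arithmetic propagates both via the chain rule.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1     # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                   # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.der)                  # 57.0 26.0
```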

  19. Techniques for automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Moore, R. K.

    1983-05-01

    A brief insight into some of the algorithms that lie behind current automatic speech recognition systems is provided. Early phonetically based approaches were not particularly successful, due mainly to a lack of appreciation of the problems involved. These problems are summarized, and various recognition techniques are reviewed in the context of the solutions that they provide. It is pointed out that the majority of currently available speech recognition equipment employs a "whole-word" pattern matching approach which, although relatively simple, has proved particularly successful in its ability to recognize speech. The concept of time-normalization plays a central role in this type of recognition process, and a family of such algorithms is described in detail. The technique of dynamic time warping is not only capable of providing good performance for isolated word recognition, but it has also been extended to the recognition of connected speech (thereby removing one of the most severe limitations of early speech recognition equipment).
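
    The time-normalization idea can be made concrete with a minimal dynamic time warping sketch (a standard formulation with an absolute-difference local cost, not the specific algorithms reviewed in the paper):

```python
# Dynamic time warping: align two sequences of different lengths by
# minimizing cumulative local cost over stretch/compress alignments.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local distance
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# The same "word" spoken at different rates: DTW distance stays small.
template = [1, 2, 3, 4, 3, 2, 1]
utterance = [1, 1, 2, 3, 3, 4, 4, 3, 2, 1]
print(dtw(template, utterance))   # 0.0 -- perfectly alignable here
```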

  20. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
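
    AutoClass itself performs a Bayesian search over classifications; the sketch below illustrates only the generic idea of automatically choosing the number of classes, using an information criterion over Gaussian mixtures as an assumed stand-in, not AutoClass's actual method:

```python
# Choose the number of classes automatically by scoring mixture models
# with an information criterion (BIC) -- an analogy to, not a copy of,
# AutoClass's Bayesian search.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),    # synthetic class 1
               rng.normal(5, 1, (100, 2))])   # synthetic class 2

best_k, best_bic = None, np.inf
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gm.bic(X)
    if bic < best_bic:
        best_k, best_bic = k, bic

print("chosen number of classes:", best_k)   # expected: 2 for this data
```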

  1. Application of nonlinear transformations to automatic flight control

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Su, R.; Hunt, L. R.

    1984-01-01

    The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.

  2. PERSIVAL, a System for Personalized Search and Summarization over Multimedia Healthcare Information.

    ERIC Educational Resources Information Center

    McKeown, Kathleen R.; Chang, Shih-Fu; Cimino, James; Feiner, Steven K.; Friedman, Carol; Gravano, Luis; Hatzivassiloglou, Vasileios; Johnson, Steven; Jordan, Desmond A.; Klavans, Judith L.; Kushniruk, Andre; Patel, Vimla; Teufel, Simone

    This paper reports on the ongoing development of PERSIVAL (Personalized Retrieval and Summarization of Image, Video, and Language), a system designed to provide personalized access to a distributed digital library of medical literature and consumer health information. The goal for PERSIVAL is to tailor search, presentation, and summarization of…

  3. Text documents as social networks

    NASA Astrophysics Data System (ADS)

    Balinsky, Helen; Balinsky, Alexander; Simske, Steven J.

    2012-03-01

    The extraction of keywords and features is a fundamental problem in text data mining. Document processing applications directly depend on the quality and speed of the identification of salient terms and phrases. Applications as disparate as automatic document classification, information visualization, filtering and security policy enforcement all rely on the quality of automatically extracted keywords. Recently, a novel approach to rapid change detection in data streams and documents has been developed. It is based on ideas from image processing and in particular on the Helmholtz Principle from the Gestalt Theory of human perception. By modeling a document as a one-parameter family of graphs with its sentences or paragraphs defining the vertex set and with edges defined by Helmholtz's principle, we demonstrated that for some range of the parameters, the resulting graph becomes a small-world network. In this article we investigate the natural orientation of edges in such small-world networks. For two connected sentences, we can say which one is the first and which one is the second, according to their position in a document. This will make such a graph look like a small WWW-type network, and PageRank-type algorithms will produce interesting rankings of nodes in such a document.
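
    A hedged sketch of the construction: sentences as vertices, edges oriented from the earlier sentence to the later one, and PageRank over the result. Word overlap is used here as a simple stand-in for the Helmholtz-principle edge criterion, and the sample sentences are invented:

```python
# Sentence graph with document-order edge orientation, ranked by PageRank.
import networkx as nx

sentences = [
    "graph models capture document structure",
    "sentences sharing words form graph edges",
    "pagerank scores highlight central sentences",
    "central sentences summarize the document structure",
]

G = nx.DiGraph()
G.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        # orient the edge from the earlier sentence to the later one
        if set(sentences[i].split()) & set(sentences[j].split()):
            G.add_edge(i, j)

scores = nx.pagerank(G)
for idx in sorted(scores, key=scores.get, reverse=True):
    print(f"{scores[idx]:.3f}  {sentences[idx]}")
```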

  4. Automatic Differentiation Package

    Energy Science and Technology Software Center (ESTSC)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization, and uncertainty quantification.

  5. Mediation and Automatization.

    ERIC Educational Resources Information Center

    Hutchins, Edwin

    This paper discusses the relationship between the mediation of task performance by some structure that is not inherent in the task domain itself and the phenomenon of automatization, in which skilled performance becomes effortless or phenomenologically "automatic" after extensive practice. The use of a common simple explicit mediating device, a…

  6. Structuring Lecture Videos by Automatic Projection Screen Localization and Analysis.

    PubMed

    Li, Kai; Wang, Jue; Wang, Haoqian; Dai, Qionghai

    2015-06-01

    We present a fully automatic system for extracting the semantic structure of a typical academic presentation video, which captures the whole presentation stage with abundant camera motions such as panning, tilting, and zooming. Our system automatically detects and tracks both the projection screen and the presenter whenever they are visible in the video. By analyzing the image content of the tracked screen region, our system is able to detect slide progressions and extract a high-quality, non-occluded, geometrically-compensated image for each slide, resulting in a list of representative images that reconstruct the main presentation structure. Afterwards, our system recognizes text content and extracts keywords from the slides, which can be used for keyword-based video retrieval and browsing. Experimental results show that our system is able to generate more stable and accurate screen localization results than commonly-used object tracking methods. Our system also extracts more accurate presentation structures than general video summarization methods, for this specific type of video. PMID:26357345

  7. Writing Home/Decolonizing Text(s)

    ERIC Educational Resources Information Center

    Asher, Nina

    2009-01-01

    The article draws on postcolonial and feminist theories, combined with critical reflection and autobiography, and argues for generating decolonizing texts as one way to write and reclaim home in a postcolonial world. Colonizers leave home to seek power and control elsewhere, and the colonized suffer loss of home as they know it. This dislocation…

  9. Text File Display Program

    NASA Technical Reports Server (NTRS)

    Vavrus, J. L.

    1986-01-01

    LOOK program permits user to examine text file in pseudorandom access manner. Program provides user with way of rapidly examining contents of ASCII text file. LOOK opens text file for input only and accesses it in blockwise fashion. Handles text formatting and displays text lines on screen. User moves forward or backward in file by any number of lines or blocks. Provides ability to "scroll" text at various speeds in forward or backward directions.

  10. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  11. Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.

    2003-01-01

    Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…

  13. Automatic Payroll Deposit System.

    ERIC Educational Resources Information Center

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  14. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  15. Clinicians’ Evaluation of Computer-Assisted Medication Summarization of Electronic Medical Records

    PubMed Central

    Zhu, Xinxin; Cimino, James J.

    2014-01-01

    Each year thousands of patients die of avoidable medication errors. When a patient is admitted to, transferred within, or discharged from a clinical facility, clinicians should review previous medication orders, current orders and future plans for care, and reconcile differences if there are any. If medication reconciliation is not accurate and systematic, medication errors such as omissions, duplications, dosing errors, or drug interactions may occur and cause harm. Computer-assisted medication applications showed promise as an intervention to reduce medication summarization inaccuracies and thus avoidable medication errors. In this study, a computer-assisted medication summarization application, designed to abstract and represent multi-source time-oriented medication data, was introduced to assist clinicians with their medication reconciliation processes. An evaluation study was carried out to assess clinical usefulness and analyze the potential impact of such an application. Both quantitative and qualitative methods were applied to measure clinicians' performance efficiency and inaccuracy in the medication summarization process with and without the intervention of the computer-assisted medication application. Clinicians' feedback indicated the feasibility of integrating such a medication summarization tool into clinical practice workflow as a complementary addition to existing electronic health record systems. The result of the study showed potential to improve efficiency and reduce inaccuracy in clinician performance of medication summarization, which could in turn improve care efficiency, quality of care, and patient safety. PMID:24393492

  16. Linguistic Summarization of Video for Fall Detection Using Voxel Person and Fuzzy Logic

    PubMed Central

    Anderson, Derek; Luke, Robert H.; Keller, James M.; Skubic, Marjorie; Rantz, Marilyn; Aud, Myra

    2009-01-01

    In this paper, we present a method for recognizing human activity from linguistic summarizations of temporal fuzzy inference curves representing the states of a three-dimensional object called voxel person. A hierarchy of fuzzy logic is used, where the output from each level is summarized and fed into the next level. We present a two level model for fall detection. The first level infers the states of the person at each image. The second level operates on linguistic summarizations of voxel person’s states and inference regarding activity is performed. The rules used for fall detection were designed under the supervision of nurses to ensure that they reflect the manner in which elders perform these activities. The proposed framework is extremely flexible. Rules can be modified, added, or removed, allowing for per-resident customization based on knowledge about their cognitive and physical ability. PMID:20046216
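
    A greatly simplified, assumption-level sketch of the two-level idea: a trapezoidal membership function infers a per-frame "on the ground" degree from an assumed voxel-person height feature, and a second level summarizes recent degrees into a fall confidence. The thresholds and data below are invented, not taken from the paper:

```python
# Toy two-level fuzzy inference: per-frame state membership, then a
# temporal summary. Greatly simplified relative to the paper's hierarchy.
def trapezoid(x, a, b, c, d):
    # standard trapezoidal membership function on [a, b, c, d]
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Level 1: membership of "on the ground" given centroid height in meters
heights = [1.6, 1.5, 0.9, 0.4, 0.3, 0.3]        # hypothetical frame data
on_ground = [trapezoid(h, 0.0, 0.0, 0.5, 0.9) for h in heights]

# Level 2: summarize the recent window into a single activity degree
recent = on_ground[-3:]
fall_degree = sum(recent) / len(recent)
print(f"fall confidence: {fall_degree:.2f}")     # 1.00 for the last frames
```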

  17. Questioning the Text.

    ERIC Educational Resources Information Center

    Harvey, Stephanie

    2001-01-01

    One way teachers can improve students' reading comprehension is to teach them to think while reading, questioning the text and carrying on an inner conversation. This involves: choosing the text for questioning; introducing the strategy to the class; modeling thinking aloud and marking the text with stick-on notes; and allowing time for guided…

  18. Creating Vocative Texts

    ERIC Educational Resources Information Center

    Nicol, Jennifer J.

    2008-01-01

    Vocative texts are expressive poetic texts that strive to show rather than tell, that communicate felt knowledge, and that appeal to the senses. They are increasingly used by researchers to present qualitative findings, but little has been written about how to create such texts. To this end, excerpts from an inquiry into the experience and meaning…

  19. Text Coherence in Translation

    ERIC Educational Resources Information Center

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can…

  20. Multi-document Summarization of Dissertation Abstracts Using a Variable-Based Framework.

    ERIC Educational Resources Information Center

    Ou, Shiyan; Khoo, Christopher S. G.; Goh, Dion H.

    2003-01-01

    Proposes a variable-based framework for multi-document summarization of dissertation abstracts in the fields of sociology and psychology that makes use of the macro- and micro-level discourse structure of dissertation abstracts as well as cross-document structure. Provides a list of indicator phrases that denote different aspects of the problem…

  1. Utilizing Marzano's Summarizing and Note Taking Strategies on Seventh Grade Students' Mathematics Performance

    ERIC Educational Resources Information Center

    Jeanmarie-Gardner, Charmaine

    2013-01-01

    A quasi-experimental research study was conducted that investigated the academic impact of utilizing Marzano's summarizing and note taking strategies on mathematic achievement. A sample of seventh graders from a middle school located on Long Island's North Shore was tested to determine whether significant differences existed in mathematic test…

  2. Using Expected Growth Size Estimates To Summarize Test Score Changes. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Russell, Michael

    An earlier Digest described the shortcomings of three methods commonly used to summarize changes in test scores. This Digest describes two less commonly used approaches for examining changes in test scores, those of Standardized Growth Estimates and Effect Sizes. Aspects of these two approaches are combined and applied to the Iowa Test of Basic…

  3. Indian Education in America. Summarizing a Collection of Essays by Vine Deloria, Jr.

    ERIC Educational Resources Information Center

    Simonelli, Richard

    1991-01-01

    Summarizes 11 themes of Deloria's "Indian Education in America," including Native versus Western worldview; history of Indian education; Indian versus professional identity; community as key to survival; destructive aspects of American education; necessity of tribal context for education and knowledge; and reconciliation of science and tribal…

  4. Legal Provisions on Expanded Functions for Dental Hygienists and Assistants. Summarized by State. Second Edition.

    ERIC Educational Resources Information Center

    Johnson, Donald W.; Holz, Frank M.

    This second edition summarizes and interprets, from the pertinent documents of each state, those provisions which establish and regulate the tasks of hygienists and assistants, with special attention given to expanded functions. Information is updated for all jurisdictions through the end of 1973, based chiefly on materials received in response to…

  6. Empirical Analysis of Exploiting Review Helpfulness for Extractive Summarization of Online Reviews

    ERIC Educational Resources Information Center

    Xiong, Wenting; Litman, Diane

    2014-01-01

    We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…

  7. iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings

    PubMed Central

    2015-01-01

    As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957

  8. Summarizing Monte Carlo Results in Methodological Research: The Single-Factor, Fixed-Effects ANCOVA Case.

    ERIC Educational Resources Information Center

    Harwell, Michael

    2003-01-01

    Used meta-analytic methods to summarize results of Monte Carlo studies of test size and power of the F test in the single-factor, fixed-effects analysis of covariance model, updating and extending narrative reviews of this literature. (SLD)

  10. Synthesis of salinosporamide A and its analogs as 20S proteasome inhibitors and SAR summarization.

    PubMed

    Ma, Yuheng; Qu, Lili; Liu, Zhenming; Zhang, Liangren; Yang, Zhenjun; Zhang, Lihe

    2011-12-01

    Salinosporamide A (4), a secondary metabolite of the marine actinomycete Salinispora tropica, is a potent inhibitor of 20S proteasome that is currently in clinical trials for the treatment of cancers. Herein, we described various synthetic strategies of 4 and summarized the SAR of 4 and its analogs. PMID:21824108

  11. Effects on Science Summarization of a Reading Comprehension Intervention for Adolescents with Behavior and Attention Disorders

    ERIC Educational Resources Information Center

    Rogevich, Mary E.; Perin, Dolores

    2008-01-01

    Sixty-three adolescent boys with behavioral disorders (BD), 31 of whom had comorbid attention deficit hyperactivity disorder (ADHD), participated in a self-regulated strategy development intervention called Think Before Reading, Think While Reading, Think After Reading, With Written Summarization (TWA-WS). TWA-WS adapted Linda Mason's TWA…

  12. iBIOMES Lite: summarizing biomolecular simulation data in limited settings.

    PubMed

    Thibault, Julien C; Cheatham, Thomas E; Facelli, Julio C

    2014-06-23

    As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes . PMID:24830957

  13. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    File Comparator program IFCOMP, is text file comparator for IBM OS/VScompatable systems. IFCOMP accepts as input two text files and produces listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.

  14. Making Sense of Texts

    ERIC Educational Resources Information Center

    Harper, Rebecca G.

    2014-01-01

    This article addresses the triadic nature regarding meaning construction of texts. Grounded in Rosenblatt's (1995; 1998; 2004) Transactional Theory, research conducted in an undergraduate Language Arts curriculum course revealed that when presented with unfamiliar texts, students used prior experiences, social interactions, and literary…

  15. Composing Texts, Composing Lives.

    ERIC Educational Resources Information Center

    Perl, Sondra

    1994-01-01

    Using composition, reader response, critical, and feminist theories, a teacher demonstrates how adult students respond critically to literary texts and how teachers must critically analyze the texts of their teaching practice. Both students and teachers can use writing to bring their experiences to interpretation. (SK)

  16. Solar Energy Project: Text.

    ERIC Educational Resources Information Center

    Tullock, Bruce, Ed.; And Others

    The text is a compilation of background information which should be useful to teachers wishing to obtain some technical information on solar technology. Twenty sections are included which deal with topics ranging from discussion of the sun's composition to the legal implications of using solar energy. The text is intended to provide useful…

  17. The Perfect Text.

    ERIC Educational Resources Information Center

    Russo, Ruth

    1998-01-01

    A chemistry teacher describes the elements of the ideal chemistry textbook. The perfect text is focused and helps students draw a coherent whole out of the myriad fragments of information and interpretation. The text would show chemistry as the central science necessary for understanding other sciences and would also root chemistry firmly in the…

  18. YORUBA, INTERMEDIATE TEXTS.

    ERIC Educational Resources Information Center

    MCCLURE, H. DAVID; OYEWALE, JOHN O.

    THIS COURSE IS BASED ON A SERIES OF BRIEF MONOLOGUES RECORDED BY A WESTERN-EDUCATED NATIVE SPEAKER OF YORUBA FROM THE OYO AREA. THE TAPES CONSTITUTE THE CENTRAL PART OF THE COURSE, WITH THE TEXT INTENDED AS SUPPLEMENTARY AND AUXILIARY MATERIAL. THE TEXT TOPICS WERE CHOSEN FOR THEIR SPECIAL RELEVANCE TO PEACE CORPS VOLUNTEERS WHO EXPECT TO USE…

  19. Automatic recording spectroradiometer system.

    PubMed

    Heaps, W L

    1971-09-01

    A versatile, mobile, automatic recording spectroradiometer of high precision and accuracy has been developed. The instrument is a single-beam device with an alternate reference beam intended primarily for measurements of spectral irradiance. However, it is equally useful for measurement of spectral radiance, transmittance, or reflectance. The system is programmed for automatic operation. The output is in the form of an automatic digital recording of both measurements and control data. Instrument operation integrates the following characteristics: wavelength-by-wavelength operation in intervals of 0.1 nm to 50 nm; time-integrated measurements of spectral flux; internal calibration reference source; and monitored signals for wavelength position, test source total output, and photodetector dark current. The system's operating characteristics and specifications have been determined and are set forth here. Performance for three types of sources and correction of measurements to zero-bandpass equivalence is demonstrated. PMID:20111268

  20. AUTOMATIC COUNTING APPARATUS

    DOEpatents

    Howell, W.D.

    1957-08-20

    An apparatus for automatically recording the results of counting operations on trains of electrical pulses is described. The disadvantages of prior devices utilizing the two common methods of obtaining the count rate are overcome by this apparatus; in the case of time controlled operation, the disclosed system automatically records any information stored by the scaler but not transferred to the printer at the end of the predetermined time controlled operations and, in the case of count controlled operation, provision is made to prevent a weak sample from occupying the apparatus for an excessively long period of time.

  1. Summarizing scale-free networks based on virtual and real links

    NASA Astrophysics Data System (ADS)

    Bei, Yijun; Lin, Zhen; Chen, Deren

    2016-02-01

    Techniques to summarize and cluster graphs are indispensable to understand the internal characteristics of large complex networks. However, existing methods that analyze graphs mainly focus on aggregating strong-interaction vertices into the same group without considering the node properties, particularly multi-valued attributes. This study aims to develop a unified framework based on the concept of a virtual graph by integrating attributes and structural similarities. We propose a summarizing graph based on virtual and real links (SGVR) approach to aggregate similar nodes in a scale-free graph into k non-overlapping groups based on user-selected attributes considering both virtual links (attributes) and real links (graph structures). An effective data structure called HB-Graph is adopted to adjust the subgroups and optimize the grouping results. Extensive experiments are carried out on actual and synthetic datasets. Results indicate that our proposed method is both effective and efficient.
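
    A rough sketch of the central scoring idea (an assumed formulation, not the SGVR algorithm itself): blend attribute similarity over virtual links with neighborhood similarity over real links under a weight alpha. The Jaccard measures and the example graph are invented:

```python
# Blending virtual-link (attribute) and real-link (structural) similarity.
def attribute_sim(attrs_u, attrs_v):
    # Jaccard similarity over multi-valued attribute sets
    return len(attrs_u & attrs_v) / len(attrs_u | attrs_v)

def structural_sim(neigh_u, neigh_v):
    # Jaccard similarity over neighbor sets
    union = neigh_u | neigh_v
    return len(neigh_u & neigh_v) / len(union) if union else 0.0

def combined_sim(u, v, attrs, neigh, alpha=0.5):
    # alpha weights virtual (attribute) links against real (structural) links
    return (alpha * attribute_sim(attrs[u], attrs[v])
            + (1 - alpha) * structural_sim(neigh[u], neigh[v]))

attrs = {"a": {"db", "ml"}, "b": {"db", "ir"}, "c": {"vision"}}
neigh = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(combined_sim("a", "b", attrs, neigh))  # 0.5*(1/3) + 0.5*(1/3) ≈ 0.333
```

    Pairs scoring above a threshold would then be candidates for the same group, subject to the k-group constraint.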

  2. Noisy text categorization.

    PubMed

    Vinciarelli, Alessandro

    2005-12-01

    This work presents categorization experiments performed over noisy texts. By noisy, we mean any text obtained through an extraction process (affected by errors) from media other than digital texts (e.g., transcriptions of speech recordings extracted with a recognition system). The performance of a categorization system over the clean and noisy (Word Error Rate between approximately 10 and approximately 50 percent) versions of the same documents is compared. The noisy texts are obtained through handwriting recognition and simulation of optical character recognition. The results show that the performance loss is acceptable for Recall values up to 60-70 percent depending on the noise sources. New measures of the extraction process performance, allowing a better explanation of the categorization results, are proposed. PMID:16355657
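
    For reference, Word Error Rate is the word-level edit distance between a reference transcript and the extracted text, normalized by the reference length. A generic sketch of that computation (not the paper's evaluation code):

```python
# Word error rate via the standard edit-distance recurrence over words.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = min edits to turn the first i ref words into first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sag on mat"))  # 2/6 ≈ 0.33
```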

  3. The Interplay between Automatic and Control Processes in Reading.

    ERIC Educational Resources Information Center

    Walczyk, Jeffrey J.

    2000-01-01

    Reviews prominent reading theories in light of their accounts of how automatic and control processes combine to produce successful text comprehension, and the trade-offs between the two. Presents the Compensatory-Encoding Model of reading, which explicates how, when, and why automatic and control processes interact. Notes important educational…

  4. XTRN - Automatic Code Generator For C Header Files

    NASA Technical Reports Server (NTRS)

    Pieniazek, Lester A.

    1990-01-01

    Computer program XTRN, Automatic Code Generator for C Header Files, generates "extern" declarations for all globally visible identifiers contained in input C-language code. Generates external declarations by parsing input text according to syntax derived from C. Automatically provides consistent and up-to-date "extern" declarations and alleviates tedium and errors involved in manual approach. Written in C and Unix Shell.
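
    Illustrative only: a naive Python re-creation of the idea (XTRN itself is written in C and Unix Shell and parses C properly; the regex below handles only simple declarations and is an invented approximation):

```python
# Naive sketch: scan C source for file-scope definitions and emit
# matching "extern" declarations. Not XTRN's actual parser.
import re

c_source = """
int counter = 0;
double scale = 1.5;
static int hidden = 3;      /* static: not globally visible */
char name[16];
"""

pattern = re.compile(
    r"^(?!static)\s*(int|double|char|float|long)"
    r"\s+([A-Za-z_]\w*(?:\[\d+\])?)\s*(?:=[^;]+)?;",
    re.MULTILINE,
)

for ctype, ident in pattern.findall(c_source):
    print(f"extern {ctype} {ident};")
# extern int counter;
# extern double scale;
# extern char name[16];
```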

  5. Disciplinary Variation in Automatic Sublanguage Term Identification.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1997-01-01

    Describes a method for automatically identifying sublanguage (SL) domain terms and revealing the patterns in which they occur in text. By applying this method to abstracts from a variety of disciplines, differences in how SL domain terminology occurs can be discerned. Findings indicate relatively consistent differences between the hard sciences…

  6. Automaticity of Conceptual Magnitude

    PubMed Central

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object’s conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggested that different types of magnitude processing and representation share the same core system. PMID:26879153

  7. Brut: Automatic bubble classifier

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher; Goodman, Alyssa; Williams, Jonathan; Kendrew, Sarah; Simpson, Robert

    2014-07-01

    Brut, written in Python, identifies bubbles in infrared images of the Galactic midplane; it uses a database of known bubbles from the Milky Way Project and Spitzer images to build an automatic bubble classifier. The classifier is based on the Random Forest algorithm, and uses the WiseRF implementation of this algorithm.

  8. Automaticity of Conceptual Magnitude.

    PubMed

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object's conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggested that different types of magnitude processing and representation share the same core system. PMID:26879153

  9. Reactor component automatic grapple

    DOEpatents

    Greenaway, Paul R.

    1982-01-01

    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment.

  10. Automatic Transmission Vehicle Injuries

    PubMed Central

    Fidler, Malcolm

    1973-01-01

    Four drivers sustained severe injuries when run down by their own automatic cars while adjusting the carburettor or throttle linkages. The transmission had been left in the “Drive” position and the engine was idling. This accident is easily avoidable. PMID:4695693

  11. Exploring Automatization Processes.

    ERIC Educational Resources Information Center

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  12. Automatic multiple applicator electrophoresis

    NASA Technical Reports Server (NTRS)

    Grunbaum, B. W.

    1977-01-01

    Easy-to-use, economical device permits electrophoresis on all known supporting media. System includes automatic multiple-sample applicator, sample holder, and electrophoresis apparatus. System has potential applicability to fields of taxonomy, immunology, and genetics. Apparatus is also used for electrofocusing.

  13. Automatic Program Synthesis Reports.

    ERIC Educational Resources Information Center

    Biermann, A. W.; And Others

    Some of the major results of future goals of an automatic program synthesis project are described in the two papers that comprise this document. The first paper gives a detailed algorithm for synthesizing a computer program from a trace of its behavior. Since the algorithm involves a search, the length of time required to do the synthesis of…

  14. Automatic channel switching device

    NASA Technical Reports Server (NTRS)

    Ball, M.; Olnowich, H. T.

    1967-01-01

    Automatic channel switching device operates with all three triple modular redundant channels when there are no errors. When a failure occurs, channel and module switching isolate the failure to a specific channel. Since only one must operate correctly, reliability is increased.

  15. Automatic sweep circuit

    DOEpatents

    Keefe, Donald J. (Lemont, IL)

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.

  16. AUTOmatic Message PACKing Facility

    Energy Science and Technology Software Center (ESTSC)

    2004-07-01

    AUTOPACK is a library that provides several useful features for programs using the Message Passing Interface (MPI). Features included are: (1) automatic message packing facility; (2) management of send and receive requests; (3) management of message buffer memory; (4) determination of the number of anticipated messages from a set of arbitrary sends; and (5) deterministic message delivery for testing purposes.

  17. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    Fully-automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. Machine consists of temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry controls, and indicators.

  18. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
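
    The symbolic derivation can be illustrated on a much simpler case than the paper's quadratic elements: exact computation of the stiffness matrix for a two-node linear 1D bar element (an assumed textbook example, not the system described):

```python
# Exact symbolic derivation of a 1D bar element stiffness matrix.
import sympy as sp

x, L, E, A = sp.symbols("x L E A", positive=True)

# Linear shape functions and the strain-displacement matrix B = dN/dx
N = sp.Matrix([[1 - x / L, x / L]])
B = N.diff(x)

# k = integral over the element of B^T * (E*A) * B
k = (E * A * B.T * B).integrate((x, 0, L))
sp.pprint(sp.simplify(k))   # (E*A/L) * [[1, -1], [-1, 1]]
```

    Emitting this result as FORTRAN source is then a mechanical code-generation step of the kind the paper describes.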

  19. The earliest medical texts.

    PubMed

    Frey, E F

    The first civilization known to have had an extensive study of medicine and to leave written records of its practices and procedures was that of ancient Egypt. The oldest extant Egyptian medical texts are six papyri from the period between 2000 B.C. and 1500 B.C.: the Kahun Medical Papyrus, the Ramesseum IV and Ramesseum V Papyri, the Edwin Smith Surgical Papyrus, The Ebers Medical Papyrus and the Hearst Medical Papyrus. These texts, most of them based on older texts dating possibly from 3000 B.C., are comparatively free of the magician's approach to treating illness. Egyptian medicine influenced the medicine of neighboring cultures, including the culture of ancient Greece. From Greece, its influence spread onward, thereby affecting Western civilization significantly. PMID:2463895

  20. Text Exchange System

    NASA Technical Reports Server (NTRS)

    Snyder, W. V.; Hanson, R. J.

    1986-01-01

    Text Exchange System (TES) exchanges and maintains organized textual information including source code, documentation, data, and listings. System consists of two computer programs and definition of format for information storage. Comprehensive program used to create, read, and maintain TES files. TES developed to meet three goals: First, easy and efficient exchange of programs and other textual data between similar and dissimilar computer systems via magnetic tape. Second, provide transportable management system for textual information. Third, provide common user interface, over wide variety of computing systems, for all activities associated with text exchange.

  1. Taming the Wild Text

    ERIC Educational Resources Information Center

    Allyn, Pam

    2012-01-01

    As a well-known advocate for promoting wider reading and reading engagement among all children--and founder of a reading program for foster children--Pam Allyn knows that struggling readers often face any printed text with fear and confusion, like Max in the book Where the Wild Things Are. She argues that teachers need to actively create a…

  2. Text as Image.

    ERIC Educational Resources Information Center

    Woal, Michael; Corn, Marcia Lynn

    As electronically mediated communication becomes more prevalent, print is regaining the original pictorial qualities which graphemes (written signs) lost when primitive pictographs (or picture writing) and ideographs (simplified graphemes used to communicate ideas as well as to represent objects) evolved into first written, then printed, texts of…

  3. Taming the Wild Text

    ERIC Educational Resources Information Center

    Allyn, Pam

    2012-01-01

    As a well-known advocate for promoting wider reading and reading engagement among all children--and founder of a reading program for foster children--Pam Allyn knows that struggling readers often face any printed text with fear and confusion, like Max in the book Where the Wild Things Are. She argues that teachers need to actively create a…

  4. Polymorphous Perversity in Texts

    ERIC Educational Resources Information Center

    Johnson-Eilola, Johndan

    2012-01-01

    Here's the tricky part: If we teach ourselves and our students that texts are made to be broken apart, remixed, remade, do we lose the polymorphous perversity that brought us pleasure in the first place? Does the pleasure of transgression evaporate when the borders are opened?

  5. Formalization and separation: A systematic basis for interpreting approaches to summarizing science for climate policy.

    PubMed

    Sundqvist, Göran; Bohlin, Ingemar; Hermansen, Erlend A T; Yearley, Steven

    2015-06-01

    In studies of environmental issues, the question of how to establish a productive interplay between science and policy is widely debated, especially in relation to climate change. The aim of this article is to advance this discussion and contribute to a better understanding of how science is summarized for policy purposes by bringing together two academic discussions that usually take place in parallel: the question of how to deal with formalization (structuring the procedures for assessing and summarizing research, e.g. by protocols) and separation (maintaining a boundary between science and policy in processes of synthesizing science for policy). Combining the two dimensions, we draw a diagram onto which different initiatives can be mapped. A high degree of formalization and separation are key components of the canonical image of scientific practice. Influential Science and Technology Studies analysts, however, are well known for their critiques of attempts at separation and formalization. Three examples that summarize research for policy purposes are presented and mapped onto the diagram: the Intergovernmental Panel on Climate Change, the European Union's Science for Environment Policy initiative, and the UK Committee on Climate Change. These examples bring out salient differences concerning how formalization and separation are dealt with. Discussing the space opened up by the diagram, as well as the limitations of the attraction to its endpoints, we argue that policy analyses, including much Science and Technology Studies work, are in need of a more nuanced understanding of the two crucial dimensions of formalization and separation. Accordingly, two analytical claims are presented, concerning trajectories, how organizations represented in the diagram move over time, and mismatches, how organizations fail to handle the two dimensions well in practice. PMID:26477199

  6. Automatic inference of indexing rules for MEDLINE

    PubMed Central

    Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent

    2008-01-01

    Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI. PMID:19025687
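
    The induced rules themselves are not given in the abstract; the Python sketch below only illustrates the general form that symbolic indexing rules take ("if these features occur, recommend this heading"), with hypothetical conditions and headings.

```python
# Hypothetical rules of the kind an ILP system might induce from training data.
RULES = [
    (lambda doc: "infarction" in doc["title"], "Myocardial Infarction"),
    (lambda doc: "mice" in doc["abstract"] and "tumor" in doc["abstract"], "Neoplasms"),
    (lambda doc: "randomized" in doc["abstract"], "Randomized Controlled Trials as Topic"),
]

def recommend_headings(doc):
    """Fire every rule whose condition holds and collect its heading."""
    return [heading for condition, heading in RULES if condition(doc)]

doc = {"title": "acute infarction outcomes",
       "abstract": "a randomized trial of reperfusion therapy"}
print(recommend_headings(doc))   # both the infarction and randomized rules fire
```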

  7. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
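
    A minimal Python sketch of the spatially addressable quadtree underlying the 2-D case, with a hypothetical refinement criterion standing in for the error-evaluation step.

```python
class QuadNode:
    """A cell in a spatially addressable quadtree over a rectangle."""
    def __init__(self, x, y, w, h, depth=0):
        self.x, self.y, self.w, self.h, self.depth = x, y, w, h, depth
        self.children = []                     # empty means leaf

    def subdivide(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [QuadNode(self.x + dx, self.y + dy, hw, hh, self.depth + 1)
                         for dx in (0, hw) for dy in (0, hh)]

    def refine(self, needs_refinement, max_depth=6):
        """Recursively split cells flagged by a criterion, up to max_depth."""
        if self.depth < max_depth and needs_refinement(self):
            self.subdivide()
            for child in self.children:
                child.refine(needs_refinement, max_depth)

    def leaves(self):
        return [self] if not self.children else \
               [leaf for c in self.children for leaf in c.leaves()]

# Refine toward a feature near the origin (hypothetical error indicator).
root = QuadNode(0.0, 0.0, 1.0, 1.0)
root.refine(lambda cell: (cell.x ** 2 + cell.y ** 2) ** 0.5 < 0.25)
print(len(root.leaves()), "leaf cells")
```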

  8. How to Summarize a 6,000-Word Paper in a Six-Minute Video Clip

    PubMed Central

    Vachon, Patrick; Daudelin, Genevieve; Hivon, Myriam

    2013-01-01

    As part of our research team's knowledge transfer and exchange (KTE) efforts, we created a six-minute video clip that summarizes, in plain language, a scientific paper that describes why and how three teams of academic entrepreneurs developed new health technologies. Recognizing that video-based KTE strategies can be a valuable tool for health services and policy researchers, this paper explains the constraints and sources of inspiration that shaped our video production process. Aiming to provide practical guidance, we describe the steps and tools that we used to identify, refine and package the key content of the scientific paper into an original video format. PMID:23968634

  9. Health information text characteristics.

    PubMed

    Leroy, Gondy; Eryilmaz, Evren; Laroya, Benjamin T

    2006-01-01

    Millions of people search online for medical text, but these texts are often too complicated to understand. Readability evaluations are mostly based on surface metrics such as character or word counts and sentence syntax, but content is ignored. We compared four types of documents: easy and difficult WebMD documents, patient blogs, and patient educational material, for surface and content-based metrics. The documents differed significantly in reading grade levels and vocabulary used. WebMD pages with high readability also used terminology that was more consumer-friendly. Moreover, difficult documents are harder to understand due to their grammar and word choice and because they discuss more difficult topics. This indicates that we can simplify many documents by focusing on word choice in addition to sentence structure; however, for difficult documents this may be insufficient. PMID:17238387

  10. The Texting Principal

    ERIC Educational Resources Information Center

    Kessler, Susan Stone

    2009-01-01

    The author was appointed principal of a large, urban comprehensive high school in spring 2008. One of the first things she had to figure out was how she would develop a connection with her students when there were so many of them--nearly 2,000--and only one of her. Texts may be exchanged more quickly than having a conversation over the phone,…

  11. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
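
    A hedged sketch of the seeding idea with scikit-learn: initialize k-means centers from library spectra instead of random data points, so clusters align with expected materials. The data, spectra, and material names here are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_bands = 50

# Hypothetical spectral library: reference spectra for expected materials.
library = np.stack([np.linspace(0.1, 0.9, n_bands),       # "soil"
                    np.sin(np.linspace(0, 3, n_bands)),   # "vegetation"
                    np.full(n_bands, 0.5)])               # "rock"

# Synthetic image pixels: noisy variations around each library spectrum.
pixels = np.vstack([s + rng.normal(0, 0.05, (200, n_bands)) for s in library])

# Seed the clusterer with the library spectra (n_init=1 keeps the seeds).
km = KMeans(n_clusters=3, init=library, n_init=1).fit(pixels)
print(np.bincount(km.labels_))   # roughly 200 pixels per material
```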

  12. Happiness in texting times

    PubMed Central

    Hevey, David; Hand, Karen; MacLachlan, Malcolm

    2015-01-01

    Assessing national levels of happiness has become an important research and policy issue in recent years. We examined happiness and satisfaction in Ireland using phone text messaging to collect large-scale longitudinal data from 3,093 members of the general Irish population. For six consecutive weeks, participants’ happiness and satisfaction levels were assessed. For four consecutive weeks (weeks 2–5) a different random third of the sample got feedback on the previous week’s mean happiness and satisfaction ratings. Text messaging proved a feasible means of assessing happiness and satisfaction, with almost three quarters (73%) of participants completing all assessments. Those who received feedback on the previous week’s mean ratings were eight times more likely to complete the subsequent assessments than those not receiving feedback. Providing such feedback data on mean levels of happiness and satisfaction did not systematically bias subsequent ratings either toward or away from these normative anchors. Texting is a simple and effective means to collect population level happiness and satisfaction data. PMID:26441804

  13. Automatism and driving offences.

    PubMed

    Rumbold, John

    2013-10-01

    Automatism is a rarely used defence, but it is particularly used for driving offences because many are strict liability offences. Medical evidence is almost always crucial to argue the defence, and it is important to understand the bars that limit the use of automatism so that the important medical issues can be identified. The issue of prior fault is an important public safeguard to ensure that reasonable precautions are taken to prevent accidents. The total loss of control definition is more problematic, especially with disorders of more gradual onset like hypoglycaemic episodes. In these cases the alternative of 'effective loss of control' would be fairer. This article explores several cases, how the criteria were applied to each, and the types of medical assessment required. PMID:24112330

  14. Automatic carrier acquisition system

    NASA Technical Reports Server (NTRS)

    Bunce, R. C. (Inventor)

    1973-01-01

    An automatic carrier acquisition system for a phase locked loop (PLL) receiver is disclosed. It includes a local oscillator, which sweeps the receiver to tune across the carrier frequency uncertainty range until the carrier crosses the receiver IF reference. Such crossing is detected by an automatic acquisition detector. It receives the IF signal from the receiver as well as the IF reference. It includes a pair of multipliers which multiply the IF signal with the IF reference in phase and in quadrature. The outputs of the multipliers are filtered through bandpass filters and power detected. The output of the power detector has a signal dc component which is optimized with respect to the noise dc level by the selection of the time constants of the filters as a function of the sweep rate of the local oscillator.
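
    A numeric Python sketch of the quadrature detection step described above: multiply the IF signal by the IF reference in phase and in quadrature, low-pass filter (a moving average stands in for the bandpass filters), and power-detect. All frequencies and thresholds are hypothetical.

```python
import numpy as np

fs, f_if = 100_000, 10_000            # sample rate and IF reference (hypothetical)
t = np.arange(0, 0.01, 1 / fs)

# Received IF signal: near the reference frequency, noisy, with a phase offset.
rng = np.random.default_rng(2)
sig = np.cos(2 * np.pi * (f_if + 40) * t + 0.7) + rng.normal(0, 0.5, t.size)

# Multiply the IF signal with the IF reference in phase and in quadrature.
i_mix = sig * np.cos(2 * np.pi * f_if * t)
q_mix = sig * np.sin(2 * np.pi * f_if * t)

def smooth(x, n=200):
    """Crude low-pass filter standing in for the bandpass filters."""
    return np.convolve(x, np.ones(n) / n, mode="same")

power = smooth(i_mix) ** 2 + smooth(q_mix) ** 2         # power detection
print("carrier near reference:", power.mean() > 0.05)   # dc component rises near lock
```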

  15. Automatic transmission control method

    SciTech Connect

    Hasegawa, H.; Ishiguro, T.

    1989-07-04

    This patent describes a method of controlling an automatic transmission of an automotive vehicle. The transmission has a gear train which includes a brake for establishing a first lowest speed of the transmission, the brake acting directly on a ring gear which meshes with a pinion, the pinion meshing with a sun gear in a planetary gear train, the ring gear connected with an output member, the sun gear being engageable and disengageable with an input member of the transmission by means of a clutch. The method comprises the steps of: detecting that a shift position of the automatic transmission has been shifted to a neutral range; thereafter introducing hydraulic pressure to the brake if present vehicle velocity is below a predetermined value, whereby the brake is engaged to establish the first lowest speed; and exhausting hydraulic pressure from the brake if present vehicle velocity is higher than a predetermined value, whereby the brake is disengaged.
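
    Stripped of the hydraulic detail, the claimed method reduces to a threshold rule on vehicle velocity once the lever enters neutral; a minimal sketch, with a hypothetical threshold value:

```python
V_THRESHOLD_KMH = 10.0   # the claim's "predetermined value" (hypothetical number)

def on_shift_to_neutral(vehicle_speed_kmh, brake):
    """Apply the claimed rule once the shift position enters the neutral range."""
    if vehicle_speed_kmh < V_THRESHOLD_KMH:
        brake["hydraulic_pressure"] = "introduced"   # engage: hold first speed
    else:
        brake["hydraulic_pressure"] = "exhausted"    # disengage the brake

brake = {}
on_shift_to_neutral(4.0, brake)
print(brake)   # {'hydraulic_pressure': 'introduced'}
```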

  16. Automatic vehicle monitoring

    NASA Technical Reports Server (NTRS)

    Bravman, J. S.; Durrani, S. H.

    1976-01-01

    Automatic vehicle monitoring systems are discussed. In a baseline system for highway applications, each vehicle obtains position information through a Loran-C receiver in rural areas and through a 'signpost' or 'proximity' type sensor in urban areas; the vehicle transmits this information to a central station via a communication link. In an advance system, the vehicle carries a receiver for signals emitted by satellites in the Global Positioning System and uses a satellite-aided communication link to the central station. An advanced railroad car monitoring system uses car-mounted labels and sensors for car identification and cargo status; the information is collected by electronic interrogators mounted along the track and transmitted to a central station. It is concluded that automatic vehicle monitoring systems are technically feasible but not economically feasible unless a large market develops.

  17. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  18. Hysteroscopy video summarization and browsing by estimating the physician's attention on video segments.

    PubMed

    Gavião, Wilson; Scharcanski, Jacob; Frahm, Jan-Michael; Pollefeys, Marc

    2012-01-01

    Specialists often need to browse through libraries containing many diagnostic hysteroscopy videos searching for similar cases, or even to review the video of one particular case. Video searching and browsing can be used in many situations, like in case-based diagnosis when videos of previously diagnosed cases are compared, in case referrals, in reviewing the patient records, as well as for supporting medical research (e.g. in human reproduction). However, in terms of visual content, diagnostic hysteroscopy videos contain a large amount of information, but only a small number of frames are actually useful for diagnosis/prognosis purposes. In order to facilitate the browsing task, we propose in this paper a technique for estimating the clinical relevance of video segments in diagnostic hysteroscopies. Basically, the proposed technique associates clinical relevance with the attention attracted by a diagnostic hysteroscopy video segment during the video acquisition (i.e. during the diagnostic hysteroscopy conducted by a specialist). We show that the resulting video summary allows specialists to browse the video contents nonlinearly, while avoiding spending time on spurious visual information. In this work, we review state-of-the-art methods for summarizing general videos and how they apply to diagnostic hysteroscopy videos (considering their specific characteristics), and conclude that our proposed method contributes to the field with a summarization and representation method specific for video hysteroscopies. The experimental results indicate that our method tends to produce compact video summaries without discarding clinically relevant information. PMID:21920798

  19. Automatic digital image registration

    NASA Technical Reports Server (NTRS)

    Goshtasby, A.; Jain, A. K.; Enslin, W. R.

    1982-01-01

    This paper introduces a general procedure for automatic registration of two images which may have translational, rotational, and scaling differences. This procedure involves (1) segmentation of the images, (2) isolation of dominant objects from the images, (3) determination of corresponding objects in the two images, and (4) estimation of transformation parameters using the center of gravities of objects as control points. An example is given which uses this technique to register two images which have translational, rotational, and scaling differences.
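
    Step (4) amounts to a least-squares similarity fit between corresponding centers of gravity; a Python sketch using the classic Procrustes/Kabsch construction (correspondences assumed already established by step (3); the reflection case is omitted for brevity):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares scale, rotation, translation mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)     # SVD of the cross-covariance
    R = U @ Vt                              # optimal rotation (no reflection handling)
    scale = sig.sum() / (S ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Centers of gravity of dominant objects in image 1 and their matches in image 2.
src = np.array([[10., 20.], [40., 25.], [30., 60.], [70., 80.]])
th = np.deg2rad(15)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = 1.3 * src @ R_true.T + np.array([5., -8.])

scale, R, t = fit_similarity(src, dst)
print(round(scale, 3))                      # ~1.3
```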

  20. Automatic thermal switches

    NASA Technical Reports Server (NTRS)

    Cunningham, J. W.; Wing, L. D.

    1980-01-01

    Two automatic switches control heat flow from one thermally conductive plate to another. One switch permits heat flow to outside; other limits heat flow. In one switch, heat on conductive plate activates piston that forces saddle against plate. Heat carriers then conduct heat to second plate that radiates it away. After temperature of first plate drops, piston contracts and spring breaks thermal contact with plate. In second switch, action is reversed.

  1. Linguistically informed digital fingerprints for text

    NASA Astrophysics Data System (ADS)

    Uzuner, Özlem

    2006-02-01

    Digital fingerprinting, watermarking, and tracking technologies have gained importance in the recent years in response to growing problems such as digital copyright infringement. While fingerprints and watermarks can be generated in many different ways, use of natural language processing for these purposes has so far been limited. Measuring similarity of literary works for automatic copyright infringement detection requires identifying and comparing creative expression of content in documents. In this paper, we present a linguistic approach to automatically fingerprinting novels based on their expression of content. We use natural language processing techniques to generate "expression fingerprints". These fingerprints consist of both syntactic and semantic elements of language, i.e., syntactic and semantic elements of expression. Our experiments indicate that syntactic and semantic elements of expression enable accurate identification of novels and their paraphrases, providing a significant improvement over techniques used in text classification literature for automatic copy recognition. We show that these elements of expression can be used to fingerprint, label, or watermark works; they represent features that are essential to the character of works and that remain fairly consistent in the works even when works are paraphrased. These features can be directly extracted from the contents of the works on demand and can be used to recognize works that would not be correctly identified either in the absence of pre-existing labels or by verbatim-copy detectors.

  2. TRMM Gridded Text Products

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2007-01-01

    NASA's Tropical Rainfall Measuring Mission (TRMM) has many products that contain instantaneous or gridded rain rates, often among many other parameters. However, these products, because of their completeness, can often seem intimidating to users who just want surface rain rates. For example, one of the gridded monthly products contains well over 200 parameters. It is clear that if only rain rates are desired, this many parameters might prove intimidating. In addition, for many good reasons these products are archived and currently distributed in HDF format. This also can be an inhibiting factor in using TRMM rain rates. To provide a simple format and isolate just the rain rates from the many other parameters, the TRMM project created a series of gridded products in ASCII text format. This paper describes the various text rain rate products produced. It provides detailed information about parameters and how they are calculated. It also gives detailed format information. These products are used in a number of applications within the TRMM processing system. The products are produced from the swath instantaneous rain rates and contain information from the three major TRMM instruments: radar, radiometer, and combined. They are simple to use, human readable, and small for downloading.

  3. Recognizing musical text

    NASA Astrophysics Data System (ADS)

    Clarke, Alastair T.; Brown, B. M.; Thorne, M. P.

    1993-08-01

    This paper reports on some recent developments in a software product that recognizes printed music notation. There are a number of computer systems available which assist in the task of printing music; however the full potential of these systems cannot be realized until the musical text has been entered into the computer. It is this problem that we address in this paper. The software we describe, which uses computationally inexpensive methods, is designed to analyze a music score, previously read by a flat bed scanner, and to extract the musical information that it contains. The paper discusses the methods used to recognize the musical text: these involve sampling the image at strategic points and using this information to estimate the musical symbol. It then discusses some hard problems that have been encountered during the course of the research; for example the recognition of chords and note clusters. It also reports on the progress that has been made in solving these problems and concludes with a discussion of work that needs to be undertaken over the next five years in order to transform this research prototype into a commercial product.

  4. Robust Text Detection in Natural Scene Images.

    PubMed

    Yin, Xu-Cheng; Yin, Xuwang; Huang, Kaizhu; Hao, Hong-Wei

    2013-09-26

    Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f measure is over 76%, much better than the state-of-the-art performance of 71%. Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method. Finally, an online demo of our proposed scene text detection system has been set up at http://kems.ustb.edu.cn/learning/yin/dtext. PMID:24080709
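
    The first stage (MSER extraction of character candidates) can be sketched with OpenCV's built-in detector; the paper's learned pruning, single-link clustering, and classifier stages are omitted here and replaced by a crude size filter.

```python
import cv2
import numpy as np

# Synthetic scene: dark text-like strokes on a bright background.
img = np.full((120, 320), 230, np.uint8)
cv2.putText(img, "SCENE TEXT", (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 20, 3)

# Extract Maximally Stable Extremal Regions as character candidates.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

# Crude stand-in for the paper's learned pruning: keep character-sized boxes.
chars = [(x, y, w, h) for (x, y, w, h) in bboxes if 5 < w < 60 and 10 < h < 60]
print(len(chars), "character candidates")
```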

  5. Evidence Summarized in Attorneys' Closing Arguments Predicts Acquittals in Criminal Trials of Child Sexual Abuse

    PubMed Central

    Stolzenberg, Stacia N.; Lyon, Thomas D.

    2014-01-01

    Evidence summarized in attorneys' closing arguments of criminal child sexual abuse cases (N = 189) was coded to predict acquittal rates. Ten variables were significant bivariate predictors; five variables significant at p < .01 were entered into a multivariate model. Cases were likely to result in an acquittal when the defendant was not charged with force, the child maintained contact with the defendant after the abuse occurred, or the defense presented a hearsay witness regarding the victim's statements, a witness regarding the victim's character, or a witness regarding another witness's character (usually the mother). The findings suggest that jurors might believe that child molestation is akin to a stereotype of violent rape and that they may be swayed by defense challenges to the victim's credibility and the credibility of those close to the victim. PMID:24920247

  6. Editing cues for content-based analysis and summarization of motion pictures

    NASA Astrophysics Data System (ADS)

    Ferman, Ahmet M.; Tekalp, A. Murat

    1997-12-01

    This paper introduces techniques that exploit common film editing practices to perform content-based analysis and summarization of video programs. By observing certain editing conventions we determine the intended associations between shots that constitute a coherent sequence, and utilize this information to generate meaningful semantic decompositions of streams. Dynamic composition, shot pacing, motion continuity, and shot transitions are among the editing tools that we consider for high-level analysis. We also develop techniques for detecting establishing shots in a video program, and demonstrate how they can be used for efficient query processing and summary generation. The proposed framework facilitates such queries as finding shots that occur at the same location or within the same time frame; it also provides a powerful tool for semi-automated EDL and script generation.

  7. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube, and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video was recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems. PMID:24051778

  8. Effects of learning structure and summarization during computer-based instruction

    NASA Astrophysics Data System (ADS)

    Werner, Lynn

    The purpose of this study was to investigate the effects of learning strategy and summarization within a computer-based chemistry and physics program. Students worked individually or in cooperative dyads to complete science instruction; half of them completed summaries over the instructional content when directed to do so. The study examined the effects of learning strategy and summarization on posttest and enroute performance, attitude, time-on-task, and interaction behaviors. Results indicated no significant differences for posttest performance. Results for enroute performance indicated that practice scores for students who did not write summaries were significantly higher than for those who wrote summaries. Enroute results did not indicate a significant difference between those working in cooperative dyads and those working alone. Results for time-on-task indicated a significant interaction between learning strategy and summary condition. Students in the cooperative-no summary condition spent significantly more time on practice than those in the cooperative-summary condition. Furthermore, subjects in the individual-no summary condition spent significantly more time on practice than those in the cooperative-summary condition. Attitudes toward the computer-based program were generally positive. Students expressed positive attitudes toward the interactive exercises, the atomic animations and the on-line, interactive periodic table. Attitude scores also showed that students expressed positive feelings about the particular learning strategy to which they were assigned. Results from the study also indicated that students in the two cooperative conditions interacted together in somewhat different ways. Dyads in the summary condition exhibited significantly more helping behaviors and task-related behaviors than dyads in the no summary condition. The results of this study have implications for the design of computer-based instruction and the use of this medium with cooperative learning strategies.

  9. Robust Text Detection in Natural Scene Images.

    PubMed

    Yin, Xu-Cheng; Yin, Xuwang; Huang, Kaizhu; Hao, Hong-Wei

    2014-05-01

    Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76%, much better than the state-of-the-art performance of 71%. Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method. PMID:26353230

  10. Identifying discourse connectives in biomedical text.

    PubMed

    Ramesh, Balaji Polepalli; Yu, Hong

    2010-01-01

    Discourse connectives are words or phrases that connect or relate two coherent sentences or phrases and indicate the presence of discourse relations. Automatic recognition of discourse connectives may benefit many natural language processing applications. In this pilot study, we report the development of the supervised machine-learning classifiers with conditional random fields (CRFs) for automatically identifying discourse connectives in full-text biomedical articles. Our first classifier was trained on the open-domain 1 million token Penn Discourse Tree Bank (PDTB). We performed cross validation on biomedical articles (approximately 100K word tokens) that we annotated. The results show that the classifier trained on PDTB data attained a 0.55 F1-score for identifying discourse connectives in biomedical text, while the cross-validation results in the biomedical text attained a 0.69 F1-score, a much better performance despite a much smaller training size. Our preliminary analysis suggests the existence of domain-specific features, and we speculate that domain-adaption approaches may further improve performance. PMID:21347060
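
    A hedged sketch of a CRF sequence labeler for connective detection, using the third-party sklearn-crfsuite package (not the authors' system), with toy features and two toy training sentences.

```python
# pip install sklearn-crfsuite   (third-party CRF wrapper, assumed available)
import sklearn_crfsuite

def token_features(sent, i):
    """Minimal per-token features; a real system adds POS tags, context, etc."""
    w = sent[i]
    return {"lower": w.lower(), "is_title": w.istitle(),
            "prev": sent[i - 1].lower() if i else "<s>"}

# Toy data: B-CONN marks a discourse connective, O marks everything else.
sents = [["However", ",", "the", "cells", "died"],
         ["The", "assay", "failed", "because", "of", "contamination"]]
labels = [["B-CONN", "O", "O", "O", "O"],
          ["O", "O", "O", "B-CONN", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X)[0])   # expect 'However' tagged B-CONN on training data
```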

  11. Recognition as Translating Images into Text

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Duygulu, Pinar; Forsyth, David A.

    2003-01-01

    We present an overview of a new paradigm for tackling long-standing computer vision problems. Specifically, our approach is to build statistical models which translate from visual representations (images) to semantic ones (associated text). As providing optimal text for training is difficult at best, we propose working with whatever associated text is available in large quantities. Examples include large image collections with keywords, museum image collections with descriptive text, news photos, and images on the web. In this paper we discuss how the translation approach can give a handle on difficult questions such as: What counts as an object? Which objects are easy to recognize and which are hard? Which objects are indistinguishable using our features? How to integrate low-level vision processes such as feature-based segmentation, with high-level processes such as grouping. We also summarize some of the models proposed for translating from visual information to text, and some of the methods used to evaluate their performance.

  12. Mining for Surprise Events within Text Streams

    SciTech Connect

    Whitney, Paul D.; Engel, David W.; Cramer, Nicholas O.

    2009-04-30

    This paper summarizes algorithms and analysis methodology for mining the evolving content in text streams. Text streams include news, press releases from organizations, speeches, Internet blogs, etc. These data are a fundamental source for detecting and characterizing strategic intent of individuals and organizations as well as for detecting abrupt or surprising events within communities. Specifically, an analyst may need to know if and when the topic within a text stream changes. Much of the current text feature methodology is focused on understanding and analyzing a single static collection of text documents. Corresponding analytic activities include summarizing the contents of the collection, grouping the documents based on similarity of content, and calculating concise summaries of the resulting groups. The approach reported here focuses on taking advantage of the temporal characteristics in a text stream to identify relevant features (such as change in content), and also on the analysis and algorithmic methodology to communicate these characteristics to a user. We present a variety of algorithms for detecting essential features within a text stream. A critical finding is that the characteristics used to identify features in a text stream are uncorrelated with the characteristics used to identify features in a static document collection. Our approach for communicating the information back to the user is to identify feature (word/phrase) groups. These resulting algorithms form the basis of developing software tools for a user to analyze and understand the content of text streams. We present analysis using both news information and abstracts from technical articles, and show how these algorithms provide understanding of the contents of these text streams.
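
    One simple baseline for the "has the topic changed?" question (far simpler than the paper's methods): compare TF-IDF vectors of consecutive time windows and flag a sharp drop in similarity. The windows and threshold below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each entry is the concatenated text of one time window of the stream.
windows = [
    "markets rallied as earnings beat expectations across the tech sector",
    "tech stocks extended gains on strong quarterly earnings reports",
    "a magnitude six earthquake struck the coastal region overnight",
]

vecs = TfidfVectorizer().fit_transform(windows)
for i in range(1, len(windows)):
    sim = cosine_similarity(vecs[i - 1], vecs[i])[0, 0]
    if sim < 0.1:          # hypothetical "surprise" threshold
        print(f"possible topic shift at window {i} (similarity {sim:.2f})")
```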

  13. Reading Text While Driving

    PubMed Central

    Horrey, William J.; Hoffman, Joshua D.

    2015-01-01

    Objective In this study, we investigated how drivers adapt secondary-task initiation and time-sharing behavior when faced with fluctuating driving demands. Background Reading text while driving is particularly detrimental; however, in real-world driving, drivers actively decide when to perform the task. Method In a test track experiment, participants were free to decide when to read messages while driving along a straight road consisting of an area with increased driving demands (demand zone) followed by an area with low demands. A message was made available shortly before the vehicle entered the demand zone. We manipulated the type of driving demands (baseline, narrow lane, pace clock, combined), message format (no message, paragraph, parsed), and the distance from the demand zone when the message was available (near, far). Results In all conditions, drivers started reading messages (drivers’ first glance to the display) before entering or before leaving the demand zone but tended to wait longer when faced with increased driving demands. While reading messages, drivers looked more or less off road, depending on types of driving demands. Conclusions For task initiation, drivers avoid transitions from low to high demands; however, they are not discouraged when driving demands are already elevated. Drivers adjust time-sharing behavior according to driving demands while performing secondary tasks. Nonetheless, such adjustment may be less effective when total demands are high. Application This study helps us to understand a driver’s role as an active controller in the context of distracted driving and provides insights for developing distraction interventions. PMID:25850162

  14. Motor automaticity in Parkinson's disease.

    PubMed

    Wu, Tao; Hallett, Mark; Chan, Piu

    2015-10-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  15. AUTOMATIC FREQUENCY CONTROL SYSTEM

    DOEpatents

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.

  16. Automatic level control circuit

    NASA Technical Reports Server (NTRS)

    Toole, P. C.; Mccarthy, D. M. (Inventor)

    1983-01-01

    An automatic level control circuit is provided for an operational amplifier, minimizing spikes or instantaneous gain of the amplifier during low periods when no signal is received on the input. The apparatus includes a multibranch circuit which is connected between an output terminal and a feedback terminal. A pair of zener diodes are connected back to back in series with a capacitor in one of the branches. A pair of voltage-dividing resistors are connected in another of the branches, and a second capacitor is provided in the remaining branch for controlling the high-frequency oscillations of the operational amplifier.

  17. [The effect of reading tasks on learning from multiple texts].

    PubMed

    Kobayashi, Keiichi

    2014-06-01

    This study examined the effect of reading tasks on the integration of content and source information from multiple texts. Undergraduate students (N = 102) read five newspaper articles about a fictitious incident in either a summarization task condition or an evaluation task condition. Then, they performed an integration test and a source choice test, which assessed their understanding of a situation described in the texts and memory for the sources of text information. The results indicated that the summarization and evaluation task groups were not significantly different in situational understanding. However, the summarization task group significantly surpassed the evaluation task group for source memory. No significant correlation between the situational understanding and the source memory was found for the summarization group, whereas a significant positive correlation was found for the evaluation group. The results are discussed in terms of the documents model framework. PMID:25016841

  18. Automatic readout micrometer

    DOEpatents

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  19. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  20. Development of a Summarized Health Index (SHI) for Use in Predicting Survival in Sea Turtles

    PubMed Central

    Li, Tsung-Hsien; Chang, Chao-Chin; Cheng, I-Jiunn; Lin, Suen-Chuain

    2015-01-01

    Veterinary care plays an influential role in sea turtle rehabilitation, especially for endangered species. Physiological characteristics, such as hematological and plasma biochemistry profiles, are useful references for clinical management, especially during the convalescence period. In this study, factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significant negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis by a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index to evaluate the health status of sea turtles and to improve veterinary care at rehabilitation facilities. PMID:25803431

  1. Summarizing polygenic risks for complex diseases in a clinical whole genome report

    PubMed Central

    Kong, Sek Won; Lee, In-Hee; Leschiner, Ignaty; Krier, Joel; Kraft, Peter; Rehm, Heidi L.; Green, Robert C.; Kohane, Isaac S.; MacRae, Calum A.

    2015-01-01

    Purpose Disease-causing mutations and pharmacogenomic variants are of primary interest for clinical whole-genome sequencing. However, estimating genetic liability for common complex diseases using established risk alleles might one day prove clinically useful. Methods We compared polygenic scoring methods using a case-control data set with independently discovered risk alleles in the MedSeq Project. For eight traits of clinical relevance in both the primary-care and cardiomyopathy study cohorts, we estimated multiplicative polygenic risk scores using 161 published risk alleles and then normalized using the population median estimated from the 1000 Genomes Project. Results Our polygenic score approach identified the overrepresentation of independently discovered risk alleles in cases as compared with controls using a large-scale genome-wide association study data set. In addition to normalized multiplicative polygenic risk scores and rank in a population, the disease prevalence and proportion of heritability explained by known common risk variants provide important context in the interpretation of modern multilocus disease risk models. Conclusion Our approach in the MedSeq Project demonstrates how complex trait risk variants from an individual genome can be summarized and reported for the general clinician and also highlights the need for definitive clinical studies to obtain reference data for such estimates and to establish clinical utility. PMID:25341114
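
    A numeric sketch of the scoring recipe as described (a multiplicative score over risk alleles, normalized by the population median); the odds ratios, allele frequencies, and genotypes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical published risk alleles: odds ratios and population frequencies.
odds_ratios = np.array([1.12, 1.25, 1.08, 1.40, 1.15])
freqs = np.array([0.30, 0.10, 0.45, 0.05, 0.20])

def multiplicative_prs(dosages):
    """Product of OR**dosage over risk alleles (dosage = 0, 1, or 2 copies)."""
    return np.prod(odds_ratios ** dosages)

# Simulate a reference population to estimate the median score.
population = rng.binomial(2, freqs, size=(10_000, freqs.size))
median_score = np.median([multiplicative_prs(p) for p in population])

patient = np.array([1, 0, 2, 0, 1])              # one individual's genotypes
print(round(multiplicative_prs(patient) / median_score, 3))   # normalized score
```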

  2. Development of a Summarized Health Index (SHI) for use in predicting survival in sea turtles.

    PubMed

    Li, Tsung-Hsien; Chang, Chao-Chin; Cheng, I-Jiunn; Lin, Suen-Chuain

    2015-01-01

    Veterinary care plays an influential role in sea turtle rehabilitation, especially for endangered species. Physiological characteristics, such as hematological and plasma biochemistry profiles, are useful references for clinical management, especially during the convalescence period. In this study, factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significant negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis by a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index to evaluate the health status of sea turtles and to improve veterinary care at rehabilitation facilities. PMID:25803431

  3. Automatic document navigation for digital content remastering

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Simske, Steven J.

    2003-12-01

    This paper presents a novel method of automatically adding navigation capabilities to re-mastered electronic books. We first analyze the need for a generic and robust system to automatically construct navigation links into re-mastered books. We then introduce the core algorithm based on text matching for building the links. The proposed method utilizes the tree-structured dictionary and directional graph of the table of contents to efficiently conduct the text matching. Information fusion further increases the robustness of the algorithm. The experimental results on the MIT Press digital library project are discussed and the key functional features of the system are illustrated. We have also investigated how the quality of the OCR engine affects the linking algorithm. In addition, the analogy between this work and Web link mining has been pointed out.
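
    The core linking step (matching a table-of-contents entry against noisy OCR'd body text) can be sketched with approximate string matching from the standard library; the paper's tree-structured dictionary, directional graph, and information fusion are omitted.

```python
import difflib

# OCR'd table-of-contents entries and candidate heading lines from the body.
toc_entries = ["Chapter 1: The Rise of Digital Libraries",
               "Chapter 2: Remastering Legacy Content"]
body_lines = ["Chaptor 1: The Rise of Digital Librarles",   # OCR noise
              "Some ordinary paragraph text...",
              "Chapter 2: Remastering Legacy Content"]

for entry in toc_entries:
    match = difflib.get_close_matches(entry, body_lines, n=1, cutoff=0.8)
    if match:
        print(f"link: {entry!r} -> body line {body_lines.index(match[0])}")
```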

  4. Text Mining the History of Medicine.

    PubMed

    Thompson, Paul; Batista-Navarro, Riza Theresa; Kontonatsios, Georgios; Carter, Jacob; Toon, Elizabeth; McNaught, John; Timmermann, Carsten; Worboys, Michael; Ananiadou, Sophia

    2016-01-01

    Historical text archives constitute a rich and diverse source of information, which is becoming increasingly readily accessible, due to large-scale digitisation efforts. However, it can be difficult for researchers to explore and search such large volumes of data in an efficient manner. Text mining (TM) methods can help, through their ability to recognise various types of semantic information automatically, e.g., instances of concepts (places, medical conditions, drugs, etc.), synonyms/variant forms of concepts, and relationships holding between concepts (which drugs are used to treat which medical conditions, etc.). TM analysis allows search systems to incorporate functionality such as automatic suggestions of synonyms of user-entered query terms, exploration of different concepts mentioned within search results or isolation of documents in which concepts are related in specific ways. However, applying TM methods to historical text can be challenging, according to differences and evolutions in vocabulary, terminology, language structure and style, compared to more modern text. In this article, we present our efforts to overcome the various challenges faced in the semantic analysis of published historical medical text dating back to the mid 19th century. Firstly, we used evidence from diverse historical medical documents from different periods to develop new resources that provide accounts of the multiple, evolving ways in which concepts, their variants and relationships amongst them may be expressed. These resources were employed to support the development of a modular processing pipeline of TM tools for the robust detection of semantic information in historical medical documents with varying characteristics. We applied the pipeline to two large-scale medical document archives covering wide temporal ranges as the basis for the development of a publicly accessible semantically-oriented search system. The novel resources are available for research purposes, while the processing pipeline and its modules may be used and configured within the Argo TM platform. PMID:26734936

  5. Text Mining the History of Medicine

    PubMed Central

    Thompson, Paul; Batista-Navarro, Riza Theresa; Kontonatsios, Georgios; Carter, Jacob; Toon, Elizabeth; McNaught, John; Timmermann, Carsten; Worboys, Michael; Ananiadou, Sophia

    2016-01-01

    Historical text archives constitute a rich and diverse source of information, which is becoming increasingly readily accessible, due to large-scale digitisation efforts. However, it can be difficult for researchers to explore and search such large volumes of data in an efficient manner. Text mining (TM) methods can help, through their ability to recognise various types of semantic information automatically, e.g., instances of concepts (places, medical conditions, drugs, etc.), synonyms/variant forms of concepts, and relationships holding between concepts (which drugs are used to treat which medical conditions, etc.). TM analysis allows search systems to incorporate functionality such as automatic suggestions of synonyms of user-entered query terms, exploration of different concepts mentioned within search results or isolation of documents in which concepts are related in specific ways. However, applying TM methods to historical text can be challenging, according to differences and evolutions in vocabulary, terminology, language structure and style, compared to more modern text. In this article, we present our efforts to overcome the various challenges faced in the semantic analysis of published historical medical text dating back to the mid 19th century. Firstly, we used evidence from diverse historical medical documents from different periods to develop new resources that provide accounts of the multiple, evolving ways in which concepts, their variants and relationships amongst them may be expressed. These resources were employed to support the development of a modular processing pipeline of TM tools for the robust detection of semantic information in historical medical documents with varying characteristics. We applied the pipeline to two large-scale medical document archives covering wide temporal ranges as the basis for the development of a publicly accessible semantically-oriented search system. The novel resources are available for research purposes, while the processing pipeline and its modules may be used and configured within the Argo TM platform. PMID:26734936

  6. Comparison of automatic control systems

    NASA Technical Reports Server (NTRS)

    Oppelt, W

    1941-01-01

    This report deals with a reciprocal comparison of an automatic pressure control, an automatic rpm control, an automatic temperature control, and an automatic directional control. It shows the difference between the "faultproof" regulator and the actual regulator, which is subject to faults, and develops this difference as far as possible in a parallel manner with regard to the control systems under consideration. Such an analysis affords, particularly in its extension to the faults of the actual regulator, a deep insight into the mechanism of the regulator process.

  7. Summarization of Injury and Fatality Factors Involving Children and Youth in Grain Storage and Handling Incidents.

    PubMed

    Issa, S F; Field, W E; Hamm, K E; Cheng, Y H; Roberts, M J; Riedel, S M

    2016-01-01

    This article summarizes data gathered on 246 documented cases of children and youth under the age of 21 involved in grain storage and handling incidents in agricultural workplaces from 1964 to 2013 in the U.S. that have been entered into the Purdue Agricultural Confined Space Incident Database. The database is the result of ongoing efforts to collect and file information on documented injuries, fatalities, and entrapments in all forms of agricultural confined spaces. While the frequency of injuries and fatalities involving children and youth in agriculture has decreased in recent years, incidents related to agricultural confined spaces, especially grain storage and handling facilities, have remained largely unchanged during the same period. Approximately 21% of all documented incidents involved children and youth (age 20 and younger), and more than 77% of all documented incidents were fatal, suggesting an under-reporting of non-fatal incidents. Findings indicate that the majority of youth incidents occurred at OSHA-exempt agricultural worksites. The states reporting the most incidents were Indiana, Iowa, Nebraska, Illinois, and Minnesota. Grain transport vehicles represented a significant portion of incidents involving children under the age of 16. The overwhelming majority of victims were male, and half (50%) of incidents occurred in June, October, and November. Recommendations include developing intervention strategies that target OSHA-exempt farms, feedlots, and seed processing facilities; preparing engineering design and best practice standards that reduce the exposure of children and youth to agricultural confined spaces; and developing gender-specific safety resources that incorporate gender-sensitive strategies to communicate safety information to the population of young males with the greatest risk of exposure to the hazards of agricultural confined spaces. PMID:27024990

  8. Summarizing motion contents of the video clip using moving edge overlaid frame (MEOF)

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Zhang, Yujin

    2001-12-01

    How to quickly and effectively exchange video information with the user is a major task for a video search engine's user interface. In this paper, we propose using a Moving Edge Overlaid Frame (MEOF) image to summarize both the local object motion and the global camera motion of a video clip in a single image. MEOF supplements the motion information that is generally dropped by key frame representations, and it enables faster perception for the user than viewing the actual video. The key technology of our MEOF generation algorithm is global motion estimation (GME). In order to extract a precise global motion model from general video, our GME module takes two stages: match-based initial GME and gradient-based GME refinement. The GME module also maintains a sprite image that is aligned with the new input frame in the background after the global motion compensation transform. The difference between the aligned sprite and the new frame is used to extract masks that help pick out the moving objects' edges. The sprite is updated with each input frame, and the moving edges are extracted at a constant interval. After all the frames are processed, the extracted moving edges are overlaid onto the sprite according to their global motion displacement relative to the sprite and their temporal distance from the last frame, thus creating our MEOF image. Experiments show that the MEOF representation of a video clip helps the user acquire the motion knowledge much faster, while remaining compact enough to serve the needs of online applications.
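
    As an illustration of the two-stage GME idea (a match-based initial estimate followed by refinement) and of moving-edge extraction, here is a minimal OpenCV sketch. It is our own approximation, not the authors' implementation: a robust feature-based similarity fit stands in for the full two-stage estimator, and a simple residual-difference mask isolates the moving edges.

      import cv2
      import numpy as np

      def estimate_global_motion(prev_gray, curr_gray):
          # Match-based global (camera) motion estimate between two frames.
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=8)
          nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          ok = status.flatten() == 1
          # RANSAC rejects points on moving objects, so the fit tracks the camera.
          warp, _inliers = cv2.estimateAffinePartial2D(pts[ok], nxt[ok])
          return warp  # 2x3 matrix mapping prev-frame coords to curr-frame coords

      def moving_edges(prev_gray, curr_gray, warp, diff_thresh=25):
          # Compensate camera motion, then keep edges where residual motion remains.
          h, w = curr_gray.shape
          aligned_prev = cv2.warpAffine(prev_gray, warp, (w, h))
          mask = (cv2.absdiff(aligned_prev, curr_gray) > diff_thresh).astype(np.uint8) * 255
          edges = cv2.Canny(curr_gray, 50, 150)
          return cv2.bitwise_and(edges, mask)  # edges of moving objects only

    Overlaying such edge maps onto the motion-compensated sprite, frame after frame, yields an MEOF-style summary image.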

  9. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions: surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, enabling the construction of custom systems capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published techniques. The system produced impressive OCR accuracies (90% to 93%) using customized systems generated by our development framework in two industrial OCR applications: water bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic alphabet and demonstrated extremely high recognition accuracy (99%) for Arabic license plate text recognition, with processing times of 10 seconds. The accuracy and run times of the system were compared to conventional and many state-of-the-art methods; the proposed system shows excellent results.
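
    A generic version of the enhance/binarize/segment pipeline can be sketched with OpenCV. The parameter values below are arbitrary illustrations, not the tuned settings of the authors' framework; the morphological closing step is one common way to bridge the gaps in dotted-line characters.

      import cv2

      def segment_degraded_text(gray):
          # Local contrast enhancement for unevenly lit camera captures.
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          enhanced = clahe.apply(gray)
          # Adaptive binarization copes with shadows and surface curvature.
          binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY_INV, 31, 15)
          # Closing bridges the gaps between the dots of dotted-line characters.
          kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
          closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel, iterations=2)
          # Connected components become candidate character boxes for recognition.
          contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
          return sorted(boxes, key=lambda b: b[0])  # left-to-right character order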

  10. Injury narrative text classification using factorization model

    PubMed Central

    2015-01-01

    Narrative text is a useful way of identifying injury circumstances from routine emergency department data collections. Automatically classifying narratives using machine learning techniques is promising and can reduce the tedious manual classification process. Existing work focuses on Naive Bayes, which does not always offer the best performance. This paper proposes matrix factorization approaches along with a learning enhancement process for this task. The results are compared with the performance of various other classification approaches. The impact of parameter settings on the classification results for a medical text dataset is discussed. With the right choice of dimension k, the Non-negative Matrix Factorization model achieves a 10-fold cross-validation (10 CV) accuracy of 0.93. PMID:26043671
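
    The factorization-plus-classifier idea can be sketched in a few lines of scikit-learn. The narratives and the choice k=2 below are toy illustrations, not the paper's data or settings: TF-IDF features are factorized by NMF into k latent factors, which then feed a linear classifier.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = ["patient fell from ladder while painting",
               "burned hand on hot stove while cooking",
               "slipped on wet floor and fell",
               "scalded by boiling water in kitchen"]
      labels = ["fall", "burn", "fall", "burn"]

      # TF-IDF -> k latent factors via NMF -> linear classifier; n_components
      # is the dimension k whose selection the paper highlights.
      model = make_pipeline(TfidfVectorizer(),
                            NMF(n_components=2, init="nndsvda", random_state=0),
                            LogisticRegression())
      model.fit(texts, labels)
      print(model.predict(["fell down the stairs"]))  # expected: ['fall']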

  11. AUTOMATIC HAND COUNTER

    DOEpatents

    Mann J.R.; Wainwright, A.E.

    1963-06-11

    An automatic, personnel-operated, alpha-particle hand monitor is described which functions as a qualitative instrument to indicate to the person using it whether his hands are "cold" or "hot." The monitor is activated by a push button and includes several capacitor-triggered thyratron tubes. Upon release of the push button, the monitor starts counting the radiation present on the hands of the person. If the count of the radiation exceeds a predetermined level within a predetermined time, a capacitor will trigger a first thyratron tube to light a "hot" lamp. If, however, the count is below this level during the time period, another capacitor will fire a second thyratron to light a "safe" lamp. (AEC)

  12. Automatic Bayesian polarity determination

    NASA Astrophysics Data System (ADS)

    Pugh, D. J.; White, R. S.; Christie, P. A. F.

    2016-04-01

    The polarity of the first motion of a seismic signal from an earthquake is an important constraint in earthquake source inversion. Microseismic events often have low signal-to-noise ratios, which may lead to difficulties estimating the correct first-motion polarities of the arrivals. This paper describes a probabilistic approach to polarity picking that can be both automated and combined with manual picking. This approach includes a quantitative estimate of the uncertainty of the polarity, improving calculation of the polarity probability density function for source inversion. It is sufficiently fast to be incorporated into an automatic processing workflow. When used in source inversion, the results are consistent with those from manual observations. In some cases, they produce a clearer constraint on the range of high-probability source mechanisms, and are better constrained than source mechanisms determined using a uniform probability of an incorrect polarity pick.
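
    One simple way to attach a probability to a polarity pick (our illustration, not the paper's full Bayesian formulation) is to model the measured first-arrival amplitude as the true amplitude plus Gaussian noise; the probability of a positive first motion is then a normal CDF in the signal-to-noise ratio.

      from math import erf, sqrt

      def polarity_probability(amplitude, noise_std):
          # P(true first motion is positive) for a measured amplitude corrupted
          # by zero-mean Gaussian noise of the given standard deviation.
          z = amplitude / (noise_std * sqrt(2.0))
          return 0.5 * (1.0 + erf(z))

      print(polarity_probability(5.0, 1.0))  # ~1.000: confident positive pick
      print(polarity_probability(0.3, 1.0))  # ~0.618: nearly uninformative

    Feeding such probabilities into the source inversion, rather than hard up/down picks, lets uncertain arrivals contribute without dominating the solution.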

  13. Automatic thermal switch

    NASA Technical Reports Server (NTRS)

    Wing, L. D.; Cunningham, J. W. (Inventor)

    1981-01-01

    An automatic thermal switch to control heat flow includes a first thermally conductive plate, a second thermally conductive plate and a thermal transfer plate pivotally mounted between the first and second plates. A phase change power unit, including a plunger connected to the transfer plate, is in thermal contact with the first thermally conductive plate. A biasing element, connected to the transfer plate, biases the transfer plate in a predetermined position with respect to the first and second plates. When the phase change power unit is actuated by an increase in heat transmitted through the first plate, the plunger extends and pivots the transfer plate to vary the thermal conduction between the first and second plates through the transfer plate. The biasing element, transfer plate and piston can be arranged to provide either a normally closed or normally open thermally conductive path between the first and second plates.

  14. Networked Automatic Optical Telescopes

    NASA Astrophysics Data System (ADS)

    Mattox, J. R.

    2000-05-01

    Many groups around the world are developing automated or robotic optical observatories. The coordinated operation of automated optical telescopes at diverse sites could provide observing prospects which are not otherwise available, e.g., continuous optical photometry without diurnal interruption. Computer control and scheduling also offer the prospect of effective response to transient events such as γ-ray bursts. These telescopes could also serve science education by providing high-quality CCD data for educators and students. The Automatic Telescope Network (ATN) project has been undertaken to promote networking of automated telescopes. A web site is maintained at http://gamma.bu.edu/atn/. The development of such networks will be facilitated by the existence of standards. A set of standard commands for instrument and telescope control systems will allow for the creation of software for an "observatory control system" which can be used at any facility which complies with the TCS and ICS standards. Also, there is a strong need for standards for the specification of observations to be done, and for reports on the results and status of observations. A proposed standard for this is the Remote Telescope Markup Language (RTML), which is expected to be described in another poster in this session. It may thus soon be feasible for amateur astronomers to buy all the necessary equipment and software to field an automatic telescope. The owner/operator could make otherwise unused telescope time available to the network in exchange for the utilization of other telescopes in the network, including occasional utilization of meter-class telescopes with research-grade CCD detectors at good sites.

  15. Automatic alkaloid removal system.

    PubMed

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, purposely to remove alkaloid toxicity from Dioscorea hispida (DH) tubers. DH is a poisonous plant; scientific study has shown that its tubers contain the toxic alkaloid dioscorine. The tubers can only be consumed after the poison is removed. In this experiment, the tubers must be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump ON, creating a turbulent wave of water in the machine tank. The water stops automatically by triggering the outlet solenoid valve. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. At this point, the controller automatically triggers the inlet solenoid valve, and new water flows into the machine tank until it reaches the desired level, which is determined by an ultrasonic sensor. This process is repeated for 7 h, after which a positive result is achieved, shown to be significant according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate or time. These parameters are near to or the same as those of the control water, and it is assumed that the toxin is fully removed when the pH of the DH powder wash water is near that of the control water. For the control water, the pH is about 5.3, while the water from this experimental process is 6.0; before running the machine, the pH of the contaminated water is about 3.8, which is too acidic. This automated machine saves time in removing toxicity from DH compared with the traditional method, while requiring less observation by the user. PMID:24783795

  16. Humans in Space: Summarizing the Medico-Biological Results of the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Risin, Diana; Stepaniak, P. C.; Grounds, D. J.

    2011-01-01

    As we celebrate the 50th anniversary of Gagarin's flight that opened the era of humans in space, we also commemorate the 30th anniversary of the Space Shuttle Program (SSP), which was triumphantly completed by the flight of STS-135 on July 21, 2011. These were great milestones in the history of human space exploration. Many important questions regarding the ability of humans to adapt and function in space were answered over the past 50 years, and many lessons have been learned. A significant contribution to answering these questions was made by the SSP. To ensure the availability of the Shuttle Program experience to the international space community, NASA has decided to summarize the medico-biological results of the SSP in a fundamental edition scheduled for completion by the end of 2011 or the beginning of 2012. The goal of this edition is to define the normal responses of the major physiological systems to short-duration space flights and to provide a comprehensive source of information for planning, for ensuring successful operational activities, and for the management of potential medical problems that might arise during future long-term space missions. The book includes the following sections: 1. History of Shuttle Biomedical Research and Operations; 2. Medical Operations Overview: Systems, Monitoring, and Care; 3. Biomedical Research Overview; 4. System-specific Adaptations/Responses, Issues, and Countermeasures; 5. Multisystem Issues and Countermeasures. In addition, selected operational documents will be presented in the appendices. The chapters are written by well-recognized experts in the appropriate fields, peer reviewed, and edited by physicians and scientists with extensive expertise in space medical operations and space-related biomedical research. As space exploration continues, the major question of whether humans are capable of adapting to long-term presence and adequate functioning in space habitats remains to be answered. We expect that this comprehensive review of the medico-biological results of the SSP, along with the data collected during missions on the space stations (Mir and ISS), provides a good starting point in seeking the answer to this question.

  17. Automatic Evidence Retrieval for Systematic Reviews

    PubMed Central

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G

    2014-01-01

    Background Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing’s effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective Our goal was to evaluate the capacity of an automatic citation snowballing method to identify and retrieve the full text and/or abstracts of cited articles. Methods Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results The automatic method correctly identified 633 citations (as a proportion of included citations: recall=66.7%, F1 score=79.3%; as a proportion of citations in MAS: recall=85.5%, F1 score=91.2%) with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. Conclusions The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews. PMID:25274020
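
    The reported scores follow directly from the counts in the abstract; the false-positive count of roughly 15 is inferred from the stated 97.7% precision. A quick check:

      def prf(tp, fp, fn):
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          return precision, recall, 2 * precision * recall / (precision + recall)

      # Identification against all 949 included citations:
      print(prf(tp=633, fp=15, fn=949 - 633))  # ~ (0.977, 0.667, 0.793)
      # Identification against the 740 citations actually present in MAS:
      print(prf(tp=633, fp=15, fn=740 - 633))  # ~ (0.977, 0.855, 0.912)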

  18. Automatic Coal-Mining System

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1985-01-01

    Coal cutting and removal done with minimal hazard to people. Automatic coal mine cutting, transport and roof-support movement all done by automatic machinery. Exposure of people to hazardous conditions reduced to inspection tours, maintenance, repair, and possibly entry mining.

  19. Nonverbatim Captioning in Dutch Television Programs: A Text Linguistic Approach

    ERIC Educational Resources Information Center

    Schilperoord, Joost; de Groot, Vanja; van Son, Nic

    2005-01-01

    In the Netherlands, as in most other European countries, closed captions for the deaf summarize texts rather than render them verbatim. Caption editors argue that in this way television viewers have enough time to both read the text and watch the program. They also claim that the meaning of the original message is properly conveyed. However, many…

  20. Use of SI Metric Units Misrepresented in College Physics Texts.

    ERIC Educational Resources Information Center

    Hooper, William

    1980-01-01

    Summarizes results of a survey that examined 13 textbooks claiming to use SI units. Tables present data concerning the SI and non-SI units actually used in each text in discussion of fluid pressure and thermal energy, and data concerning which texts do and do not use SI as claimed. (CS)

  2. Automatic Command Sequence Generation

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladded, Roy; Khanampompan, Teerapat

    2007-01-01

    Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major mission phases, including the cruise, aerobraking, mapping/science, and relay phases. Autogen is a Perl script which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), and Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen, including the MRO adaptation for the cruise mission phase; it was also used for development of the aerobraking and mapping mission phases for MRO.

  3. Important Text Characteristics for Early-Grades Text Complexity

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Koons, Heather; Hiebert, Elfrieda H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2015-01-01

    The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified…

  5. Automatic transmission system

    SciTech Connect

    Kurihara, K.; Arai, K.

    1988-12-06

    This patent describes an automatic transmission system for a vehicle having a gear-type transmission, a clutch connected to the gear-type transmission and an actuating means responsive to an electric signal for operating the gear-type transmission and the clutch so as to shift the gear-type transmission into a target gear position. The system consists of: means for producing a first signal relating to the vehicle speed; means for producing a second signal relating to the amount of operation of an accelerator pedal; a first means responsive to the second signal for producing a rate signal indicating the rate of the operation of the accelerator pedal per unit time for each depression of the accelerator pedal; a second means responsive to the first signal for producing a maximum acceleration signal indicating the maximum acceleration of the vehicle due to the depression of the accelerator pedal; a third means responsive to the signals from the first and second means for calculating the vehicle load and producing a third signal indicating the calculated vehicle load; and a control means responsive to the first through third signals for producing a control signal for operating the actuating means so as to shift the gear-type transmission into the target gear position determined for the operating condition of the vehicle at that time.

  6. Automatic transmission system

    SciTech Connect

    Ha, J.S.

    1989-04-25

    An automatic transmission system is described for use in vehicles, which comprises: a clutch wheel containing a plurality of concentric rings of decreasing diameter, the clutch wheel being attached to an engine of the vehicle; a plurality of clutch gears corresponding in size to the concentric rings, the clutch gears being adapted to selectively and frictionally engage with the concentric rings of the clutch wheel; an accelerator pedal and a gear selector, the accelerator pedal being connected to one end of a substantially U-shaped frame member, the other end of the substantially U-shaped frame member selectively engaging with one end of one of the wires received in a pair of apertures of the gear selector; a plurality of drive gear controllers and a reverse gear controller; means operatively connected with the gear selector and the plurality of drive gear controllers and reverse gear controller for selectively engaging one of the drive and reverse gear controllers depending upon the position of the gear selector; and means for individually connecting the drive and reverse gear controllers with the corresponding clutch gears whereby upon the selection of the gear selector, friction engagement is achieved between the clutch gear and the clutch wheel for rotating the wheels in the forward or reverse direction.

  7. Automatic transmission structure

    SciTech Connect

    Iwase, Y.; Morisawa, K.

    1987-03-24

    An automatic transmission is described comprising: an output shaft of the transmission including a stepped portion; a parking gear spline-connected with the output shaft on a first side of the stepped portion; a plurality of governor valves mounted on a rear side of the parking gear and radially disposed around the output shaft on the first side of the stepped portion; a speed meter drive gear spline-connected with the output shaft on a second side of the stepped portion and on a rear side of the governor valves; and an annular spacer fitted on the output shaft on the second side of the stepped portion between the governor valves and the speed meter drive gear to abut on each of the governor valves and the speed meter drive gear. The annular member is constructed separately from the speed meter drive gear and has an outer diameter larger than an outer diameter of the speed meter drive gear, thereby resulting in a contact area between the annular spacer and the speed meter drive gear which is smaller than a contact area between the annular spacer and the rear side of the governor valves; the drive gear being axially secured relative to the output shaft by a bearing, thereby enabling a fixed axial positioning of the annular spacer on the output shaft.

  8. Electronically controlled automatic transmission

    SciTech Connect

    Ohkubo, M.; Shiba, H.; Nakamura, K.

    1989-03-28

    This patent describes an electronically controlled automatic transmission having a manual valve working in connection with a manual shift lever, shift valves operated by solenoid valves which are driven by an electronic control circuit previously memorizing shift patterns, and a hydraulic circuit controlled by these manual and shift valves for driving brakes and a clutch in order to change speed. Shift patterns of 2-range and L-range, in addition to a shift pattern of D-range, are memorized previously in the electronic control circuit; an operation switch is provided which changes the shift pattern of the electronic control circuit to any shift pattern among those of D-range, 2-range and L-range when the manual shift lever is in the D-range position; a releasable lock mechanism is provided which prevents the manual shift lever from entering the 2-range and L-range positions; and the hydraulic circuit is set to a third speed mode when the manual shift lever is in the D-range position. The circuit is set to a second speed mode when it is in the 2-range position, and to a first speed mode when it is in the L-range position, respectively, in the case where the shift valves are not working.

  9. Automatic EEG spike detection.

    PubMed

    Harner, Richard

    2009-10-01

    Since the 1970s, advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and are still the most widely used for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-to-peak amplitude and duration. Wavelet parameters are very appropriate to AESD but need to be combined with other parameters to achieve desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process. PMID:19780347
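
    A toy version of an expert-parameter detector (illustrative thresholds only, not clinically validated values) can be written with SciPy's peak finder: flag deflections whose prominence and width fall in an epileptiform-like range.

      import numpy as np
      from scipy.signal import find_peaks

      def detect_spikes(eeg_uv, fs, min_amp_uv=60.0, dur_ms=(15.0, 80.0)):
          # Convert the duration window from milliseconds to samples.
          widths = (dur_ms[0] * fs / 1000.0, dur_ms[1] * fs / 1000.0)
          peaks, props = find_peaks(eeg_uv, prominence=min_amp_uv, width=widths)
          return peaks / fs, props["prominences"]  # spike times (s), sizes (uV)

      # Synthetic demo: background noise plus one injected ~24 ms, 100 uV transient.
      fs = 256
      t = np.arange(0, 2.0, 1.0 / fs)
      eeg = 10.0 * np.random.default_rng(0).standard_normal(t.size)
      eeg += 100.0 * np.exp(-0.5 * ((t - 1.0) / 0.010) ** 2)
      times, sizes = detect_spikes(eeg, fs)
      print(times)  # should report a single spike near t = 1.0 s

    Real AESD combines several such parameters (sharpness, after-coming slow wave, background activity), and, as the review notes, it remains unclear which are essential.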

  10. Automatic Welding System

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Robotic welding has been of interest to industrial firms because it offers higher productivity at lower cost than manual welding. There are some systems with automated arc guidance available, but they have disadvantages, such as limitations on the types of materials or types of seams that can be welded; susceptibility to stray electrical signals; restricted field of view; or a tendency to contaminate the weld seam. Wanting to overcome these disadvantages, Marshall Space Flight Center, aided by Hayes International Corporation, developed a system that uses closed-circuit TV signals for automatic guidance of the welding torch. NASA granted a license to Combined Technologies, Inc. for commercial application of the technology. They developed a refined and improved arc guidance system. CTI, in turn, licensed the Merrick Corporation, also of Nashville, for marketing and manufacturing of the new system, called the CT2 Optical Tracker. CT2 is a non-contacting system that offers adaptability to a broader range of welding jobs and provides greater reliability in high-speed operation. It is extremely accurate and can travel at speeds of up to 150 inches per minute.

  11. Electronically controlled automatic transmission

    SciTech Connect

    Smith, R.B.; Daubenmier, J.A.; Zielke, J.I.

    1992-01-28

    This patent describes an electronically controlled automatic transmission control system for an automotive vehicle. It comprises multiple ratio gearing and multiple pressure operated clutches and brakes adapted to establish and disestablish multiple torque flow paths through the gearing from an engine; a source of regulated line pressure, a valve circuit connecting the line pressure source to the clutches and brakes; a hydrokinetic unit having a bladed impeller adapted to be driven by the engine and a bladed turbine connected to torque input elements of the gearing; a regulator valve means in the circuit for regulating pressure of fluid in the hydrokinetic unit; throttle pressure solenoid valve means communicating with the regulator valve means for developing an engine torque signal; a throttle pressure signal passage connecting the throttle pressure solenoid valve means with the source of regulated line pressure whereby the latter responds to the torque signal to increase line pressure with increasing torque; and fail-safe valve means communicating with the regulator valve means and the throttle pressure signal passage and responding to a decrease in the torque signal below a calibrated value to distribute the regulated pressure of the regulator valve means to the line pressure source thereby preserving line pressure above a calibrated minimum value.

  12. Automatic drilling control system

    SciTech Connect

    Ball, J.W.

    1987-05-05

    An automatic drilling control system is described for a drilling apparatus having a rig with a crown block and a traveling block. A draw works includes an engine, a drum powered by the engine, clutches, and controls, with a drilling line wound on the drum and rolled up or fed out during drilling by the engine. The drilling line extends through the crown block and the traveling block and connects to a fixed point. The line portion from the crown block to the fixed point is the dead line. The crown block and traveling block form a pulley system for supporting a drill pipe to raise or lower the same during drilling. A hydraulic pressure sensor connects to the dead line to measure the tension. A weight indicator gauge adjacent to the controls connects to the pressure sensor by a hydraulic line. A brake, having a brake handle, controls the rate of feed-out of the drilling line to determine the tension on the dead line.

  13. Identifying and classifying biomedical perturbations in text

    PubMed Central

    Rodriguez-Esteban, Raul; Roberts, Phoebe M.; Crawford, Matthew E.

    2009-01-01

    Molecular perturbations provide a powerful toolset for biomedical researchers to scrutinize the contributions of individual molecules in biological systems. Perturbations qualify the context of experimental results and, despite their diversity, share properties in different dimensions in ways that can be formalized. We propose a formal framework to describe and classify perturbations that allows accumulation of knowledge in order to inform the process of biomedical scientific experimentation and target analysis. We apply this framework to develop a novel algorithm for automatic detection and characterization of perturbations in text and show its relevance in the study of gene–phenotype associations and protein–protein interactions in diabetes and cancer. Analyzing perturbations introduces a novel view of the multivariate landscape of biological systems. PMID:19074486

  14. BaffleText: a Human Interactive Proof

    NASA Astrophysics Data System (ADS)

    Chew, Monica; Baird, Henry S.

    2003-01-01

    Internet services designed for human use are being abused by programs. We present a defense against such attacks in the form of a CAPTCHA (Completely Automatic Public Turing test to tell Computers and Humans Apart) that exploits the difference in ability between humans and machines in reading images of text. CAPTCHAs are a special case of 'human interactive proofs,' a broad class of security protocols that allow people to identify themselves over networks as members of given groups. We point out vulnerabilities of reading-based CAPTCHAs to dictionary and computer-vision attacks. We also draw on the literature on the psychophysics of human reading, which suggests fresh defenses available to CAPTCHAs. Motivated by these considerations, we propose BaffleText, a CAPTCHA which uses non-English pronounceable words to defend against dictionary attacks, and Gestalt-motivated image-masking degradations to defend against image restoration attacks. Experiments on human subjects confirm the human legibility and user acceptance of BaffleText images. We have found an image-complexity measure that correlates well with user acceptance and assists in engineering the generation of challenges to fit the ability gap. Recent computer-vision attacks, run independently by Mori and Malik, suggest that BaffleText is stronger than two existing CAPTCHAs.
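
    The two defenses are easy to mock up (a rough sketch with Pillow; the syllable generator and the blob masking below are our own crude stand-ins for the paper's word model and Gestalt-motivated degradations): pronounceable non-words defeat dictionary attacks, and occlusions of the strokes impede image restoration.

      import random
      from PIL import Image, ImageDraw, ImageFont

      CONSONANTS, VOWELS = "bcdfghjklmnpqrstvz", "aeiou"

      def pronounceable_nonword(n_syllables=3, rng=random):
          # Consonant-vowel syllables: readable by humans, absent from dictionaries.
          return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                         for _ in range(n_syllables))

      def challenge_image(word, size=(240, 80), n_masks=40, rng=random):
          img = Image.new("L", size, color=255)
          draw = ImageDraw.Draw(img)
          draw.text((20, 30), word, fill=0, font=ImageFont.load_default())
          for _ in range(n_masks):  # punch small white blobs through the strokes
              x, y = rng.randrange(size[0]), rng.randrange(size[1])
              r = rng.randrange(2, 5)
              draw.ellipse((x - r, y - r, x + r, y + r), fill=255)
          return img

      challenge_image(pronounceable_nonword()).save("baffletext_demo.png")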

  15. Automatic transmission apparatus

    SciTech Connect

    Hiketa, M.

    1987-10-06

    An automatic transmission apparatus is described comprising: an input shaft, an output shaft disposed behind and coaxially with the input shaft, a counter shaft disposed substantially parallel to both of the input and output shafts, a first gear train including a first gear provided on the input shaft and a second gear provided on the counter shaft to be meshed with the first gear so as to form a first power transmitting path, first friction clutch means operative selectively to make and break the first power transmitting path, a second gear train including a third gear provided through one-way clutch means on a rear end portion of the input shaft and a fourth gear provided on the counter shaft to be meshed with the third gear so as to form a second power transmitting path, second friction clutch means provided at a front end portion of the output shaft, a third gear train including a fifth gear provided on a rear end portion of the counter shaft and a sixth gear provided on the output shaft to be meshed with the fifth gear so as to form a fourth power transmitting path, third friction clutch means operative selectively to make and break the fourth power transmitting path, fourth friction clutch means operative selectively to make and break the second power transmitting path, a fourth gear train including a seventh gear provided on the counter shaft and an eighth gear provided on the output shaft and fifth friction clutch means operative selectively to make and break the fifth power transmitting path.

  16. Guiding Students through Expository Text with Text Feature Walks

    ERIC Educational Resources Information Center

    Kelley, Michelle J.; Clausen-Grace, Nicki

    2010-01-01

    The Text Feature Walk is a structure created and employed by the authors that guides students in the reading of text features in order to access prior knowledge, make connections, and set a purpose for reading expository text. Results from a pilot study are described in order to illustrate the benefits of using the Text Feature Walk over…

  17. Clothes Dryer Automatic Termination Evaluation

    SciTech Connect

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  18. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools that assist the modeler in defining or constructing a model of the system and then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  19. Text analysis methods, text analysis apparatuses, and articles of manufacture

    DOEpatents

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
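
    One simple reading of "identifying a presence of a new topic" (our sketch, not the patented method) is to flag incoming text that is dissimilar to everything seen so far:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def looks_like_new_topic(known_docs, incoming_doc, threshold=0.15):
          # The threshold is an arbitrary illustrative choice.
          vec = TfidfVectorizer().fit(known_docs + [incoming_doc])
          sims = cosine_similarity(vec.transform([incoming_doc]),
                                   vec.transform(known_docs))
          return float(sims.max()) < threshold

      corpus = ["stock markets fell on rate fears",
                "central bank raises interest rates",
                "quarterly earnings beat forecasts"]
      print(looks_like_new_topic(corpus, "volcano erupts near coastal town"))  # True
      print(looks_like_new_topic(corpus, "markets rally as rate fears ease"))  # False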

  20. Automatic safety rod for reactors

    DOEpatents

    Germer, John H.

    1988-01-01

    An automatic safety rod for a nuclear reactor, containing neutron-absorbing material and designed to be inserted into a reactor core after a loss of core flow. Actuation occurs either upon a sudden decrease in the core pressure drop or when the pressure drop falls below a predetermined minimum value. The automatic control rod includes a pressure-regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  1. Automatic Collision Avoidance Technology (ACAT)

    NASA Technical Reports Server (NTRS)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of Automatic Collision Avoidance Technology (ACAT). The first viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: the Automatic Ground Collision Avoidance System (AGCAS) and the Automatic Air Collision Avoidance System (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when the data link is used. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test. A review of the operation of the AGCAS and a comparison with a pilot's performance are given. The same review is given for the AACAS.

  2. Mining the Text: 34 Text Features that Can Ease or Obstruct Text Comprehension and Use

    ERIC Educational Resources Information Center

    White, Sheida

    2012-01-01

    This article presents 34 characteristics of texts and tasks ("text features") that can make continuous (prose), noncontinuous (document), and quantitative texts easier or more difficult for adolescents and adults to comprehend and use. The text features were identified by examining the assessment tasks and associated texts in the national…

  4. Torpedo: topic periodicity discovery from text data

    NASA Astrophysics Data System (ADS)

    Wang, Jingjing; Deng, Hongbo; Han, Jiawei

    2015-05-01

    Although history may not repeat itself, many human activities are inherently periodic, recurring daily, weekly, monthly, yearly or following some other periods. Such recurring activities may not repeat the same set of keywords, but they do share similar topics. Thus it is interesting to mine topic periodicity from text data instead of just looking at the temporal behavior of a single keyword/phrase. Some previous preliminary studies in this direction prespecify a periodic temporal template for each topic. In this paper, we remove this restriction and propose a simple yet effective framework Torpedo to mine periodic/recurrent patterns from text, such as news articles, search query logs, research papers, and web blogs. We first transform text data into topic-specific time series by a time dependent topic modeling module, where each of the time series characterizes the temporal behavior of a topic. Then we use time series techniques to detect periodicity. Hence we both obtain a clear view of how topics distribute over time and enable the automatic discovery of periods that are inherent in each topic. Theoretical and experimental analyses demonstrate the advantage of Torpedo over existing work.
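
    The periodicity-detection step can be illustrated on a ready-made topic time series (our minimal stand-in; Torpedo builds such series with time-dependent topic modeling first). Autocorrelation finds the lag at which the series best matches a shifted copy of itself:

      import numpy as np

      def dominant_period(topic_intensity, max_lag=None):
          x = np.asarray(topic_intensity, dtype=float)
          x = x - x.mean()
          ac = np.correlate(x, x, mode="full")[x.size - 1:]  # autocorrelation
          max_lag = max_lag or x.size // 2
          return int(np.argmax(ac[1:max_lag]) + 1)  # lag of strongest recurrence

      # A topic that flares up every 7 days, plus background noise:
      days = np.arange(70)
      series = (days % 7 == 0) * 5.0 + np.random.default_rng(1).random(70)
      print(dominant_period(series))  # 7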

  5. Automatic Neural Processing of Disorder-Related Stimuli in Social Anxiety Disorder: Faces and More

    PubMed Central

    Schulz, Claudia; Mothes-Lasch, Martin; Straube, Thomas

    2013-01-01

    It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD at the behavioral and neural levels are not yet entirely clear. The present review summarizes neuroscientific findings on automatic processing of facial threat, but also of other disorder-related stimuli such as emotional prosody or negative words, in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only a few studies have set out to examine automatic neural processes directly, and systematic attempts are as yet lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, the understanding of neural automatic processes could also provide hints for therapeutic approaches. PMID:23745116

  6. The Challenge of Challenging Text

    ERIC Educational Resources Information Center

    Shanahan, Timothy; Fisher, Douglas; Frey, Nancy

    2012-01-01

    The Common Core State Standards emphasize the value of teaching students to engage with complex text. But what exactly makes a text complex, and how can teachers help students develop their ability to learn from such texts? The authors of this article discuss five factors that determine text complexity: vocabulary, sentence structure, coherence,…

  7. Technical Vocabulary in Specialised Texts.

    ERIC Educational Resources Information Center

    Chung, Teresa Mihwa; Nation, Paul

    2003-01-01

    Describes two studies of technical vocabulary, one using an anatomy text and the other an applied linguistics text. Technical vocabulary was found by rating words in the texts on a four-step scale. Found that technical vocabulary made up a very substantial proportion of both the different words and the running words in texts. (Author/VWL)

  8. Texts in Homes and Communities.

    ERIC Educational Resources Information Center

    Pahl, Kate

    This paper considers how children's text making is shaped by the environment in which the texts are made. By considering texts made in classrooms and texts made in homes, the paper explores how classrooms and homes interact with children's (6-7 year old boys) reflective processes as they create artifacts--drawings, models, and writings. The paper…

  9. Text Complexity and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What is meant by text complexity is a measurement of how challenging a particular text is to read. There are a myriad of different ways of explaining what makes text challenging to read, from the sophistication of the vocabulary employed to the length of its sentences to even measurements of how the text as a whole coheres. Research shows that no…

  11. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723
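
    The low-level candidate stage can be approximated in a few lines of OpenCV (our sketch: generic CLAHE contrast enhancement stands in for the paper's own contrast-enhancement step, ahead of standard MSER detection):

      import cv2

      def text_region_candidates(bgr):
          gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
          # Boost local text/background contrast before extremal-region detection.
          enhanced = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(gray)
          mser = cv2.MSER_create()
          _regions, boxes = mser.detectRegions(enhanced)
          # Keep roughly character-shaped boxes (x, y, w, h); a Text-CNN-style
          # classifier would then prune these candidates to text vs. non-text.
          return [b for b in boxes if 0.1 < b[2] / float(b[3]) < 10.0]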

  12. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2013-05-28

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes processing circuitry configured to analyze initial text to generate a measurement basis usable in analysis of subsequent text, wherein the measurement basis comprises a plurality of measurement features from the initial text, a plurality of dimension anchors from the initial text and a plurality of associations of the measurement features with the dimension anchors, and wherein the processing circuitry is configured to access a viewpoint indicative of a perspective of interest of a user with respect to the analysis of the subsequent text, and wherein the processing circuitry is configured to use the viewpoint to generate the measurement basis.

  13. Text-Attentional Convolutional Neural Network for Scene Text Detection

    NASA Astrophysics Data System (ADS)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  14. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work on a project to design an automatic system of computer program documentation aids was carried out to determine which existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  15. Processing medical reports to automatically populate ontologies.

    PubMed

    Borrego, Luís; Quaresma, Paulo

    2013-01-01

    Medical reports are, quite often, written and stored in computer systems in a non-structured free-text form. As a consequence, the information contained in these reports is not easily available and cannot be taken into account by medical decision support systems. We propose a methodology to automatically process and analyze medical reports, identifying concepts and their instances, and populating a new ontology. This methodology is based on natural language processing techniques using linguistic and statistical information. The proposed system was applied successfully to a set of medical reports from the Veterinary Hospital of the University of Évora. PMID:23388282

  16. Automatic addressing of telemetry channels

    SciTech Connect

    Lucero, L A

    1982-08-01

    To simplify telemetry software development, a design that eliminates the use of software instructions to address telemetry channels is being implemented in our telemetry systems. By using the direct memory access function of the RCA 1802 microprocessor, once initialized, addressing of telemetry channels is automatic, requiring no software. In this report the automatic addressing of telemetry channels (AATC) scheme is compared with an earlier technique that uses software. In comparison, the automatic addressing scheme effectively increases the software capability of the microprocessor, simplifies telemetry dataset encoding, eases dataset changes, and may decrease the electronic hardware count. The software addressing technique uses at least three instructions to address each channel. The automatic addressing technique requires no software instructions. Instead, addressing is performed using a direct memory access cycle stealing technique. Application of an early version of this addressing scheme to telemetry Type 1, Dataset 3, opened up the capability to execute 400 more microprocessor instructions than could be executed using the software addressing scheme. The present version of the automatic addressing scheme uses a section of PROM reserved for telemetry channel addresses. Encoding for a dataset is accomplished by programming the PROM with channel addresses in the order they are to be monitored. The telemetry Type 2 software was written using the software addressing scheme, then rewritten using the automatic addressing scheme. While 1000 bytes of memory were required by the software addressing scheme, the automatic addressing scheme required only 396 bytes. A number of prototypes using AATC have been built and tested in a full telemetry lab unit. All have worked successfully.
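
    As a rough illustration of the table-driven idea (in Python rather than RCA 1802 assembly), dataset encoding reduces to an ordered table of channel addresses, and the sampling loop carries no per-channel addressing instructions. All addresses and function names below are hypothetical stand-ins for the PROM table and the DMA hardware read.

        # Ordered table of channel addresses stands in for the PROM;
        # encoding a dataset = programming this table in monitoring order.
        PROM_ADDRESS_TABLE = [0x10, 0x2A, 0x07, 0x33]   # hypothetical addresses

        def read_channel(address):
            # placeholder for the hardware read done by the DMA cycle-steal
            return 0

        def scan_dataset():
            # the loop needs no per-channel addressing instructions:
            # addresses come from the table, not from software arithmetic
            return [read_channel(a) for a in PROM_ADDRESS_TABLE]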

  17. Text2Video: text-driven facial animation using MPEG-4

    NASA Astrophysics Data System (ADS)

    Rurainsky, J.; Eisert, P.

    2005-07-01

    We present a complete system for the automatic creation of talking head video sequences from text messages. Our system converts the text into MPEG-4 Facial Animation Parameters and synthetic voice. A user-selected 3D character will perform lip movements synchronized to the speech data. The 3D models created from a single image vary from realistic people to cartoon characters. A voice selection for different languages and gender as well as a pitch shift component enables a personalization of the animation. The animation can be shown on different displays and devices ranging from 3GPP players on mobile phones to real-time 3D render engines. Therefore, our system can be used in mobile communication for the conversion of regular SMS messages to MMS animations.

  18. An evaluation of an automatic markup system

    SciTech Connect

    Taghva, K.; Condit, A.; Borsack, J.

    1995-04-01

    One predominant application of OCR is the recognition of full text documents for information retrieval. Modern retrieval systems exploit both the textual content of the document as well as its structure. The relationship between textual content and character accuracy has been the focus of recent studies. It has been shown that, due to the redundancies in text, average precision and recall are not heavily affected by OCR character errors. What is not fully known is to what extent OCR devices can provide reliable information that can be used to capture the structure of the document. In this paper, the authors present a preliminary report on the design and evaluation of a system to automatically mark up technical documents, based on information provided by an OCR device. The device the authors use differs from traditional OCR devices in that it not only performs optical character recognition, but also provides detailed information about page layout, word geometry, and font usage. Their automatic markup program, which they call Autotag, uses this information, combined with dictionary lookup and content analysis, to identify structural components of the text. These include the document title, author information, abstract, sections, section titles, paragraphs, sentences, and de-hyphenated words. A visual examination of the hardcopy will be compared to the output of their markup system to determine its correctness.

  19. Text editor on a chip

    SciTech Connect

    Jung Wan Cho; Heung Kyu Lee

    1983-01-01

    The authors propose a processor which provides useful facilities for implementing text editing commands. The processor now being developed is a component of a general front-end editing system which parses and processes program text. Attached to a conventional microcomputer system bus, the processor executes screen editing functions. Conventional text editing is a typical application of microprocessors, but in this paper emphasis is given to firmware and hardware processing of text so that the processor can be fabricated on a single VLSI chip. To increase overall regularity and decrease design cost, the basic instructions are text-editing oriented with short basic cycles. 6 references.

  20. Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach

    ERIC Educational Resources Information Center

    Yang, Seungwon

    2013-01-01

    Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries. Then, we could browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic…

  1. Automatic Word Sense Disambiguation of Acronyms and Abbreviations in Clinical Texts

    ERIC Educational Resources Information Center

    Moon, Sungrim

    2012-01-01

    The use of acronyms and abbreviations is increasing profoundly in the clinical domain in large part due to the greater adoption of electronic health record (EHR) systems and increased electronic documentation within healthcare. A single acronym or abbreviation may have multiple different meanings or senses. Comprehending the proper meaning of an…

  2. Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability

    ERIC Educational Resources Information Center

    Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi

    2012-01-01

    The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…

  3. Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach

    ERIC Educational Resources Information Center

    Yang, Seungwon

    2013-01-01

    Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries. Then, we could browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic…

  4. Automatic Word Sense Disambiguation of Acronyms and Abbreviations in Clinical Texts

    ERIC Educational Resources Information Center

    Moon, Sungrim

    2012-01-01

    The use of acronyms and abbreviations is increasing profoundly in the clinical domain in large part due to the greater adoption of electronic health record (EHR) systems and increased electronic documentation within healthcare. A single acronym or abbreviation may have multiple different meanings or senses. Comprehending the proper meaning of an…

  5. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  6. Exploiting vibration-based spectral signatures for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Crider, Lauren; Kangas, Scott

    2014-06-01

    Feature extraction algorithms for vehicle classification represent a large branch of Automatic Target Recognition (ATR) efforts. Traditionally, vehicle ATR techniques have assumed that time-series vibration data collected from multiple accelerometers are a function of direct-path, engine-driven signal energy. If the data, however, are highly dependent on measurement location, these pre-established feature extraction algorithms are ineffective. In this paper, we examine the consequences of analyzing vibration data that are potentially contingent upon transfer-path effects by exploring sensitivity to sensor location. We summarize our analysis of the spectral signatures from each accelerometer and investigate similarities within the data.
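
    A minimal sketch of the kind of spectral-signature comparison described here might compute each accelerometer's power spectrum and a cosine similarity between them. The exact features and similarity measure used by the authors are not specified in the abstract, so this is only an assumed reading (equal-length records assumed).

        import numpy as np

        def power_spectrum(signal, fs):
            x = np.asarray(signal, float)
            x = x - x.mean()                      # remove DC offset
            spec = np.abs(np.fft.rfft(x)) ** 2    # magnitude-squared spectrum
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return freqs, spec

        def spectral_similarity(sig_a, sig_b, fs):
            # cosine similarity of two equal-length accelerometer records;
            # values near 1 suggest the spectra share the same signature
            _, sa = power_spectrum(sig_a, fs)
            _, sb = power_spectrum(sig_b, fs)
            return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-12))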

  7. Text Editing in Chemistry Instruction.

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Low, Renae; Sweller, John

    2002-01-01

    Describes experiments with Australian high school students that investigated differences in performance on chemistry word problems between two learning strategies: text editing, and conventional problem solving. Concluded that text editing had no advantage over problem solving in stoichiometry problems, and that the suitability of a text editing…

  8. Too Dumb for Complex Texts?

    ERIC Educational Resources Information Center

    Bauerlein, Mark

    2011-01-01

    High school students' lack of experience and practice with reading complex texts is a primary cause of their difficulties with college-level reading. Filling the syllabus with digital texts does little to address this deficiency. Complex texts demand three dispositions from readers: a willingness to probe works characterized by dense meanings, the…

  9. Slippery Texts and Evolving Literacies

    ERIC Educational Resources Information Center

    Mackey, Margaret

    2007-01-01

    The idea of "slippery texts" provides a useful descriptor for materials that mutate and evolve across different media. Eight adult gamers, encountering the slippery text "American McGee's Alice," demonstrate a variety of ways in which players attempt to manage their attention as they encounter a new text with many resonances. The range of their…

  10. Text Editing in Chemistry Instruction.

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Low, Renae; Sweller, John

    2002-01-01

    Describes experiments with Australian high school students that investigated differences in performance on chemistry word problems between two learning strategies: text editing, and conventional problem solving. Concluded that text editing had no advantage over problem solving in stoichiometry problems, and that the suitability of a text editing…

  11. Choosing Software for Text Processing.

    ERIC Educational Resources Information Center

    Mason, Robert M.

    1983-01-01

    Review of text processing software for microcomputers covers data entry, text editing, document formatting, and spelling and proofreading programs including "Wordstar," "PeachText," "PerfectWriter," "Select," and "The Word Plus." "The Whole Earth Software Catalog" and a new terminal to be manufactured for OCLC by IBM are mentioned. (EJS)

  12. Informational Text and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What constitutes an informational text covers a broad swath of different types of texts. Biographies & memoirs, speeches, opinion pieces & argumentative essays, and historical, scientific or technical accounts of a non-narrative nature are all included in what the Common Core State Standards (CCSS) envisions as informational text. Also included…

  13. Text Signals Influence Team Artifacts

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Rysavy, Monica D.; Taricani, Ellen

    2015-01-01

    This exploratory quasi-experimental investigation describes the influence of text signals on team visual map artifacts. In two course sections, four-member teams were given one of two print-based text passage versions on the course-related topic "Social influence in groups" downloaded from Wikipedia; this text had two paragraphs, each…

  14. Text Signals Influence Team Artifacts

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Rysavy, Monica D.; Taricani, Ellen

    2015-01-01

    This exploratory quasi-experimental investigation describes the influence of text signals on team visual map artifacts. In two course sections, four-member teams were given one of two print-based text passage versions on the course-related topic "Social influence in groups" downloaded from Wikipedia; this text had two paragraphs, each…

  15. The Only Safe SMS Texting Is No SMS Texting.

    PubMed

    Toth, Cheryl; Sacopulos, Michael J

    2015-01-01

    Many physicians and practice staff use short messaging service (SMS) text messaging to communicate with patients. But SMS text messaging is unencrypted, insecure, and does not meet HIPAA requirements. In addition, the short and abbreviated nature of text messages creates opportunities for misinterpretation, and can negatively impact patient safety and care. Until recently, asking patients to sign a statement that they understand and accept these risks--as well as having policies, device encryption, and cyber insurance in place--would have been enough to mitigate the risk of using SMS text in a medical practice. But new trends and policies have made SMS text messaging unsafe under any circumstance. This article explains these trends and policies, as well as why only secure texting or secure messaging should be used for physician-patient communication. PMID:26856033

  16. ParaText : scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research, and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections become essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.

  17. Text Association Analysis and Ambiguity in Text Mining

    NASA Astrophysics Data System (ADS)

    Bhonde, S. B.; Paikrao, R. L.; Rahane, K. U.

    2010-11-01

    Text mining (TM) is the process of analyzing a semantically rich document or set of documents to understand the content and meaning of the information they contain. Research in text mining will enhance humans' ability to process massive quantities of information, and it has high commercial value. The paper first introduces TM and its definition, and then gives an overview of the text mining process and its applications. Up to now, not much research in text mining, especially in concept/entity extraction, has focused on the ambiguity problem. This paper addresses ambiguity issues in natural language texts and presents a new technique for resolving the ambiguity problem in extracting concepts/entities from texts. In the end, it shows the importance of TM in knowledge discovery and highlights the upcoming challenges of document mining and the opportunities it offers.

  18. Opinion Integration and Summarization

    ERIC Educational Resources Information Center

    Lu, Yue

    2011-01-01

    As Web 2.0 applications become increasingly popular, more and more people express their opinions on the Web in various ways in real time. Such wide coverage of topics and abundance of users make the Web an extremely valuable source for mining people's opinions about all kinds of topics. However, since the opinions are usually expressed as…

  19. Opinion Integration and Summarization

    ERIC Educational Resources Information Center

    Lu, Yue

    2011-01-01

    As Web 2.0 applications become increasingly popular, more and more people express their opinions on the Web in various ways in real time. Such wide coverage of topics and abundance of users make the Web an extremely valuable source for mining people's opinions about all kinds of topics. However, since the opinions are usually expressed as…

  20. Statement Summarizing Research Findings on the Issue of the Relationship Between Food-Additive-Free Diets and Hyperkinesis in Children.

    ERIC Educational Resources Information Center

    Lipton, Morris; Wender, Esther

    The National Advisory Committee on Hyperkinesis and Food Additives paper summarized some research findings on the issue of the relationship between food-additive-free diets and hyperkinesis in children. Based on several challenge studies, it is concluded that the evidence generally refutes Dr. B. F. Feingold's claim that artificial colorings in…

  1. The Effect of a Summarization-Based Cumulative Retelling Strategy on Listening Comprehension of College Students with Visual Impairments

    ERIC Educational Resources Information Center

    Tuncer, A. Tuba; Altunay, Banu

    2006-01-01

    Because students with visual impairments need auditory materials in order to access information, listening comprehension skills are important to their academic success. The present study investigated the effectiveness of summarization-based cumulative retelling strategy on the listening comprehension of four visually impaired college students. An…

  2. Statement Summarizing Research Findings on the Issue of the Relationship Between Food-Additive-Free Diets and Hyperkinesis in Children.

    ERIC Educational Resources Information Center

    Lipton, Morris; Wender, Esther

    The National Advisory Committee on Hyperkinesis and Food Additives paper summarized some research findings on the issue of the relationship between food-additive-free diets and hyperkinesis in children. Based on several challenge studies, it is concluded that the evidence generally refutes Dr. B. F. Feingold's claim that artificial colorings in…

  3. Rewriting and Paraphrasing Source Texts in Second Language Writing

    ERIC Educational Resources Information Center

    Shi, Ling

    2012-01-01

    The present study is based on interviews with 48 students and 27 instructors in a North American university and explores whether students and professors across faculties share the same views on the use of paraphrased, summarized, and translated texts in four examples of L2 student writing. Participants' comments centered on whether the paraphrases…

  4. Automatic analysis of multispectral images

    NASA Astrophysics Data System (ADS)

    Desouza, R. C. M.; Mitsuo, Fernando Augusto, II; Moreira, J. C.; Dutra, L. V.

    1981-08-01

    Some ideas of automatic multispectral image analysis are introduced. Automatic multispectral image analysis plays a central role in numerically oriented remote sensing systems. It presupposes the utilization of electronic equipment, mainly computers and their peripherals, to help people interpret the information contained in multispectral digital imagery. This necessity derives from the great amount of multispectral data gathered by remote sensors aboard satellites and airplanes. When the number of channels or spectral bands is increased, the interpretation becomes more complex and subjective. In some cases, for example harvest estimation at the national or regional level, it is imperative to use computer systems to complete the work within the time required. Automatic analysis also aims to eliminate subjective factors that appear in human interpretation, thus increasing the overall precision.

  5. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, A.J.

    1994-05-10

    Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.

  6. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, Anthony J.

    1994-05-10

    Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.

  7. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  8. ParaText : scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.
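
    As a small illustration of the LSA step described above (not the ParaText implementation itself), a term-document matrix can be reduced to a few latent dimensions with a sparse truncated SVD. The toy matrix and the rank k are assumptions for demonstration only.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        # toy term-document matrix: rows are terms, columns are documents
        A = csr_matrix(np.array([[2., 0., 1.],
                                 [0., 1., 1.],
                                 [1., 1., 0.],
                                 [0., 2., 1.]]))

        k = 2                             # number of latent dimensions
        u, s, vt = svds(A, k=k)           # sparse truncated SVD
        doc_coords = (np.diag(s) @ vt).T  # one k-dimensional vector per document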

  9. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.
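
    Read literally, the claim describes component-wise weighting of two vectors of a text item followed by a combination into a third vector. A minimal sketch under the assumption that "combine" means an element-wise sum of the weighted values (the patent does not fix the combination rule):

        import numpy as np

        def combine(first_vec, second_vec, w1, w2):
            # weight the components of each vector, then combine the
            # weighted values into a third vector (element-wise sum assumed)
            return w1 * np.asarray(first_vec) + w2 * np.asarray(second_vec)

        third = combine([0.2, 0.5, 0.1], [0.4, 0.0, 0.3],
                        w1=np.array([1.0, 0.5, 2.0]),
                        w2=np.array([0.5, 1.5, 1.0]))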

  10. Detection of text strings from mixed text/graphics images

    NASA Astrophysics Data System (ADS)

    Tsai, Chien-Hua; Papachristou, Christos A.

    2000-12-01

    A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region growing) strategy, the algorithm is able to distinguish text from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the document skew that usually occurs in scanned documents, without requiring skew correction prior to discrimination, whereas methods such as projection profiles or run-length coding are not always suitable under this condition. The method has been tested on a variety of printed documents from different origins with one common set of parameters, and the performance of the algorithm in terms of computational efficiency is demonstrated on several test images from the evaluation.
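
    A sketch of the union-find (region-growing) grouping step: connected components whose bounding boxes satisfy some closeness test are merged into candidate text strings. The close predicate is a placeholder; the paper's actual grouping criteria are not given in the abstract.

        class UnionFind:
            def __init__(self, n):
                self.parent = list(range(n))
            def find(self, i):
                while self.parent[i] != i:
                    self.parent[i] = self.parent[self.parent[i]]  # path halving
                    i = self.parent[i]
                return i
            def union(self, i, j):
                self.parent[self.find(i)] = self.find(j)

        def group_components(boxes, close):
            # boxes: bounding boxes of connected components; close(a, b)
            # decides whether two components belong to the same text string
            uf = UnionFind(len(boxes))
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    if close(boxes[i], boxes[j]):
                        uf.union(i, j)
            groups = {}
            for i in range(len(boxes)):
                groups.setdefault(uf.find(i), []).append(i)
            return list(groups.values())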

  11. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  12. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye's pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state was developed based on the intensity changes of the fundus reflex.

  13. Texting while driving: is speech-based text entry less risky than handheld text entry?

    PubMed

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. PMID:25089769

  14. Situational Interest in Literary Text

    PubMed

    Schraw

    1997-10-01

    This study examined relationships among text characteristics, situational interest, two measures of text understanding, and personal responses when reading a literary text. A factor analysis of ratings made after reading revealed six interrelated text characteristics. Of these, suspense, coherence, and thematic complexity explained 54% of the variance in interest. Additional analyses found that situational interest was unrelated to a multiple-choice test of main ideas but was related to personal responses and holistic interpretations of the text. These results suggest that multiple aspects of literary texts are interesting to readers, and that interest is related to personal engagement variables even when it is not related to the comprehension of main ideas. Copyright 1997 Academic Press PMID:9356182

  15. Zum Uebersetzen fachlicher Texte (On the Translation of Technical Texts)

    ERIC Educational Resources Information Center

    Friederich, Wolf

    1975-01-01

    Reviews a 1974 East German publication on translation of scientific literature from Russian to German. Considers terminology, different standard levels of translation in East Germany, and other matters related to translation. (Text is in German.) (DH)

  16. Understanding and Teaching Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2014-01-01

    Teachers in today's classrooms struggle every day to design instructional interventions that would build students' reading skills and strategies in order to ensure their comprehension of complex texts. Text complexity can be determined in both qualitative and quantitative ways. In this article, the authors describe various innovative…

  17. Improve Reading with Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    The Common Core State Standards have cast a renewed light on reading instruction, presenting teachers with the new requirements to teach close reading of complex texts. Teachers and administrators should consider a number of essential features of close reading: They are short, complex texts; rich discussions based on worthy questions; revisiting…

  18. Understanding and Teaching Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2014-01-01

    Teachers in today's classrooms struggle every day to design instructional interventions that would build students' reading skills and strategies in order to ensure their comprehension of complex texts. Text complexity can be determined in both qualitative and quantitative ways. In this article, the authors describe various innovative…

  19. Automatic Scaffolding and Measurement of Concept Mapping for EFL Students to Write Summaries

    ERIC Educational Resources Information Center

    Yang, Yu-Fen

    2015-01-01

    An incorrect concept map may obstruct a student's comprehension when writing summaries if they are unable to grasp key concepts when reading texts. The purpose of this study was to investigate the effects of automatic scaffolding and measurement of three-layer concept maps on improving university students' writing summaries. The automatic…

  20. Problem of Automatic Thesaurus Construction (K Voprosu Ob Avtomaticheskom Postroenii Tezarusa). Subject Country: USSR.

    ERIC Educational Resources Information Center

    Ivanova, I. S.

    With respect to automatic indexing and information retrieval, statistical analysis of word usage in written texts is finding broad application in the solution of a number of problems. One of these problems is compiling a thesaurus on a digital computer. Using two methods, a comparative experiment in automatic thesaurus construction is presented.…
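
    The statistical word-usage analysis mentioned here can be sketched as a windowed co-occurrence count, from which candidate thesaurus groupings are read off. The window size and the grouping rule are assumptions, since the abstract does not detail the two methods compared.

        from collections import Counter

        def cooccurrence_counts(texts, window=5):
            # count word pairs that appear within a sliding window; pairs
            # with high counts are candidates for the same thesaurus grouping
            pairs = Counter()
            for text in texts:
                words = text.lower().split()
                for i in range(len(words)):
                    for j in range(i + 1, min(i + window, len(words))):
                        pairs[tuple(sorted((words[i], words[j])))] += 1
            return pairs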

  1. Sentence Similarity Analysis with Applications in Automatic Short Answer Grading

    ERIC Educational Resources Information Center

    Mohler, Michael A. G.

    2012-01-01

    In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and also introduce a novel technique to improve the performance of the system by integrating…

  2. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  3. On Automatic Support to Indexing a Life Sciences Data Base.

    ERIC Educational Resources Information Center

    Vleduts-Stokolov, N.

    1982-01-01

    Describes technique developed as automatic support to subject heading indexing at BIOSIS based on use of formalized language for semantic representation of biological texts and subject headings. Language structures, experimental results, and analysis of journal/subject heading and author/subject heading correlation data are discussed. References…

  4. Sentence Similarity Analysis with Applications in Automatic Short Answer Grading

    ERIC Educational Resources Information Center

    Mohler, Michael A. G.

    2012-01-01

    In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and also introduce a novel technique to improve the performance of the system by integrating…

  5. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  6. Graphonomics, Automaticity and Handwriting Assessment

    ERIC Educational Resources Information Center

    Tucha, Oliver; Tucha, Lara; Lange, Klaus W.

    2008-01-01

    A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…

  7. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
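
    The target form of the transformation can be illustrated with Python's sqlite3 module (the paper concerns legacy web applications, so this stands in for whatever language and database driver they handle): a prepared, parameterized query keeps hostile input as pure data.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        name = "alice' OR '1'='1"   # hostile input

        # unsafe form (what the transformation removes): string concatenation
        #   conn.execute("SELECT role FROM users WHERE name = '" + name + "'")
        # safe form (what it inserts): the input stays data, never SQL text
        rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
        print(rows)   # [] -- the injection attempt matches no row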

  8. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
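
    A minimal sketch of interval error propagation: each measurement becomes an interval, and arithmetic on intervals yields guaranteed bounds on the result. This is generic interval arithmetic, not the INTLAB implementation the authors use.

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)
            def __mul__(self, other):
                p = [self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi]
                return Interval(min(p), max(p))
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        x = Interval(1.9, 2.1)   # a measurement of 2.0 +/- 0.1
        y = Interval(2.8, 3.2)   # a measurement of 3.0 +/- 0.2
        print(x + y, x * y)      # guaranteed bounds on the propagated error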

  9. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  10. Bubble vector in automatic merging

    NASA Technical Reports Server (NTRS)

    Pamidi, P. R.; Butler, T. G.

    1987-01-01

    It is shown that the DMAP language is capable of building a set of vectors that can grow incrementally and be applied automatically and economically within a DMAP loop that appends sub-matrices generated within the loop to a core matrix. The method of constructing such vectors is explained.

  11. Combining information on multiple instrumental variables in Mendelian randomization: comparison of allele score and summarized data methods.

    PubMed

    Burgess, Stephen; Dudbridge, Frank; Thompson, Simon G

    2016-05-20

    Mendelian randomization is the use of genetic instrumental variables to obtain causal inferences from observational data. Two recent developments for combining information on multiple uncorrelated instrumental variables (IVs) into a single causal estimate are as follows: (i) allele scores, in which individual-level data on the IVs are aggregated into a univariate score, which is used as a single IV, and (ii) a summary statistic method, in which causal estimates calculated from each IV using summarized data are combined in an inverse-variance weighted meta-analysis. To avoid bias from weak instruments, unweighted and externally weighted allele scores have been recommended. Here, we propose equivalent approaches using summarized data and also provide extensions of the methods for use with correlated IVs. We investigate the impact of different choices of weights on the bias and precision of estimates in simulation studies. We show that allele score estimates can be reproduced using summarized data on genetic associations with the risk factor and the outcome. Estimates from the summary statistic method using external weights are biased towards the null when the weights are imprecisely estimated; in contrast, allele score estimates are unbiased. With equal or external weights, both methods provide appropriate tests of the null hypothesis of no causal effect even with large numbers of potentially weak instruments. We illustrate these methods using summarized data on the causal effect of low-density lipoprotein cholesterol on coronary heart disease risk. It is shown that a more precise causal estimate can be obtained using multiple genetic variants from a single gene region, even if the variants are correlated. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:26661904
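
    The summary statistic (inverse-variance weighted) method for uncorrelated IVs can be sketched directly from this description: per-variant ratio estimates are combined in a fixed-effect meta-analysis. The first-order standard errors below are a common simplification (se_x is unused under it), and the paper's correlated-IV extension is omitted; the example numbers are made up.

        import numpy as np

        def ivw_estimate(beta_x, se_x, beta_y, se_y):
            # beta_x, se_x: genetic associations with the risk factor
            # beta_y, se_y: genetic associations with the outcome
            beta_x, beta_y = np.asarray(beta_x), np.asarray(beta_y)
            theta = beta_y / beta_x                 # per-variant ratio estimates
            se = np.asarray(se_y) / np.abs(beta_x)  # first-order standard errors
            w = 1.0 / se**2                         # inverse-variance weights
            return np.sum(w * theta) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

        est, est_se = ivw_estimate([0.12, 0.20, 0.08], [0.01, 0.02, 0.01],
                                   [0.05, 0.09, 0.03], [0.01, 0.02, 0.01])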

  12. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2001-01-01

    In this contract, which is a component of a larger contract that we plan to submit in the coming months, we plan to study the preprocessing issues that arise in applying natural language processing techniques to NASA-KSC problem reports. The goals of this work will be to deal with the issues of: a) automatically obtaining the problem reports from NASA-KSC databases, b) the format of these reports, and c) the conversion of these reports to a format that will be adequate for our natural language software. At the end of this contract, we expect that these problems will be solved and that we will be ready to apply our natural language software to a text database of over 1000 KSC problem reports.

  13. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
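
    A sketch of the redundancy test between consecutive frames, assuming the Puzicha-style form of the Jeffrey divergence over normalized color histograms (the abstract does not spell out the exact form, and any threshold is an assumption):

        import numpy as np

        def jeffrey_divergence(p, q, eps=1e-12):
            # symmetric divergence between two normalized color histograms,
            # using the mean histogram m as the common reference
            p = np.asarray(p, float); p = p / p.sum()
            q = np.asarray(q, float); q = q / q.sum()
            m = 0.5 * (p + q)
            return float(np.sum(p * np.log((p + eps) / (m + eps)) +
                                q * np.log((q + eps) / (m + eps))))

        # frames whose histogram divergence falls below a chosen threshold
        # would be treated as redundant and dropped from the summary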

  14. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    PubMed Central

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874

  15. Text structures in medical text processing: empirical evidence and a text understanding prototype.

    PubMed Central

    Hahn, U.; Romacker, M.

    1997-01-01

    We consider the role of textual structures in medical texts. In particular, we examine the impact that failing to recognize text phenomena has on the validity of medical knowledge bases fed by a natural language understanding front-end. First, we review the results of an empirical study on a sample of medical texts, considering various forms of local coherence phenomena (anaphora and textual ellipses). We then discuss the representation bias that is likely to emerge in the text knowledge base when these phenomena are not dealt with--mainly the emergence of referentially incoherent and invalid representations. We then turn to a medical text understanding system designed to account for local text coherence. PMID:9357739

  16. Machine aided indexing from natural language text

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    The NASA Lexical Dictionary (NLD) Machine Aided Indexing (MAI) system was designed to (1) reuse the indexing of the Defense Technical Information Center (DTIC); (2) reuse the indexing of the Department of Energy (DOE); and (3) reduce the time required for original indexing. This was done by automatically generating appropriate NASA thesaurus terms from either the other agency's index terms, or, for original indexing, from document titles and abstracts. The NASA STI Program staff devised two different ways to generate thesaurus terms from text. The first group of programs identified noun phrases by a parsing method that allowed for conjunctions and certain prepositions, on the assumption that indexable concepts are found in such phrases. Results were not always satisfactory, and it was noted that indexable concepts often occurred outside of noun phrases. The first method also proved to be too slow for the ultimate goal of interactive (online) MAI. The second group of programs used the knowledge base (KB), word proximity, and frequency of word and phrase occurrence to identify indexable concepts. Both methods are described and illustrated. Online MAI has been achieved, as well as several spinoff benefits, which are also described.

  17. Toward text understanding: classification of text documents by word map

    NASA Astrophysics Data System (ADS)

    Visa, Ari J. E.; Toivanen, Jarmo; Back, Barbro; Vanharanta, Hannu

    2000-04-01

    In many fields, for example business, engineering, and law, there is interest in the search and classification of text documents in large databases. Methods exist for information retrieval purposes; they are mainly based on keywords. In cases where keywords are lacking, information retrieval is problematic. One approach is to use the whole text document as a search key. Neural networks offer an adaptive tool for this purpose. This paper suggests a new adaptive approach to the problem of clustering and search in large text document databases. The approach is a multilevel one based on word-, sentence-, and paragraph-level maps. Here only the word map level is reported. The reported approach is based on smart encoding, on Self-Organizing Maps, and on document histograms. The results are very promising.
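
    A toy sketch of the word-map idea: documents are encoded as word histograms and a small one-dimensional self-organizing map is trained over them, so nearby map units come to represent similar documents. The grid size, learning rate, and neighborhood schedule are illustrative assumptions, not the paper's settings.

        import numpy as np

        def train_word_map(histograms, units=4, epochs=50, lr=0.5, seed=0):
            # tiny 1-D self-organizing map over document word histograms;
            # neighbouring units end up representing similar documents
            rng = np.random.default_rng(seed)
            X = np.asarray(histograms, float)
            W = rng.random((units, X.shape[1]))
            for t in range(epochs):
                radius = max(1.0, units / 2.0 * (1 - t / epochs))
                rate = lr * (1 - t / epochs)
                for x in X:
                    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
                    for u in range(units):
                        h = np.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                        W[u] += rate * h * (x - W[u])
            return W

        def classify(histogram, W):
            # a document is assigned to its best-matching map unit
            return int(np.argmin(np.linalg.norm(W - np.asarray(histogram, float), axis=1)))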

  18. Why is Light Text Harder to Read Than Dark Text?

    NASA Technical Reports Server (NTRS)

    Scharff, Lauren V.; Ahumada, Albert J.

    2005-01-01

    Scharff and Ahumada (2002, 2003) measured text legibility for light text and dark text. For paragraph readability and letter identification, responses to light text were slower and less accurate for a given contrast. Was this polarity effect (1) an artifact of our apparatus, (2) a physiological difference in the separate pathways for positive and negative contrast, or (3) the result of increased experience with dark text on light backgrounds? To rule out the apparatus-artifact hypothesis, all data were collected on one monitor. Its luminance was measured at all levels used, and the spatial effects of the monitor were reduced by pixel doubling and quadrupling (increasing the viewing distance to maintain constant angular size). Luminances of vertical and horizontal square-wave gratings were compared to assess display speed effects. They existed, even for 4-pixel-wide bars. Tests for polarity asymmetries in display speed were negative. Increased experience might develop full letter templates for dark text, while recognition of light letters is based on component features. Earlier, an observer ran all conditions at one polarity and then switched. If dark and light letters were intermixed, the observer might use component features on all trials and do worse on the dark letters, reducing the polarity effect. We varied polarity blocking (completely blocked, alternating smaller blocks, and intermixed blocks). Letter identification response times showed polarity effects at all contrasts and display resolution levels. Observers were also more accurate with higher contrasts and more pixels per degree. Intermixed blocks increased the polarity effect by reducing performance on the light letters, but only if the randomized block occurred prior to the nonrandomized block. Perhaps observers tried to use poorly developed templates, or they did not work as hard on the more difficult items. The experience hypothesis and the physiological gain hypothesis remain viable explanations.

  19. An Experimental Text-Commentary

    ERIC Educational Resources Information Center

    O'Brien, Joan

    1976-01-01

    An experimental text-commentary of selected passages from Sophocles'"Antigone" is described. The commentary is intended for students seeking more than a conventional translation who do not know enough Greek to use a standard commentary. (RM)

  20. Dangers of Texting While Driving

    MedlinePlus

    There is no national ban on texting or using a wireless phone while driving, but a number of states ...

  1. Text Mining in Social Networks

    NASA Astrophysics Data System (ADS)

    Aggarwal, Charu C.; Wang, Haixun

    Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classification, and clustering. While search and classification are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.

  2. Auxiliary circuit enables automatic monitoring of EKGs

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Auxiliary circuits allow direct, automatic monitoring of electrocardiograms by digital computers. One noiseless square-wave output signal for each trigger pulse from an electrocardiogram preamplifier is produced. The circuit also permits automatic processing of cardiovascular data from analog tapes.

  3. Text Structures, Readings, and Retellings: An Exploration of Two Texts

    ERIC Educational Resources Information Center

    Martens, Prisca; Arya, Poonam; Wilson, Pat; Jin, Lijun

    2007-01-01

    The purpose of this study is to explore the relationship between children's use of reading strategies and language cues while reading and their comprehension after reading two texts: "Cherries and Cherry Pits" (Williams, 1986) and "There's Something in My Attic" (Mayer, 1988). The data were drawn from a larger study of the reading strategies of…

  4. Text Format, Text Comprehension, and Related Reader Variables

    ERIC Educational Resources Information Center

    Nichols, Jodi L.

    2009-01-01

    This investigation explored relationships between format of text (electronic or print-based) and reading comprehension of adolescent readers. Also in question were potential influences on comprehension from related measures including academic placement of participants, gender, prior knowledge of the content, and overall reading ability. Influences…

  5. Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.

    PubMed

    Denecke, Kerstin

    2016-01-01

    Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains unconsidered, since retrieval and analysis are difficult and time-consuming, and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. To realise an automated analysis, natural language processing needs to be applied. Therefore, we analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for an automatic analysis of incident reports, but there are still challenges to be solved. PMID:27139389

  6. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published, and they are not strictly comparable due to larger variances in the counts than would be expected from sampling variance. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the numbers of errors found by different methods differ significantly. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
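
    For readers unfamiliar with the accounting in question, the sketch below (Python; the strings are invented) counts OCR errors as a unit-cost Levenshtein distance between ground truth and OCR output. As the paper stresses, changing the operation weights in the minimization changes the count, which is why unreported weights make published figures hard to reproduce:

        # Sketch: counting OCR errors as unit-cost Levenshtein (edit) distance
        # between ground truth and OCR output.
        def levenshtein(a: str, b: str) -> int:
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        truth = "The quick brown fox"
        ocr = "Tne quick brovvn fox"
        errors = levenshtein(truth, ocr)
        print(errors, 1 - errors / len(truth))   # error count, accuracy ratio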

  7. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for the automatic processing, analysis, and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive, automatic painting of distinct parts of the image. The A&CC address technical objectives in such directions as: 1) image processing, 2) image feature extraction, and 3) image analysis, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities arise from the use of artificial neural network technologies. We believe that the A&CC can be used in creating testing and control systems in various fields of industry and in military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in creating new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid, ensembles of particles, decoding of interferometric images, digitization of paper diagrams of electrical signals, text recognition, noise elimination and filtering of images, analysis of astronomical images and aerial photography, and detection of objects.

  8. GPU-Accelerated Text Mining

    SciTech Connect

    Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E

    2009-01-01

    Accelerating hardware devices represent a novel promise for improving performance in many problem domains, but it is not yet clear which accelerators are suitable for which domains. While there is no room in general-purpose processor design to significantly increase the processor frequency, developers are instead resorting to multi-core chips duplicating conventional computing capabilities on a single die. Yet accelerators offer more radical designs with a much higher level of parallelism and novel programming environments. The present work assesses the viability of text mining on CUDA. Text mining is one of the key concepts that has become prominent as an effective means to index the Internet, but its applications range beyond this scope and extend to providing document similarity metrics, the subject of this work. We have developed and optimized text search algorithms for GPUs to exploit their potential for massive data processing. We discuss the algorithmic challenges of parallelization for text search problems on GPUs and demonstrate the potential of these devices in experiments by reporting significant speedups. Our study may be one of the first to assess more complex text search problems for suitability for GPU devices, and it may also be one of the first to exploit and report on atomic instruction usage that has recently become available in NVIDIA devices.
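
    As a rough illustration of why this problem maps well to GPUs, the sketch below (Python/NumPy, with toy documents) expresses all-pairs document similarity as a single dense matrix product; NumPy-compatible GPU array libraries can run the same lines on the device. This is an illustrative reduction, not the authors' CUDA implementation:

        # Sketch: all-pairs document similarity as one dense matrix product.
        # This formulation is what makes the task GPU-friendly.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = ["gpu text mining", "mining text on gpus", "cooking pasta at home"]
        X = TfidfVectorizer().fit_transform(docs).toarray()

        Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
        similarity = Xn @ Xn.T   # cosine similarity matrix in a single GEMM
        print(np.round(similarity, 2))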

  9. 8 CFR 1205.1 - Automatic revocation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 8, Aliens and Nationality, Vol. 1 (2012-01-01). EXECUTIVE OFFICE FOR IMMIGRATION REVIEW, DEPARTMENT OF JUSTICE; IMMIGRATION REGULATIONS; REVOCATION OF APPROVAL OF PETITIONS. § 1205.1 Automatic revocation. (a) Reasons for automatic revocation. The approval of a petition or...

  10. 8 CFR 205.1 - Automatic revocation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Title 8, Aliens and Nationality, Vol. 1 (2012-01-01). DEPARTMENT OF HOMELAND SECURITY; IMMIGRATION REGULATIONS; REVOCATION OF APPROVAL OF PETITIONS. § 205.1 Automatic revocation. (a) Reasons for automatic revocation. The approval of a petition or self-petition made under section...

  11. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 30, Mineral Resources, Vol. 1 (2013-07-01). MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES; Hoisting and Mantrips. § 75.1405 Automatic couplers: "... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on..."

  12. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 30, Mineral Resources, Vol. 1 (2012-07-01). MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES; Hoisting and Mantrips. § 75.1405 Automatic couplers: "... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on..."

  13. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 30, Mineral Resources, Vol. 1 (2011-07-01). MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES; Hoisting and Mantrips. § 75.1405 Automatic couplers: "... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on..."

  14. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 30, Mineral Resources, Vol. 1 (2010-07-01). MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES; Hoisting and Mantrips. § 75.1405 Automatic couplers: "... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on..."

  15. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 30, Mineral Resources, Vol. 1 (2014-07-01). MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES; Hoisting and Mantrips. § 75.1405 Automatic couplers: "... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on..."

  16. Self-Compassion and Automatic Thoughts

    ERIC Educational Resources Information Center

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  17. Semantic Annotation of Complex Text Structures in Problem Reports

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Throop, David R.; Fleming, Land D.

    2011-01-01

    Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.

  18. Young Children's Thinking in Relation to Texts: A Comparison with Older Children.

    ERIC Educational Resources Information Center

    Feathers, Karen M.

    2002-01-01

    Compared the thinking of kindergartners and sixth-graders as expressed in unassisted retellings of a narrative text. Found no significant age differences in retelling lengths and few significant age differences in the amounts of the types of thinking. Older children tended to summarize paragraphs and single sentences; young children tended to summarize…

  19. Biomarker Identification Using Text Mining

    PubMed Central

    Li, Hui; Liu, Chunmei

    2012-01-01

    Identifying molecular biomarkers has become one of the important tasks for scientists assessing the different phenotypic states of cells or organisms correlated to the genotypes of diseases from large-scale biological data. In this paper, we propose a text-mining-based method to discover biomarkers from PubMed. First, we construct a database based on a dictionary, and then we use a finite state machine to identify the biomarkers. Our text mining method provides a highly reliable approach to discovering biomarkers in the PubMed database. PMID:23197989
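
    A minimal sketch of the dictionary-driven matching step (Python; the biomarker names and the sentence are invented, and a token trie stands in for the paper's finite state machine):

        # Sketch: dictionary matching of biomarker names in text. A token trie
        # stands in for the paper's finite state machine; entries are invented.
        biomarkers = {"psa", "ca 125", "her2", "c-reactive protein"}

        trie = {}
        for name in biomarkers:
            node = trie
            for token in name.split():
                node = node.setdefault(token, {})
            node["$end"] = name   # marks a complete dictionary entry

        def find_biomarkers(text):
            tokens = text.lower().replace(",", " ").replace(".", " ").split()
            hits, i = [], 0
            while i < len(tokens):
                node, j, match = trie, i, None
                while j < len(tokens) and tokens[j] in node:
                    node = node[tokens[j]]
                    j += 1
                    if "$end" in node:          # longest match so far
                        match = (node["$end"], j)
                if match:
                    hits.append(match[0])
                    i = match[1]
                else:
                    i += 1
            return hits

        print(find_biomarkers("Serum PSA and CA 125 were elevated."))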

  20. Automatically scramming nuclear reactor system

    DOEpatents

    Ougouag, Abderrafi M.; Schultz, Richard R.; Terry, William K.

    2004-10-12

    An automatically scramming nuclear reactor system. One embodiment comprises a core having a coolant inlet end and a coolant outlet end. A cooling system operatively associated with the core provides coolant to the coolant inlet end and removes heated coolant from the coolant outlet end, thus maintaining a pressure differential therebetween during a normal operating condition of the nuclear reactor system. A guide tube is positioned within the core with a first end of the guide tube in fluid communication with the coolant inlet end of the core, and a second end of the guide tube in fluid communication with the coolant outlet end of the core. A control element is positioned within the guide tube and is movable therein between upper and lower positions, and automatically falls under the action of gravity to the lower position when the pressure differential drops below a safe pressure differential.

  1. Automatic design of magazine covers

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.

    2012-03-01

    In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.

  2. Automatic registration of satellite imagery

    NASA Technical Reports Server (NTRS)

    Fonseca, Leila M. G.; Costa, Max H. M.; Manjunath, B. S.; Kenney, C.

    1997-01-01

    Image registration is one of the basic image processing operations in remote sensing. With the increase in the number of images collected every day from different sensors, automated registration of multi-sensor/multi-spectral images has become an important issue. A wide range of registration techniques has been developed for many different types of applications and data. The objective of this paper is to present an automatic registration algorithm which uses a multiresolution analysis procedure based upon the wavelet transform. The procedure is completely automatic and relies on the grey-level information content of the images and their local wavelet transform modulus maxima. The registration algorithm is very simple and easy to apply because it requires essentially a single parameter. We have obtained very encouraging results on test data sets from TM and SPOT sensor images of forest, urban, and agricultural areas.

  3. Assessment of home-based behavior modification programs for autistic children: reliability and validity of the behavioral summarized evaluation.

    PubMed

    Oneal, Brent J; Reeb, Roger N; Korte, John R; Butter, Eliot J

    2006-01-01

    Since the publication of Lovaas' (1987) impressive findings, there has been a proliferation of home-based behavior modification programs for autistic children. Parents and other paraprofessionals often play key roles in the implementation and monitoring of these programs. The Behavioral Summarized Evaluation (BSE) was developed for professionals and paraprofessionals to use in assessing the severity of autistic symptoms over the course of treatment. This paper examined the psychometric properties of the BSE (inter-item consistency, factorial composition, convergent validity, and sensitivity to parents' perceptions of symptom change over time) when used by parents of autistic youngsters undergoing home-based intervention. Recommendations for future research are presented. PMID:17000600

  4. Sex and gender differences in autism spectrum disorder: summarizing evidence gaps and identifying emerging areas of priority.

    PubMed

    Halladay, Alycia K; Bishop, Somer; Constantino, John N; Daniels, Amy M; Koenig, Katheen; Palmer, Kate; Messinger, Daniel; Pelphrey, Kevin; Sanders, Stephan J; Singer, Alison Tepper; Taylor, Julie Lounds; Szatmari, Peter

    2015-01-01

    One of the most consistent findings in autism spectrum disorder (ASD) research is a higher rate of ASD diagnosis in males than females. Despite this, remarkably little research has focused on the reasons for this disparity. Better understanding of this sex difference could lead to major advancements in the prevention or treatment of ASD in both males and females. In October of 2014, Autism Speaks and the Autism Science Foundation co-organized a meeting that brought together almost 60 clinicians, researchers, parents, and self-identified autistic individuals. Discussion at the meeting is summarized here with recommendations on directions of future research endeavors. PMID:26075049

  5. Text-image alignment for historical handwritten documents

    NASA Astrophysics Data System (ADS)

    Zinger, S.; Nerbonne, J.; Schomaker, L.

    2009-01-01

    We describe our work on text-image alignment in the context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set: images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines with their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting serves as a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take the relative word length into account, we define expressions for the cost function that has to be minimized when aligning text words with their images. We apply right-to-left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
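
    The heart of the relative-length idea can be sketched in a few lines (Python; the line width and words are invented): each transcript word is assigned a horizontal span of the line image in proportion to its character count. The paper's full method refines such estimates with detected spaces, a cost function, and exhaustive search:

        # Sketch: proportional text-to-image alignment for one handwritten
        # line: each transcript word gets a span of the line image whose width
        # is proportional to its character count. Inputs are invented.
        def proportional_spans(words, line_width_px):
            total_chars = sum(len(w) for w in words)
            spans, x = [], 0.0
            for w in words:
                width = line_width_px * len(w) / total_chars
                spans.append((w, round(x), round(x + width)))
                x += width
            return spans

        for word, x0, x1 in proportional_spans(["the", "quick", "brown", "fox"], 800):
            print(f"{word:>6}: pixels {x0}-{x1}")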

  6. Automatic computation of transfer functions

    DOEpatents

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.

  7. Automatically-Programed Machine Tools

    NASA Technical Reports Server (NTRS)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, acronym for Automatically Programed Tools, is among most widely used software systems for computerized machine tools. APT developed for explicit purpose of providing effective software system for programing NC machine tools. APT system includes specification of APT programing language and language processor, which executes APT statements and generates NC machine-tool motions specified by APT statements.

  8. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  9. Automatic connector for structural beams

    NASA Technical Reports Server (NTRS)

    Von Tiessehausen, G. F.

    1980-01-01

    Lightweight connector automatically aligns beams to be joined, and withstands torsion, tension, and compression loads. One beam has connector, other has receptor. Bracket aligns connector and receptor. When actuated, spring in connector pushes shaft into receptor. Hooks on shaft snap to lock into receptor slots. Union can be separated easily without damage. Connectors are designed for in-space assembly, but may be suited to ground assemblies as well.

  10. Automatic translation among spoken languages

    NASA Astrophysics Data System (ADS)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  12. A Visually Oriented Text Editor

    NASA Technical Reports Server (NTRS)

    Gomez, J. E.

    1985-01-01

    HERMAN employs Evans & Sutherland Picture System 2 to provide screen-oriented editing capability for DEC PDP-11 series computer. Text altered by visual indication of characters changed. Group of HERMAN commands provides for higher-level operations. HERMAN provides special features for editing FORTRAN source programs.

  13. Reading Instruction and Text Difficulty

    ERIC Educational Resources Information Center

    Donne, Vicki

    2011-01-01

    An observational study investigated the influence of text difficulty (independent, instructional, or frustration level) on the reading experiences of students in grades 1-3 in two schools for the deaf. Participants included 12 students who are deaf or hard of hearing and 5 educators. The most significant findings were twofold. First, students…

  14. Transformation and Text: Journal Pedagogy.

    ERIC Educational Resources Information Center

    Ellis, Carol

    One intention that an instructor had for her new course called "Writing and Healing: Women's Journal Writing" was to make apparent the power of self-written text to transform the writer. She asked her students--women studying women writing their lives and women writing their own lives--to write three pages a day and to focus on change. The…

  15. Reviving "Walden": Mining the Text.

    ERIC Educational Resources Information Center

    Hewitt, Julia

    2000-01-01

    Describes how the author and her high school English students begin their study of Thoreau's "Walden" by mining the text for quotations to inspire their own writing and discussion on the topic, "How does Thoreau speak to you or how could he speak to someone you know?" (SR)

  16. Solar Concepts: A Background Text.

    ERIC Educational Resources Information Center

    Gorham, Jonathan W.

    This text is designed to provide teachers, students, and the general public with an overview of key solar energy concepts. Various energy terms are defined and explained. Basic thermodynamic laws are discussed. Alternative energy production is described in the context of the present energy situation. Described are the principal contemporary solar…

  17. Predictive Encoding in Text Compression.

    ERIC Educational Resources Information Center

    Raita, Timo; Teuhola, Jukka

    1989-01-01

    Presents three text compression methods of increasing power and evaluates each based on the trade-off between compression gain and processing time. The advantages of using hash coding for speed and optimal arithmetic coding to successor information for compression gain are discussed. (26 references) (Author/CLB)

  18. Automatic Contrail Detection and Segmentation

    NASA Technical Reports Server (NTRS)

    Weiss, John M.; Christopher, Sundar A.; Welch, Ronald M.

    1998-01-01

    Automatic contrail detection is of major importance in the study of the atmospheric effects of aviation. Due to the large volume of satellite imagery, selecting contrail images for study by hand is impractical and highly subject to human error. It is far better to have a system in place that will automatically evaluate an image to determine 1) whether it contains contrails and 2) where the contrails are located. Preliminary studies indicate that it is possible to automatically detect and locate contrails in Advanced Very High Resolution Radiometer (AVHRR) imagery with a high degree of confidence. Once contrails have been identified and localized in a satellite image, it is useful to segment the image into contrail versus noncontrail pixels. The ability to partition image pixels makes it possible to determine the optical properties of contrails, including optical thickness and particle size. In this paper, we describe a new technique for segmenting satellite images containing contrails. This method has good potential for creating a contrail climatology in an automated fashion. The majority of contrails are detected, rejecting clutter in the image, even cirrus streaks. Long, thin contrails are most easily detected. However, some contrails may be missed because they are curved, diffused over a large area, or present in short segments. Contrails average 2-3 km in width for the cases studied.

  19. Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users

    PubMed Central

    Shatkay, Hagit; Pan, Fengxia; Rzhetsky, Andrey; Wilbur, W. John

    2008-01-01

    Motivation: Much current research in biomedical text mining is concerned with serving biologists by extracting certain information from scientific text. We note that there is no 'average biologist' client; different users have distinct needs. For instance, as noted in past evaluation efforts (BioCreative, TREC, KDD), database curators are often interested in sentences showing experimental evidence and methods. Conversely, lab scientists searching for known information about a protein may seek facts, typically stated with high confidence. Text-mining systems can target specific end-users and become more effective if the system can first identify text regions rich in the type of scientific content that is of interest to the user, retrieve documents that have many such regions, and focus on fact extraction from these regions. Here, we study the ability to characterize and classify such text automatically. We have recently introduced a multi-dimensional categorization and annotation scheme, developed to be applicable to a wide variety of biomedical documents and scientific statements, while intended to support specific biomedical retrieval and extraction tasks. Results: The annotation scheme was applied to a large corpus in a controlled effort by eight independent annotators, where three individual annotators independently tagged each sentence. We then trained and tested machine learning classifiers to automatically categorize sentence fragments based on the annotation. We discuss here the issues involved in this task, and present an overview of the results. The latter strongly suggest that automatic annotation along most of the dimensions is highly feasible, and that this new framework for scientific sentence categorization is applicable in practice. Contact: shatkay@cs.queensu.ca PMID:18718948
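
    A minimal sketch of the classification step (Python; the sentence fragments, labels, and the single evidence/speculation dimension are invented stand-ins for the paper's corpus and multi-dimensional scheme):

        # Sketch: tagging sentence fragments along one dimension. The corpus,
        # labels, and the single evidence/speculation axis are toy stand-ins.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        fragments = [
            "western blot analysis confirmed the interaction",
            "these findings suggest a possible role",
            "binding was measured by surface plasmon resonance",
            "the protein may be involved in signalling",
        ]
        labels = ["evidence", "speculation", "evidence", "speculation"]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(fragments, labels)
        print(model.predict(["the assay showed strong binding"]))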

  20. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  1. The TEXT upgrade vertical interferometer

    NASA Astrophysics Data System (ADS)

    Hallock, G. A.; Gartman, M. L.; Li, W.; Chiang, K.; Shin, S.; Castles, R. L.; Chatterjee, R.; Rahman, A. S.

    1992-10-01

    A far-infrared interferometer has been installed on TEXT upgrade to obtain electron density profiles. The primary system views the plasma vertically through a set of large (60-cm radial × 7.62-cm toroidal) diagnostic ports. A 1-cm channel spacing (59 channels total) and fast electronic time response are used to provide high resolution for radial profiles and perturbation experiments. Initial operation of the vertical system was obtained late in 1991, with six operating channels.

  2. [On two antique medical texts].

    PubMed

    Rosa, Maria Carlota

    2005-01-01

    The two texts presented here--Regimento proueytoso contra ha pestenença [literally, "useful regime against pestilence"] and Modus curandi cum balsamo ["curing method using balm"]--represent the extent of Portugal's known medical library until circa 1530, produced in gothic letters by foreign printers: Germany's Valentim Fernandes, perhaps the era's most important printer, who worked in Lisbon between 1495 and 1518, and Germão Galharde, a Frenchman who practiced his trade in Lisbon and Coimbra between 1519 and 1560. Modus curandi, which came to light in 1974 thanks to bibliophile José de Pina Martins, is anonymous. Johannes Jacobi is believed to be the author of Regimento proueytoso, which was translated into Latin (Regimen contra pestilentiam), French, and English. Both texts are presented here in facsimile and in modern Portuguese, while the first has also been reproduced in archaic Portuguese using modern typographical characters. This philological venture into sixteenth-century medicine is supplemented by a scholarly glossary which serves as a valuable tool in interpreting not only Regimento proueytoso but also other texts from the era. Two articles place these documents in historical perspective. PMID:17500134

  3. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and subsequently the colors are computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute the relevant color for a term using a large image repository and image processing.
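
    The histogram-based variant can be sketched as follows (Python/NumPy; the bin count and the toy pixel pool are invented): pool pixels from the retrieved images, quantize them into coarse RGB bins, and report the centre of the most populated bin as the representative colour:

        # Sketch: histogram-based representative colour. Pixels pooled from
        # retrieved images are quantised into coarse RGB bins; the centre of
        # the most populated bin is the representative colour.
        import numpy as np

        def representative_color(pixels, bins=8):
            # pixels: (N, 3) uint8 array pooled from the retrieved images
            step = 256 // bins
            q = (pixels // step).astype(int)
            flat = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
            top = np.bincount(flat, minlength=bins ** 3).argmax()
            r, g, b = top // (bins * bins), (top // bins) % bins, top % bins
            return (r * step + step // 2, g * step + step // 2, b * step + step // 2)

        rng = np.random.default_rng(0)   # toy pool: mostly reddish pixels
        reds = rng.integers([180, 0, 0], [256, 60, 60], size=(900, 3))
        noise = rng.integers(0, 256, size=(100, 3))
        print(representative_color(np.vstack([reds, noise]).astype(np.uint8)))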

  4. Identifying Issue Frames in Text

    PubMed Central

    Sagi, Eyal; Diermeier, Daniel; Kaufmann, Stefan

    2013-01-01

    Framing, the effect of context on cognitive processes, is a prominent topic of research in psychology and public opinion research. Research on framing has traditionally relied on controlled experiments and manually annotated document collections. In this paper we present a method that allows for quantifying the relative strengths of competing linguistic frames based on corpus analysis. This method requires little human intervention and can therefore be efficiently applied to large bodies of text. We demonstrate its effectiveness by tracking changes in the framing of terror over time and comparing the framing of abortion by Democrats and Republicans in the U.S. PMID:23874909

  5. A Howardite-Eucrite-Diogenite (HED) Meteorite Compendium: Summarizing Samples of Asteroid 4 Vesta in Preparation for the Dawn Mission

    NASA Technical Reports Server (NTRS)

    Garber, J. M.; Righter, K.

    2011-01-01

    The Howardite-Eucrite-Diogenite (HED) suite of achondritic meteorites, thought to originate from asteroid 4 Vesta, has recently been summarized into a meteorite compendium. This compendium will serve as a guide for researchers interested in further analysis of HEDs, and we expect that interest in these samples will greatly increase with the planned arrival of the Dawn Mission at Vesta in August 2011. The focus of this abstract/poster is to (1) introduce and describe HED samples from both historical falls and Antarctic finds, and (2) provide information on unique HED samples available for study from the Antarctic Meteorite Collection at JSC, including the vesicular eucrite PCA91007, the olivine diogenite EETA79002, and the paired ALH polymict eucrites.

  6. Efficient Index for Handwritten Text

    NASA Astrophysics Data System (ADS)

    Kamel, Ibrahim

    This paper deals with one of the new emerging multimedia data types, namely handwritten cursive text. The paper presents two indexing methods for searching a collection of cursive handwriting. The first, the word-level index, treats a word as a pictogram and uses global features for retrieval; it is suitable for large collections of cursive text. The second, the stroke-level index, treats the word as a set of strokes; it is more accurate, but more costly, than the word-level index. Each word (or stroke) can be described with a set of features and thus can be stored as a point in the feature space. The Karhunen-Loeve transform is then used to minimize the number of features used (the data dimensionality) and thus the index size. Feature vectors are stored in an R-tree. We implemented both indexes and carried out many simulation experiments to measure the effectiveness and cost of the search algorithm. The proposed indexes achieve substantial savings in search time over sequential search. Moreover, the proposed indexes improve the matching rate by up to 46% over sequential search.
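
    A minimal sketch of the dimensionality-reduction step (Python/NumPy; random vectors stand in for the global word-shape features, and the retained dimension is arbitrary) computes the Karhunen-Loeve transform from the feature covariance and projects onto the leading components, yielding the short vectors that would be stored in the R-tree:

        # Sketch: Karhunen-Loeve transform (PCA) to shrink word-image feature
        # vectors before indexing. Random vectors stand in for real features.
        import numpy as np

        rng = np.random.default_rng(1)
        features = rng.normal(size=(500, 64))    # 500 words, 64 raw features

        mean = features.mean(axis=0)
        centered = features - mean
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
        components = eigvecs[:, ::-1][:, :8]     # top 8 principal directions

        reduced = centered @ components          # vectors to store in the R-tree
        print(reduced.shape)                     # (500, 8)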

  7. Offsite radiation doses summarized from Hanford environmental monitoring reports for the years 1957-1984. [Contains glossary]

    SciTech Connect

    Soldat, J.K.; Price, K.R.; McCormack, W.D.

    1986-02-01

    Since 1957, evaluations of offsite impacts from each year of operation have been summarized in publicly available, annual environmental reports. These evaluations included estimates of potential radiation exposure to members of the public, either in terms of percentages of the then permissible limits or in terms of radiation dose. The estimated potential radiation doses to maximally exposed individuals from each year of Hanford operations are summarized in a series of tables and figures. The applicable standard for radiation dose to an individual for whom the maximum exposure was estimated is also shown. Although the estimates address potential radiation doses to the public from each year of operations at Hanford between 1957 and 1984, their sum will not produce an accurate estimate of doses accumulated over this time period. The estimates were the best evaluations available at the time to assess potential dose from the current year of operation as well as from any radionuclides still present in the environment from previous years of operation. There was a constant striving for improved evaluation of the potential radiation doses received by members of the public, and as a result the methods and assumptions used to estimate doses were periodically modified to add new pathways of exposure and to increase the accuracy of the dose calculations. Three conclusions were reached from this review: radiation doses reported for the years 1957 through 1984 for the maximum individual did not exceed the applicable dose standards; radiation doses reported over the past 27 years are not additive because of the changing and inconsistent methods used; and results from environmental monitoring and the associated dose calculations reported over the 27 years from 1957 through 1984 do not suggest a significant dose contribution from the buildup in the environment of radioactive materials associated with Hanford operations.

  8. Preferences of Knowledge Users for Two Formats of Summarizing Results from Systematic Reviews: Infographics and Critical Appraisals

    PubMed Central

    Crick, Katelynn; Hartling, Lisa

    2015-01-01

    Objectives To examine and compare preferences of knowledge users for two different formats of summarizing results from systematic reviews: infographics and critical appraisals. Design Cross-sectional. Setting Annual members’ meeting of a Network of Centres of Excellence in Knowledge Mobilization called TREKK (Translating Emergency Knowledge for Kids). TREKK is a national network of researchers, clinicians, health consumers, and relevant organizations with the goal of mobilizing knowledge to improve emergency care for children. Participants Members of the TREKK Network attending the annual meeting in October 2013. Outcome Measures Overall preference for infographic vs. critical appraisal format. Members’ rating of each format on a 10-point Likert scale for clarity, comprehensibility, and aesthetic appeal. Members’ impressions of the appropriateness of the two formats for their professional role and for other audiences. Results Among 64 attendees, 58 members provided feedback (91%). Overall, their preferred format was divided with 24/47 (51%) preferring the infographic to the critical appraisal. Preference varied by professional role, with 15/22 (68%) of physicians preferring the critical appraisal and 8/12 (67%) of nurses preferring the infographic. The critical appraisal was rated higher for clarity (mean 7.8 vs. 7.0; p = 0.03), while the infographic was rated higher for aesthetic appeal (mean 7.2 vs. 5.0; p<0.001). There was no difference between formats for comprehensibility (mean 7.6 critical appraisal vs. 7.1 infographic; p = 0.09). Respondents indicated the infographic would be most useful for patients and their caregivers, while the critical appraisal would be most useful for their professional roles. Conclusions Infographics are considered more aesthetically appealing for summarizing evidence; however, critical appraisal formats are considered clearer and more comprehensible. Our findings show differences in terms of audience-specific preferences for presentation of research results. This study supports other research indicating that tools for knowledge dissemination and translation need to be targeted to specific end users’ preferences and needs. PMID:26466099

  9. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection, and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection, and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  10. An NLP Framework for Non-Topical Text Analysis in Urdu--A Resource Poor Language

    ERIC Educational Resources Information Center

    Mukund, Smruthi

    2012-01-01

    Language plays a very important role in understanding the culture and mindset of people. Given the abundance of electronic multilingual data, it is interesting to see what insight can be gained by automatic analysis of text. This in turn calls for text analysis which is focused on non-topical information such as emotions being expressed that is in…

  12. Sparsity inspired automatic target recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.; Nasrabadi, Nasser M.; Chellappa, Rama

    2010-04-01

    In this paper, we develop a framework for using only the needed data for automatic target recognition (ATR) algorithms using the recently developed theory of sparse representations and compressive sensing (CS). We show how sparsity can be helpful for efficient utilization of data, with the possibility of developing real-time, robust target classification. We verify the efficacy of the proposed algorithm in terms of the recognition rate on the well known Comanche forward-looking infrared (FLIR) data set consisting of ten different military targets at different orientations.
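
    In the spirit of the sparse-representation classifiers the paper builds on, the sketch below (Python/scikit-learn, with synthetic Gaussian data in place of FLIR features) codes a test sample as a sparse combination of training samples via orthogonal matching pursuit and assigns the class whose atoms reconstruct it best. This illustrates the general technique, not the authors' exact algorithm:

        # Sketch: sparse-representation classification with orthogonal
        # matching pursuit. Synthetic Gaussian data stand in for FLIR features.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(2)
        class0 = rng.normal(0.0, 1.0, (20, 15))   # 15 training samples per class,
        class1 = rng.normal(3.0, 1.0, (20, 15))   # stored as 20-dim columns
        D = np.hstack([class0, class1])           # dictionary of training samples
        y = class1[:, 0] + 0.1 * rng.normal(size=20)   # noisy test sample

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
        x = omp.fit(D, y).coef_

        # Assign the class whose atoms reconstruct the sample best.
        residuals = [np.linalg.norm(y - D[:, :15] @ x[:15]),
                     np.linalg.norm(y - D[:, 15:] @ x[15:])]
        print("predicted class:", int(np.argmin(residuals)))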

  13. Commutated automatic gain control system

    NASA Technical Reports Server (NTRS)

    Yost, S. R.

    1982-01-01

    A commutated automatic gain control (AGC) system was designed and built for a prototype Loran C receiver. The receiver uses a microcomputer to control a memory-aided phase-locked loop (MAPLL). The microcomputer also controls the input/output, latitude/longitude conversion, and the recently added AGC system. The circuit designed for the AGC is described, and bench and flight test results are presented. The AGC circuit samples starting at a point 40 microseconds after a zero crossing determined by the software lock pulse, which is ultimately generated by a 30-microsecond delay-and-add network in the receiver front-end envelope detector.

  14. Automatic assembly of space stations

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1985-01-01

    A problem in the automatic assembly of space stations is the determination of guidance laws for the terminal rendezvous and docking of two structural components or modules. The problem involves the feedback control of both the relative attitude and the translational motion of the modules. A suitable mathematical model based on rigid body dynamics was used. The basic requirements, physical constraints, and difficulties associated with the control problem are discussed. An approach which bypasses some of the difficulties is proposed. A nonlinear guidance law satisfying the basic requirements is derived. The implementation requirements are discussed. The performance of the resulting feedback control system with rigid and flexible structural components is studied by computer simulation.

  15. Intermediate leak protection/automatic shutdown for B and W helical coil steam generator

    SciTech Connect

    Not Available

    1981-01-01

    The report summarizes a follow-on study to the multi-tiered Intermediate Leak/Automatic Shutdown System report. It makes the automatic shutdown system specific to the Babcock and Wilcox (B and W) helical coil steam generator and to the Large Development LMFBR Plant. Threshold leak criteria specific to this steam generator design are developed, and performance predictions are presented for a multi-tier intermediate leak, automatic shutdown system applied to this unit. Preliminary performance predictions for application to the helical coil steam generator were given in the referenced report; for the most part, these predictions have been confirmed. The importance of including a cover gas hydrogen meter in this unit is demonstrated by calculation of a response time one-fifth that of an in-sodium meter at hot standby and refueling conditions.

  16. Spam Filtering without Text Analysis

    NASA Astrophysics Data System (ADS)

    Belabbes, Sihem; Richard, Gilles

    Our paper introduces a new way to filter spam, using Kolmogorov complexity theory as background and a Support Vector Machine as the learning component. Our idea is to skip the classical text analysis used by standard filtering techniques and to focus on measuring the informative content of a message to classify it as spam or legitimate. Exploiting the fact that we can estimate a message's information content through compression techniques, we represent an e-mail as a multi-dimensional real vector and train a Support Vector Machine to get a classifier achieving accuracy rates in the range of 90%-97%, bringing our combined technique to the top of current spam filtering technologies.
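
    A minimal sketch of the compression-based idea (Python; the corpora, messages, and the three features are invented, and zlib stands in for any practical compressor): a message is described by how well it compresses on its own and by the extra compressed bytes it adds to known spam and known ham, and an SVM is trained on these vectors:

        # Sketch: compression-based features instead of text analysis. A
        # message is described by its own compressibility and by the extra
        # compressed bytes it adds to known spam / known ham text.
        import zlib
        from sklearn.svm import SVC

        def csize(data: bytes) -> int:
            return len(zlib.compress(data))

        spam_corpus = b"win cash now free offer click here limited deal " * 20
        ham_corpus = b"meeting agenda attached please review the draft " * 20

        def features(msg: str):
            m = msg.encode()
            return [csize(m) / max(len(m), 1),                    # own compressibility
                    csize(spam_corpus + m) - csize(spam_corpus),  # cost given spam
                    csize(ham_corpus + m) - csize(ham_corpus)]    # cost given ham

        train = ["free cash offer click now", "win a free deal here",
                 "please review the attached agenda", "draft meeting notes attached"]
        labels = [1, 1, 0, 0]   # 1 = spam, 0 = legitimate

        clf = SVC(kernel="linear").fit([features(t) for t in train], labels)
        print(clf.predict([features("limited free offer, click!")]))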

  17. Text Mining for Protein Docking

    PubMed Central

    Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.

    2015-01-01

    The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, which can still be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small-ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~25% of the complexes in the dataset. The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466

  18. Populating the Semantic Web by Macro-reading Internet Text

    NASA Astrophysics Data System (ADS)

    Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard

    A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.

  19. Temporal reasoning over clinical text: the state of the art

    PubMed Central

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2013-01-01

    Objectives To provide an overview of the problem of temporal reasoning over clinical text and to summarize the state of the art in clinical natural language processing for this task. Target audience This overview targets medical informatics researchers who are unfamiliar with the problems and applications of temporal reasoning over clinical text. Scope We review the major applications of text-based temporal reasoning, describe the challenges for software systems handling temporal information in clinical text, and give an overview of the state of the art. Finally, we present some perspectives on future research directions that emerged during the recent community-wide challenge on text-based temporal reasoning in the clinical domain. PMID:23676245

  20. An automatic assembly planning system

    NASA Astrophysics Data System (ADS)

    Huang, Y. F.; Lee, C. S. G.

    An automatic assembly planning system which takes the CAD description of a product as input and automatically generates an assembly plan subject to the resource constraint of a given assembly cell is presented. The system improves the flexibility and productivity of flexible manufacturing systems and is composed of five modules: world database, simulated world model, knowledge acquisition mechanism, planning knowledge base, and assembly planner. The acquired knowledge forms the planning knowledge base. The simulated world model keeps track of the current state of the assembly world. In the initial state, all the components are separated, while in the final state, all the components are assembled. The assembly planner is made up of a set of production rules which models the effects of real assembly tasks. By repeatedly applying these production rules to the simulated world state, the planner transforms the initial state into the final state. The set of rules applied during this transformation process forms the assembly plan to actually assemble the product in the given assembly cell. Examples are given to illustrate the concepts in these five modules.

  1. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film, has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but were greatly reduced by using preprocessing techniques.

  2. Automatic transmission system for vehicles

    SciTech Connect

    Kurihara, K.; Arai, K.

    1987-09-15

    An automatic transmission system for vehicles powered by an internal combustion engine is described which consists of: a gear-change mechanism connected to an internal combustion engine and adapted to operate in response to an electric signal; means for outputting at least one condition signal indicative of an operating condition of the internal combustion engine at each instant; a first detection means for detecting a gear position of the gear-change mechanism at each instant; a selector having at least a drive position for allowing automatic gear-change operation over a plurality of gear positions of the gear-change mechanism and a gear-holding position provided adjacent to the drive position; an output means responsive to the operation of an operation lever of the selector for outputting a command signal indicating a position at which the operation lever for selecting a desired operation mode is set; a storage means for storing a plurality of sets of map data corresponding to a plurality of gear-change maps; a second detection means responsive to an output from the first detection means; a map selecting means responsive to the command signal and an output from the second detection means for selecting a predetermined set of map data corresponding to the command signal; and a control means responsive to the condition signal for controlling the operation of the gear-change mechanism in accordance with the set of map data selected by the map selecting means.

  3. Automatic visible watermarking of images

    NASA Astrophysics Data System (ADS)

    Rao, A. Ravishankar; Braudaway, Gordon W.; Mintzer, Frederick C.

    1998-04-01

    Visible image watermarking has become an important and widely used technique to identify ownership and protect copyrights to images. A visible image watermark immediately identifies the owner of an image, and if properly constructed, can deter subsequent unscrupulous use of the image. The insertion of a visible watermark should satisfy two conflicting conditions: the intensity of the watermark should be strong enough to be perceptible, yet it should be light enough to be unobtrusive and not mar the beauty of the original image. Typically such an adjustment is made manually, and human intervention is required to set the intensity of the watermark at the right level. This is fine for a few images, but is unsuitable for a large collection of images. Thus, it is desirable to have a technique to automatically adjust the intensity of the watermark based on some underlying property of each image. This will allow a large number of images to be automatically watermarked, thus increasing the throughput of the watermarking stage. In this paper we show that the measurement of image texture can be successfully used to automate the adjustment of watermark intensity. A linear regression model is used to predict subjective assessments of correct watermark intensity based on image texture measurements.
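
    A hedged sketch of the regression step described above: fit a linear model from texture measurements to subjectively rated intensities, then predict intensities for unseen images. The feature extraction itself is assumed to happen elsewhere, and the array shapes below are illustrative:

    ```python
    import numpy as np

    def fit_intensity_model(texture_features, rated_intensities):
        """texture_features: (n_images, n_features); rated_intensities: (n_images,).
        Returns least-squares coefficients (feature weights plus intercept)."""
        X = np.column_stack([texture_features, np.ones(len(texture_features))])
        coef, *_ = np.linalg.lstsq(X, rated_intensities, rcond=None)
        return coef

    def predict_intensity(coef, texture_features):
        X = np.column_stack([texture_features, np.ones(len(texture_features))])
        return X @ coef  # predicted watermark intensity per image
    ```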

  4. Automatic temperature controlled retinal photocoagulation.

    PubMed

    Schlott, Kerstin; Koinzer, Stefan; Ptaszynski, Lars; Bever, Marco; Baade, Alex; Roider, Johann; Birngruber, Reginald; Brinkmann, Ralf

    2012-06-01

    Laser coagulation is a treatment method for many retinal diseases. Due to variations in fundus pigmentation and light scattering inside the eye globe, different lesion strengths are often achieved. The aim of this work is to realize an automatic feedback algorithm to generate desired lesion strengths by controlling the retinal temperature increase with the irradiation time. Optoacoustics afford non-invasive retinal temperature monitoring during laser treatment. A 75 ns/523 nm Q-switched Nd:YLF laser was used to excite the temperature-dependent pressure amplitudes, which were detected at the cornea by an ultrasonic transducer embedded in a contact lens. A 532 nm continuous wave Nd:YAG laser served for photocoagulation. The ED50 temperatures, for which the probability of ophthalmoscopically visible lesions after one hour in vivo in rabbits was 50%, varied from 63 °C for 20 ms to 49 °C for 400 ms. Arrhenius parameters were extracted as ΔE = 273 J mol(-1) and A = 3 x 10(44) s(-1). Control algorithms for mild and strong lesions were developed, which led to average lesion diameters of 162 ± 34 µm and 189 ± 34 µm, respectively. It could be demonstrated that the sizes of the automatically controlled lesions were widely independent of the treatment laser power and the retinal pigmentation. PMID:22734753
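
    The reported ΔE and A are, by convention, the parameters of the Arrhenius damage integral; the abstract does not restate the model, but its standard form is:

    ```latex
    % Arrhenius damage integral commonly used in photocoagulation dosimetry:
    % T(\tau) is the retinal temperature course, R the universal gas constant,
    % and a threshold lesion is conventionally associated with \Omega \ge 1.
    \Omega(t) = A \int_0^{t} \exp\!\left(-\frac{\Delta E}{R\,T(\tau)}\right) \mathrm{d}\tau
    ```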

  5. Unsupervised Mining of Frequent Tags for Clinical Eligibility Text Indexing

    PubMed Central

    Miotto, Riccardo; Weng, Chunhua

    2013-01-01

    Clinical text, such as clinical trial eligibility criteria, is largely underused in state-of-the-art medical search engines due to difficulties of accurate parsing. This paper proposes a novel methodology to derive a semantic index for clinical eligibility documents based on a controlled vocabulary of frequent tags, which are automatically mined from the text. We applied this method to eligibility criteria on ClinicalTrials.gov and report that frequent tags (1) define an effective and efficient index of clinical trials and (2) are unlikely to grow radically when the repository increases. We proposed to apply the semantic index to filter clinical trial search results and we concluded that frequent tags reduce the result space more efficiently than an uncontrolled set of UMLS concepts. Overall, unsupervised mining of frequent tags from clinical text leads to an effective semantic index for the clinical eligibility documents and promotes their computational reuse. PMID:24036004
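
    As a much-simplified sketch of the mining step (the paper's pipeline uses real NLP rather than whitespace tokenization), building a frequent-tag vocabulary and indexing a document against it could look like:

    ```python
    from collections import Counter

    def frequent_tags(documents, min_count=10):
        """Count candidate tags across eligibility documents and keep those
        above a frequency threshold as the controlled vocabulary."""
        counts = Counter(tok for doc in documents for tok in doc.lower().split())
        return {tag for tag, n in counts.items() if n >= min_count}

    def index_document(doc, vocabulary):
        """Semantic index of one document: the frequent tags it contains."""
        return sorted(vocabulary.intersection(doc.lower().split()))
    ```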

  6. Hierarchical Concept Indexing of Full-Text Documents in the Unified Medical Language System Information Sources Map.

    ERIC Educational Resources Information Center

    Wright, Lawrence W.; Nardini, Holly K. Grossetta; Aronson, Alan R.; Rindflesch, Thomas C.

    1999-01-01

    Describes methods for applying natural-language processing for automatic concept-based indexing of full text and methods for exploiting the structure and hierarchy of full-text documents to a large collection of full-text documents drawn from the Health Services/Technology Assessment Text database at the National Library of Medicine. Examines how…

  8. Automatic Planning Research Applied To Orbital Construction

    NASA Astrophysics Data System (ADS)

    Park, William T.

    1987-02-01

    Artificial intelligence research on automatic planning could result in a new class of management aids to reduce the cost of constructing the Space Station, and would have economically important spinoffs to terrestrial industry as well. Automatic planning programs could be used to plan and schedule launch activities, material deliveries to orbit, construction procedures, and the use of machinery and tools. Numerous automatic planning programs have been written since the 1950s. We describe PARPLAN, a recently developed experimental automatic planning program written in the AI language Prolog, which can generate plans with parallel activities.

  9. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  10. Video summarization based tele-endoscopy: a service to efficiently manage visual data generated during wireless capsule endoscopy procedure.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use. More importantly, WCE combined with mobile computing ensures rapid transmission of diagnostic data to hospitals and enables off-site senior gastroenterologists to offer timely decision making support. However, during this WCE process, video data are produced in huge amounts, but only a limited amount of data is actually useful for diagnosis. The sharing and analysis of this video data becomes a challenging task due to constraints such as limited memory, energy, and communication capability. In order to facilitate efficient WCE data collection and browsing tasks, we present a video summarization-based tele-endoscopy service that estimates the semantically relevant video frames from the perspective of gastroenterologists. For this purpose, image moments, curvature, and multi-scale contrast are computed and are fused to obtain the saliency map of each frame. This saliency map is used to select keyframes. The proposed tele-endoscopy service selects keyframes based on their relevance to the disease diagnosis. This ensures the sending of diagnostically relevant frames to the gastroenterologist instead of sending all the data, thus saving transmission costs and bandwidth. The proposed framework also saves storage costs as well as the precious time of doctors in browsing patient's information. The qualitative and quantitative results are encouraging and show that the proposed service provides video keyframes to the gastroenterologists without discarding important information. PMID:25037715
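
    A minimal sketch of the fusion-and-selection step, assuming the per-cue saliency maps (image moments, curvature, multi-scale contrast) are already computed and normalized to [0, 1]; the fusion rule and threshold here are illustrative stand-ins:

    ```python
    import numpy as np

    def select_keyframes(cue_maps, threshold=0.6):
        """cue_maps: list of (n_frames, h, w) arrays, one per saliency cue.
        Frames whose mean fused saliency exceeds the threshold are keyframes."""
        fused = np.mean(np.stack(cue_maps), axis=0)       # simple average fusion
        scores = fused.reshape(fused.shape[0], -1).mean(axis=1)
        return np.nonzero(scores >= threshold)[0]         # keyframe indices
    ```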

  11. Neutron and X-Ray Effects on Small Intestine Summarized by Using a Mathematical Model or Paradigm

    NASA Astrophysics Data System (ADS)

    Carr, K. E.; McCullough, J. S.; Nunn, S.; Hume, S. P.; Nelson, A. C.

    1991-03-01

    The responses of intestinal tissues to ionizing radiation can be described by comparing irradiated cell populations qualitatively or quantitatively with corresponding controls. This paper describes quantitative data obtained from resin-embedded sections of neutron-irradiated mouse small intestine at different times after treatment. Information is collected by counting cells or structures present per complete circumference. The data are assessed by using standard statistical tests, which show that early mitotic arrest precedes changes in goblet, absorptive, endocrine and stromal cells and a decrease in crypt numbers. The data can also produce ratios of irradiated: control figures for cells or structural elements. These ratios, along with tissue area measurements, can be used to summarize the structural damage as a composite graph and table, including a total figure, known as the Morphological Index. This is used to quantify the temporal response of the wall as a whole and to compare the effects of different qualities of radiation, here X-ray and cyclotron-produced neutron radiations. It is possible that such analysis can be used predictively along with other reference data to identify the treatment, dose and time required to produce observed tissue damage.

  12. Charting patients' course: a comparison of statistics used to summarize patient course in longitudinal and repeated measures studies.

    PubMed

    Arndt, S; Turvey, C; Coryell, W H; Dawson, J D; Leon, A C; Akiskal, H S

    2000-01-01

    Investigators conducting longitudinal studies of psychiatric illnesses often analyze data based on psychiatric symptom scales that were administered at multiple time points. This study examines the statistical properties of seven indices that summarize patient long-term course. These indices can be used to compare differences between two or more groups or to test for changes in symptoms over time. They may also be treated as outcome measures and correlated with other clinical variables. The performance of each of the seven indices was assessed using data from two large ongoing studies of psychiatric patients: a longitudinal study of affective disorders and a longitudinal study of first-episode psychosis. These two datasets were subjected to bootstrapping techniques in order to calculate both type I error rates and statistical power for each summary statistic. Of the seven indices, Kendall's tau performed the best as a measure of patients' symptom course. Kendall's tau appears to offer more statistical power to detect change in course, yet its average type I error rate was comparable to the other indices. PMID:10758251
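
    For reference, Kendall's tau as a summary of course reduces to correlating a patient's repeated symptom scores against time; with SciPy (the scores below are invented for illustration):

    ```python
    from scipy.stats import kendalltau

    weeks  = [0, 4, 8, 12, 16, 20]
    scores = [32, 30, 27, 24, 25, 21]   # hypothetical symptom-scale values
    tau, p_value = kendalltau(weeks, scores)
    print(f"tau = {tau:.2f}, p = {p_value:.3f}")  # negative tau: improving course
    ```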

  13. Summarized Costs, Placement Of Quality Stars, And Other Online Displays Can Help Consumers Select High-Value Health Plans.

    PubMed

    Greene, Jessica; Hibbard, Judith H; Sacks, Rebecca M

    2016-04-01

    Starting in 2017, all state and federal health insurance exchanges will present quality data on health plans in addition to cost information. We analyzed variations in the current design of information on state exchanges to identify presentation approaches that encourage consumers to take quality as well as cost into account when selecting a health plan. Using an online sample of 1,025 adults, we randomly assigned participants to view the same comparative information on health plans, displayed in different ways. We found that consumers were much more likely to select a high-value plan when cost information was summarized instead of detailed, when quality stars were displayed adjacent to cost information, when consumers understood that quality stars signified the quality of medical care, and when high-value plans were highlighted with a check mark or blue ribbon. These approaches, which were equally effective for participants with higher and lower numeracy, can inform the development of future displays of plan information in the exchanges. PMID:27044968

  14. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the NASA Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog is a trademark of Borland International. IBM PC and PC DOS are registered trademarks of International Business Machines Corporation.
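
    A toy sketch of ANPS-style automatic programming, with a made-up specification format and macro text (the real macro library and GPSS/PC output are far richer): each activity in the specification is expanded through a code template and the results are concatenated into the target program.

    ```python
    # Hypothetical one-line GPSS-flavored macro; the real ANPS macros are
    # selected from a library and parameterized from the specification file.
    ACTIVITY_MACRO = "{name} ADVANCE {duration}  ; countdown activity {name}\n"

    def generate_simulation(spec):
        """spec: list of dicts such as {'name': 'FUEL_LOAD', 'duration': 45}."""
        return "".join(ACTIVITY_MACRO.format(**activity) for activity in spec)

    print(generate_simulation([{"name": "FUEL_LOAD", "duration": 45},
                               {"name": "GUIDANCE_CHECK", "duration": 10}]))
    ```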

  15. MedSynDiKATe--design considerations for an ontology-based medical text understanding system.

    PubMed Central

    Hahn, U.; Romacker, M.; Schulz, S.

    2000-01-01

    MedSynDiKATe is a natural language processor for automatically acquiring knowledge from medical finding reports. The content of these documents is transferred to formal representation structures which constitute a corresponding text knowledge base. The general system architecture we present integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. The strong demands MedSynDiKATe poses to the availability of expressive knowledge sources are accounted for by two alternative approaches to (semi)automatic ontology engineering. PMID:11079899

  16. Keyword Extraction from Arabic Legal Texts

    ERIC Educational Resources Information Center

    Rammal, Mahmoud; Bahsoun, Zeinab; Al Achkar Jabbour, Mona

    2015-01-01

    Purpose: The purpose of this paper is to apply local grammar (LG) to develop an indexing system which automatically extracts keywords from titles of Lebanese official journals. Design/methodology/approach: To build LG for our system, the first word that plays the determinant role in understanding the meaning of a title is analyzed and grouped as…

  18. Automatic home medical product recommendation.

    PubMed

    Luo, Gang; Thomas, Selena B; Tang, Chunqiang

    2012-04-01

    Web-based personal health records (PHRs) are being widely deployed. To improve PHR's capability and usability, we proposed the concept of intelligent PHR (iPHR). In this paper, we use automatic home medical product recommendation as a concrete application to demonstrate the benefits of introducing intelligence into PHRs. In this new application domain, we develop several techniques to address the emerging challenges. Our approach uses treatment knowledge and nursing knowledge, and extends the language modeling method to (1) construct a topic-selection input interface for recommending home medical products, (2) produce a global ranking of Web pages retrieved by multiple queries, and (3) provide diverse search results. We demonstrate the effectiveness of our techniques using USMLE medical exam cases. PMID:20703712

  19. Automatic Nanodesign Using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many problems associated with the development of nanotechnology require custom designed molecules. We use genetic graph software, a new development, to automatically evolve molecules of interest when only the requirements are known. Genetic graph software designs molecules, and potentially nanoelectronic circuits, given a fitness function that determines which of two molecules is better. A set of molecules, the first generation, is generated at random and then tested with the fitness function. Subsequent generations are created by randomly choosing two parent molecules with a bias towards high-scoring molecules, tearing each molecule in two at random, and mating parts from the mother and father to create two children. This procedure is repeated until a satisfactory molecule is found. An atom pair similarity test is currently used as the fitness function to evolve molecules similar to existing pharmaceuticals.
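
    The loop described is a textbook genetic algorithm. A generic sketch, abstracting molecules as token sequences of length at least two and leaving the fitness function to the caller (the genetic graph software itself operates on molecular graphs, not strings):

    ```python
    import random

    def evolve(population, fitness, generations=100):
        """population: list of sequences (each of length >= 2); fitness: callable
        scoring a sequence. Returns the best individual found."""
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:max(2, len(ranked) // 2)]    # bias to high scorers
            children = []
            while len(children) < len(population):
                mother, father = random.sample(parents, 2)
                cut = random.randint(1, min(len(mother), len(father)) - 1)
                children.append(mother[:cut] + father[cut:])  # "mate" the halves
                children.append(father[:cut] + mother[cut:])
            population = children[:len(population)]
        return max(population, key=fitness)
    ```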

  20. Automatic Mechetronic Wheel Light Device

    DOEpatents

    Khan, Mohammed John Fitzgerald

    2004-09-14

    A wheel lighting device for illuminating a wheel of a vehicle to increase safety and enhance aesthetics. The device produces the appearance of a "ring of light" on a vehicle's wheels as the vehicle moves. The "ring of light" can automatically change in color and/or brightness according to a vehicle's speed, acceleration, jerk, selection of transmission gears, and/or engine speed. The device provides auxiliary indicator lights by producing light in conjunction with a vehicle's turn signals, hazard lights, alarm systems, etc. The device comprises a combination of mechanical and electronic components and can be placed on the outer or inner surface of a wheel or made integral to a wheel or wheel cover. The device can be configured for all vehicle types, and is electrically powered by a vehicle's electrical system and/or battery.

  1. Automatic thermal switch. [spacecraft applications

    NASA Technical Reports Server (NTRS)

    Cunningham, J. W.; Wing, L. D. (Inventor)

    1983-01-01

    An automatic thermal switch to control heat flow includes two thermally conductive plates and a thermally conductive switch saddle pivotally mounted to the first plate. A flexible heat carrier is connected between the switch saddle and the second plate. A phase-change power unit, including a piston coupled to the switch saddle, is in thermal contact with the first thermally conductive plate. A biasing element biases the switch saddle in a predetermined position with respect to the first plate. When the phase-change power unit is actuated by an increase in heat transmitted through the first plate, the piston extends and causes the switch saddle to pivot, thereby varying the thermal conduction between the two plates through the switch saddle and flexible heat carrier. The biasing element, switch saddle, and piston can be arranged to provide either a normally closed or normally open thermally conductive path between the two plates.

  2. Automatic interpretation of oblique ionograms

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Francis, Matthew; Settimi, Alessandro; Cesaroni, Claudio

    2015-03-01

    We present an algorithm for the identification of trace characteristics of oblique ionograms, allowing determination of the Maximum Usable Frequency (MUF) for communication between the transmitter and receiver. The algorithm automatically detects and rejects poor-quality ionograms. We performed an exploratory test of the algorithm using data from a campaign of oblique soundings between Rome, Italy (41.90 N, 12.48 E) and Chania, Greece (35.51 N, 24.01 E), and also between Kalkarindji, Australia (17.43 S, 130.81 E) and Culgoora, Australia (30.30 S, 149.55 E). The success of these tests demonstrates the applicability of the method to ionograms recorded by different ionosondes under various helio- and geophysical conditions.

  3. Automatic Sequencing for Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  4. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
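
    A sketch of how the paired readings could yield a calibration matrix by least squares; the array shapes and the single-matrix linear model are illustrative assumptions, not the patented procedure:

    ```python
    import numpy as np

    def calibration_matrix(test_readings, reference_loads):
        """test_readings: (n_loads, n_channels) raw outputs of the test balance;
        reference_loads: (n_loads, n_axes) loads reported by the trusted
        reference balance under the same applied loads."""
        C, *_ = np.linalg.lstsq(test_readings, reference_loads, rcond=None)
        return C  # corrected_load = test_reading @ C
    ```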

  5. Computerized automatic tip scanning operation

    SciTech Connect

    Nishikawa, K.; Fukushima, T.; Nakai, H.; Yanagisawa, A.

    1984-02-01

    In BWR nuclear power stations the Traversing Incore Probe (TIP) system is one of the most important components in reactor monitoring and control. In previous TIP systems, however, operators have suffered from the complexity of operation and long operation time required. The system presented in this paper realizes the automatic operation of the TIP system by monitoring and driving it with a process computer. This system significantly reduces the burden on customer operators and improves plant efficiency by simplifying the operating procedure, augmenting the accuracy of the measured data, and shortening operating time. The process computer is one of the PODIA (Plant Operation by Displayed Information Automation) systems. This computer transfers control signals to the TIP control panel, which in turn drives equipment by microprocessor control. The process computer contains such components as the CRT/KB unit, the printer plotter, the hard copier, and the message typers required for efficient man-machine communications. Its operation and interface properties are described.

  6. Automatic electronic fish tracking system

    NASA Technical Reports Server (NTRS)

    Osborne, P. W.; Hoffman, E.; Merriner, J. V.; Richards, C. E.; Lovelady, R. W.

    1976-01-01

    A newly developed electronic fish tracking system to automatically monitor the movements and migratory habits of fish is reported. The system is aimed particularly at studies of the effects on fish life of industrial facilities that use rivers or lakes to dump their effluents. The location of a fish is acquired by means of acoustic links from the fish to underwater Listening Stations, and by radio links which relay tracking information to a shore-based Data Base. Fish over 4 inches long may be tracked over a 5 x 5 mile area. The electronic fish tracking system provides the marine scientist with electronics which permit studies that were not practical in the past and which are cost-effective compared to manual methods.

  7. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  8. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization is considered of nested loops through the use of known program transformations in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
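
    For concreteness, the effect of strip mining plus loop interchange on a matrix product, written out by hand (the paper derives such blockings automatically through index transformations):

    ```python
    def blocked_matmul(A, X, n, B=64):
        """Naive n x n matrix product reorganized over B-sized tiles so each
        tile of A, X, and C stays resident in a level of the memory hierarchy."""
        C = [[0.0] * n for _ in range(n)]
        for ii in range(0, n, B):
            for kk in range(0, n, B):
                for jj in range(0, n, B):
                    for i in range(ii, min(ii + B, n)):
                        for k in range(kk, min(kk + B, n)):
                            a = A[i][k]
                            for j in range(jj, min(jj + B, n)):
                                C[i][j] += a * X[k][j]
        return C
    ```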

  9. Automatic transmission for a vehicle

    SciTech Connect

    Moroto, S.; Sakakibara, S.

    1986-12-09

    An automatic transmission is described for a vehicle, comprising: a coupling means having an input shaft and an output shaft; a belt type continuously-variable speed transmission system having an input pulley mounted coaxially on a first shaft, an output pulley mounted coaxially on a second shaft and a belt extending between the first and second pulleys to transfer power, each of the first and second pulleys having a fixed sheave and a movable sheave. The first shaft is disposed coaxially with and rotatably coupled with the output shaft of the coupling means, the second shaft being disposed side by side and in parallel with the first shaft; a planetary gear mechanism; a forward-reverse changeover mechanism and a low-high speed changeover mechanism.

  10. Automatic toilet seat lowering apparatus

    SciTech Connect

    Guerty, Harold G.

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  11. Automatic AVHRR image navigation software

    NASA Technical Reports Server (NTRS)

    Baldwin, Dan; Emery, William

    1992-01-01

    This is the final report describing the work done on the project entitled Automatic AVHRR Image Navigation Software, funded through NASA-Washington, award NAGW-3224, Account 153-7529. At the onset of this project, we had developed image navigation software capable of producing geo-registered images from AVHRR data. The registrations were highly accurate but required a priori knowledge of the spacecraft's axis-alignment deviations, commonly known as attitude. The three angles needed to describe the attitude are called roll, pitch, and yaw, and are the components of the deviations in the along-scan, along-track, and about-center directions. The inclusion of the attitude corrections in the navigation software results in highly accurate georegistrations; however, the computation of the angles is very tedious and involves human interpretation for several steps. The technique also requires easily identifiable ground features, which may not be available due to cloud cover or for ocean data. The current project was motivated by the need for a navigation system which was automatic and did not require human intervention or ground control points. The first step in creating such a system must be the ability to parameterize the spacecraft's attitude. The immediate goal of this project was to study the attitude fluctuations and determine if they displayed any systematic behavior which could be modeled or parameterized. We chose a period in 1991-1992 to study the attitude of the NOAA 11 spacecraft using data from the Tiros receiving station at the Colorado Center for Astrodynamic Research (CCAR) at the University of Colorado.

  12. Aided versus automatic target recognition

    NASA Astrophysics Data System (ADS)

    O'Hair, Mark A.; Purvis, Bradley D.; Brown, Jeff

    1997-06-01

    Automatic target recognition (ATR) algorithms have offered the promise of recognizing items of military importance over the past 20 years. It is the experience of the authors that greater ATR success would be possible if the ATR were used to 'aid' the human operator instead of automatically 'directing' the operator. ATRs have failed not due to their probability of detection versus false alarm rate, but due to neglect of the human component. ATRs are designed to improve overall throughput by relieving the human operator of the need to perform repetitive tasks like scanning vast quantities of imagery for possible targets. ATRs are typically inserted prior to the operator and provide cues, which are then accepted or rejected. From our experience at three field exercises and a current operational deployment to the Bosnian theater, this is not the best way to get total system performance. The human operator makes decisions based on learning, history of past events, and surrounding contextual information. Losing these factors by providing imagery laden with symbolic cues on top of the original imagery actually increases the workload of the operator. This paper covers the lessons learned from the field demonstrations and the operational deployment. The reconnaissance and intelligence community's primary use of an ATR should be to establish prioritized cues of potential targets for an operator to 'pull' from and to be able to 'send' targets identified by the operator for a 'second opinion.' The Army and Air Force are modifying their exploitation workstations over the next 18 months to use ATRs which operate in this fashion. This will be the future architecture into which ATRs for the reconnaissance and intelligence community should integrate.

  13. Mobile text messaging for health: a systematic review of reviews.

    PubMed

    Hall, Amanda K; Cole-Lewis, Heather; Bernhardt, Jay M

    2015-03-18

    The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. PMID:25785892

  15. Summarizing results on the performance of a selective set of atmospheric plasma jets for separation of photons and reactive particles

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Jarzina, Fabian; Lackmann, Jan-Wilm; Golda, Judith; Layes, Vincent; Schulz-von der Gathen, Volker; Bandow, Julia Elisabeth; Benedikt, Jan

    2015-11-01

    A microscale atmospheric-pressure plasma jet is a remote plasma jet, where plasma-generated reactive particles and photons are involved in substrate treatment. Here, we summarize our efforts to develop and characterize a particle- or photon-selective set of otherwise identical jets. In that way, the reactive species or photons can be used separately or in combination to study their isolated or combined effects and to test whether the effects are additive or synergistic. The final version of the set of three jets (particle jet, photon jet, and combined jet) is introduced. This final set realizes the highest reproducibility of the photon and particle fluxes, avoids turbulent gas flow, and the fluxes of the selected plasma-emitted components are almost identical in the case of all jets, while the other component is effectively blocked, which was verified by optical emission spectroscopy and mass spectrometry. Schlieren imaging and a fluid dynamics simulation show the stability of the gas flow. The performance of these selective jets is demonstrated with the example of the treatment of E. coli bacteria with the different components emitted by a He-only, a He/N2, and a He/O2 plasma. Additionally, measurements of the vacuum UV photon spectra down to a wavelength of 50 nm can be made with the photon jet, and the relative comparison of spectral intensities among different gas mixtures is reported here. The results show that the vacuum UV photons can lead to the inactivation of the E. coli bacteria.

  16. A unified framework for multioriented text detection and recognition.

    PubMed

    Yao, Cong; Bai, Xiang; Liu, Wenyu

    2014-11-01

    High level semantics embodied in scene texts are both rich and clear and thus can serve as important cues for a wide range of vision applications, for instance, image understanding, image indexing, video search, geolocation, and automatic navigation. In this paper, we present a unified framework for text detection and recognition in natural images. The contributions of this paper are threefold: 1) text detection and recognition are accomplished concurrently using exactly the same features and classification scheme; 2) in contrast to methods in the literature, which mainly focus on horizontal or near-horizontal texts, the proposed system is capable of localizing and reading texts of varying orientations; and 3) a new dictionary search method is proposed, to correct the recognition errors usually caused by confusions among similar yet different characters. As an additional contribution, a novel image database with texts of different scales, colors, fonts, and orientations in diverse real-world scenarios, is generated and released. Extensive experiments on standard benchmarks as well as the proposed database demonstrate that the proposed system achieves highly competitive performance, especially on multioriented texts. PMID:25203989
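
    The paper's dictionary search method is its own contribution; purely to illustrate the idea of correcting confusions among similar characters, a stand-in using difflib might be:

    ```python
    from difflib import get_close_matches

    def correct(word, dictionary):
        """Replace a recognized word by its closest dictionary entry, if any."""
        match = get_close_matches(word.lower(), dictionary, n=1, cutoff=0.6)
        return match[0] if match else word

    print(correct("restaurnnt", ["restaurant", "pharmacy", "hotel"]))  # restaurant
    ```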

  17. Automatic Grading of Spreadsheet and Database Skills

    ERIC Educational Resources Information Center

    Kovacic, Zlatko J.; Green, John Steven

    2012-01-01

    Growing enrollment in distance education has increased student-to-lecturer ratios and, therefore, increased the workload of the lecturer. This growing enrollment has resulted in mounting efforts to develop automatic grading systems in an effort to reduce this workload. While research in the design and development of automatic grading systems has a

  18. Automatic Contour Tracking in Ultrasound Images

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a new automatic contour tracking system, EdgeTrak, for the ultrasound image sequences of human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In…

  19. An Experiment in Automatic Hierarchical Document Classification.

    ERIC Educational Resources Information Center

    Garland, Kathleen

    1983-01-01

    Describes method of automatic document classification in which documents classed as QA by Library of Congress classification system were clustered at six thresholds by keyword using single link technique. Automatically generated clusters were compared to Library of Congress subclasses, and partial classified hierarchy was formed. Twelve references…

  20. ANNUAL REPORT-AUTOMATIC INDEXING AND ABSTRACTING.

    ERIC Educational Resources Information Center

    Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.

    THE INVESTIGATION IS CONCERNED WITH THE DEVELOPMENT OF AUTOMATIC INDEXING, ABSTRACTING, AND EXTRACTING SYSTEMS. BASIC INVESTIGATIONS IN ENGLISH MORPHOLOGY, PHONETICS, AND SYNTAX ARE PURSUED AS NECESSARY MEANS TO THIS END. IN THE FIRST SECTION THE THEORY AND DESIGN OF THE "SENTENCE DICTIONARY" EXPERIMENT IN AUTOMATIC EXTRACTION IS OUTLINED. SOME OF…

  1. Annual Report: Automatic Informative Abstracting and Extracting.

    ERIC Educational Resources Information Center

    Earl, L. L.; And Others

    The development of automatic indexing, abstracting, and extracting systems is investigated. Part I describes the development of tools for making syntactic and semantic distinctions of potential use in automatic indexing and extracting. One of these tools is a program for syntactic analysis (i.e., parsing) of English, the other is a dictionary of…

  2. 32 CFR 2001.30 - Automatic declassification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Automatic declassification. 2001.30 Section 2001... Declassification § 2001.30 Automatic declassification. (a) General. All departments and agencies that have original... originating agency as described in § 2001.34. (g) Unscheduled records. Classified information in records...

  3. 32 CFR 2001.30 - Automatic declassification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Automatic declassification. 2001.30 Section 2001... Declassification § 2001.30 Automatic declassification. (a) General. All departments and agencies that have original... originating agency as described in § 2001.34. (g) Unscheduled records. Classified information in records...

  4. Automaticity Training for Dyslexics: An Experimental Study.

    ERIC Educational Resources Information Center

    Holt-Ochsner, Liana K.; Manis, Franklin R.

    1992-01-01

    This study used computer word games to train 35 dyslexic readers (mean age 13 years) in automaticity (speed and accuracy) of word recognition. After training, reaction time on the word vocalization and sentence comprehension tasks improved significantly for both trained and untrained stimuli. Results support the automaticity hypothesis. (DB)

  9. The Automaticity of Visual Statistical Learning

    ERIC Educational Resources Information Center

    Turk-Browne, Nicholas B.; Junge, Justin; Scholl, Brian J.

    2005-01-01

    The visual environment contains massive amounts of information involving the relations between objects in space and time, and recent studies of visual statistical learning (VSL) have suggested that this information can be automatically extracted by the visual system. The experiments reported in this article explore the automaticity of VSL in…

  10. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
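
    A minimal sketch of aggregating extracted affective relationships into a directed graph, with a hypothetical tuple format; the propagation algorithm and the 22 affect categories themselves are beyond this sketch:

    ```python
    from collections import defaultdict

    def affect_graph(extractions):
        """extractions: iterable of (source, target, affect, strength) tuples.
        Accumulates per-edge affect scores between entities."""
        edges = defaultdict(lambda: defaultdict(float))
        for source, target, affect, strength in extractions:
            edges[(source, target)][affect] += strength
        return edges

    g = affect_graph([("PartyA", "PartyB", "hostility", 0.8),
                      ("PartyA", "PartyB", "distrust", 0.5)])
    print(dict(g[("PartyA", "PartyB")]))
    ```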

  11. Semi Automatic Ontology Instantiation in the domain of Risk Management

    NASA Astrophysics Data System (ADS)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text in the domain of Risk Management. The method comprises three steps: 1) annotation with part-of-speech tags, 2) extraction of semantic relation instances, and 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which makes it portable across the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a Generic Domain Ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  12. Automatic extraction of relationships between concepts based on ontology

    NASA Astrophysics Data System (ADS)

    Yuan, Yifan; Du, Junping; Yang, Yuehua; Zhou, Jun; He, Pengcheng; Cao, Shouxin

    This paper applies Chinese word segmentation technology to the automatic extraction and description of relationships between concepts. It takes text as its corpus, matches concept pairs by rules, and then describes the relationships between concepts using statistical methods. The paper reports an experiment based on text from the field of emergency response and optimizes part-of-speech tagging in light of the experimental results, so that the extracted relations are more meaningful for emergency response. It also analyzes the display order of inquiries and formulates response rules, making the results more meaningful. Consequently, the method turns out to be effective, and can be flexibly extended to other areas.

  13. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate. PMID:26448738
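
    An illustrative reduction of the scoring step, assuming whitespace tokenization in place of the paper's Chinese word segmentation and POS tagging; the highest-scoring terms of a course document are proposed as knowledge points:

    ```python
    import math
    from collections import Counter

    def tfidf_knowledge_points(documents, doc_index, top_k=10):
        docs = [doc.lower().split() for doc in documents]
        df = Counter(term for doc in docs for term in set(doc))  # document freq.
        tf = Counter(docs[doc_index])                            # term freq.
        n = len(docs)
        scores = {t: (c / len(docs[doc_index])) * math.log(n / df[t])
                  for t, c in tf.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]
    ```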

  15. Spaceliner Class Operability Gains Via Combined Airbreathing/ Rocket Propulsion: Summarizing an Operational Assessment of Highly Reusable Space Transports

    NASA Technical Reports Server (NTRS)

    Nix, Michael B.; Escher, William J. d.

    1999-01-01

    In discussing a new NASA initiative in advanced space transportation systems and technologies, the Director of the NASA Marshall Space Flight Center, Arthur G. Stephenson, noted that, "It would use new propulsion technology, air-breathing engine so you don't have to carry liquid oxygen, at least while you're flying through the atmosphere. We are calling it Spaceliner 100 because it would be 100 times cheaper, costing $100 a pound to orbit." While airbreathing propulsion is directly named, rocket propulsion is also implied by, "... while you are flying through the atmosphere." In-space final acceleration to orbital speed mandates rocket capabilities. Thus, in this informed view, Spaceliner 100 will be predicated on combined airbreathing/rocket propulsion, the technical subject of this paper. Interestingly, NASA's recently concluded Highly Reusable Space Transportation (HRST) study focused on the same affordability goal as that of the Spaceliner 100 initiative and reflected the decisive contribution of combined propulsion as a way of expanding operability and increasing the design robustness of future space transports, toward "aircraft like" capabilities. The HRST study built on the Access to Space Study and the Reusable Launch Vehicle (RLV) development activities to identify and characterize space transportation concepts, infrastructure and technologies that have the greatest potential for reducing delivery cost by another order of magnitude, from $1,000 to $100-$200 per pound for 20,000 lb. - 40,000 lb. payloads to low earth orbit (LEO). The HRST study investigated a number of near-term, far-term, and very far-term launch vehicle concepts including all-rocket single-stage-to-orbit (SSTO) concepts, two-stage-to-orbit (TSTO) concepts, concepts with launch assist, rocket-based combined cycle (RBCC) concepts, advanced expendable vehicles, and more far-term ground-based laser powered launchers. The HRST study consisted of preliminary concept studies, assessments and analysis tool development for advanced space transportation systems, followed by end-to-end system concept definitions and trade analyses, specific system concept definition and analysis, specific key technology and topic analysis, system, operational and economics model development, analysis, and integrated assessments. The HRST Integration Task Force (HITF) was formed to synthesize study results in several specific topic areas and support the development of conclusions from the study: Systems Concepts Definitions, Technology Assessment, Operations Assessment, and Cost Assessment. This paper summarizes the work of the Operations Assessment Team: the six approaches used, the analytical tools and methodologies developed and employed, the issues and concerns, and the results of the assessment. The approaches were deliberately varied in measures of merit and procedure to compensate for the uncertainty inherent in operations data in this early phase of concept exploration. In general, rocket based combined cycle (RBCC) concepts appear to have significantly greater potential than all-rocket concepts for reducing operations costs.

  16. Automatic Weather Station (AWS) Lidar

    NASA Technical Reports Server (NTRS)

    Rall, Jonathan A.R.; Abshire, James B.; Spinhirne, James D.; Smith, David E. (Technical Monitor)

    2000-01-01

    An autonomous, low-power atmospheric lidar instrument is being developed at NASA Goddard Space Flight Center. This compact, portable lidar will operate continuously in a temperature-controlled enclosure, charge its own batteries through a combination of a small rugged wind generator and solar panels, and transmit its data from remote locations to ground stations via satellite. A network of these instruments will be established by co-locating them at remote Automatic Weather Station (AWS) sites in Antarctica under the auspices of the National Science Foundation (NSF). The NSF Office of Polar Programs provides support to place the weather stations in remote areas of Antarctica in support of meteorological research and operations. The AWS meteorological data will directly benefit the analysis of the lidar data, while a network of ground-based atmospheric lidars will provide knowledge regarding the temporal evolution and spatial extent of Type Ia polar stratospheric clouds (PSCs). These clouds play a crucial role in the annual austral springtime destruction of stratospheric ozone over Antarctica, i.e., the ozone hole. In addition, the lidar will monitor and record the general atmospheric conditions (transmission and backscatter) of the overlying atmosphere, which will benefit the Geoscience Laser Altimeter System (GLAS). Prototype lidar instruments have been deployed to the Amundsen-Scott South Pole Station (1995-96, 2000) and to an Automated Geophysical Observatory site (AGO 1) in January 1999. We report on data acquired with these instruments, instrument performance, and anticipated performance of the AWS Lidar.
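
    For background, the two quantities the instrument records, transmission and backscatter, enter the received signal through the standard single-scattering lidar equation (textbook material, not a formula quoted from this paper):

    P(z) = P_0 \, \frac{c\tau}{2} \, A \, \eta \, \frac{\beta(z)}{z^{2}} \, T^{2}(z),
    \qquad T(z) = \exp\!\left(-\int_{0}^{z} \alpha(z') \, dz'\right)

    where P(z) is the power received from range z, P_0 the transmitted power, \tau the pulse duration, A the receiver aperture area, \eta the system efficiency, \beta the volume backscatter coefficient, and \alpha the extinction coefficient.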

  17. Ekofisk automatic GPS subsidence measurements

    SciTech Connect

    Mes, M.J.; Landau, H.; Luttenberger, C.

    1996-10-01

    A fully automatic GPS satellite-based procedure for the reliable, almost real-time measurement of subsidence of several platforms is described. Measurements are made continuously on platforms in the North Sea Ekofisk Field area. The procedure also yields subsidence rates, which are essential for confirming platform safety, planning remedial work, and verifying subsidence models. GPS measurements are more attractive than seabed pressure-gauge-based platform subsidence measurements: they are much cheaper to install and maintain, and they are not subject to gauge drift. The GPS measurements were coupled to oceanographic quantities such as the platform deck clearance, which leads to less complex offshore survey procedures. Ekofisk is an oil and gas field in the southern portion of the Norwegian North Sea. Late in 1984, it was noticed that the Ekofisk platform decks were closer to the sea surface than when the platforms were installed; subsidence was the only logical explanation. After the subsidence phenomenon was recognized, an accurate measurement method was needed to track the progression of subsidence and the associated subsidence rate. One available system that required no further development was NAVSTAR GPS; measurements started in March 1985.
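
    To make the rate measurement concrete, a subsidence rate is essentially the slope of a line fitted through a time series of platform heights. The Python sketch below fits such a line to synthetic daily GPS heights; real processing (carrier-phase solutions, reference stations, tidal and deck-clearance corrections) is far more involved.

    def linear_fit(t, h):
        """Least-squares slope and intercept of h against t."""
        n = len(t)
        mt, mh = sum(t) / n, sum(h) / n
        slope = (sum((ti - mt) * (hi - mh) for ti, hi in zip(t, h))
                 / sum((ti - mt) ** 2 for ti in t))
        return slope, mh - slope * mt

    # Synthetic monthly ellipsoidal heights (m) for one platform over a year.
    days = list(range(0, 365, 30))                      # 13 samples, roughly monthly
    heights = [50.00, 49.97, 49.93, 49.90, 49.87, 49.83, 49.80,
               49.77, 49.73, 49.70, 49.67, 49.63, 49.60]
    rate_per_day, _ = linear_fit(days, heights)
    print(f"subsidence rate: {rate_per_day * 365:.2f} m/yr")   # negative = sinking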

  18. Automatic interpretation of biological tests.

    PubMed

    Boufriche-Boufaïda, Z

    1998-03-01

    In this article, an approach to the Automatic Interpretation of Biological Tests (AIBT) is described. The developed system is much needed in Preventive Medicine Centers (PMCs). It is designed as a self-sufficient system that can easily be used by trained nurses during the routine visit. The results the system provides not only give PMC physicians a preliminary diagnosis but also allow them more time to focus on serious cases, improving the quality of the clinical visit. Moreover, because such a system is intended to be used for many years, its possibilities for future extension must be seriously considered. The methodology adopted combines the advantages of the two main approaches found in current diagnostic systems: the production-system approach and the object-oriented approach. From the production rules, the ability to capture the expert's deductive processes in domains where the causal mechanisms are often understood is retained. The object-oriented approach guides the elicitation and engineering of knowledge in such a way that abstractions, categorizations, and classifications are encouraged, while individual instances of objects of any type are recognized as separate, independent entities. PMID:9684093
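
    A minimal sketch of the hybrid style described here, with test results modeled as objects and interpretation knowledge expressed as production rules; the tests, reference ranges, and rules below are invented for illustration and are not taken from the AIBT system.

    from dataclasses import dataclass

    @dataclass
    class TestResult:
        name: str
        value: float
        low: float     # lower bound of the reference range
        high: float    # upper bound of the reference range

        @property
        def flag(self):
            if self.value < self.low:
                return "low"
            if self.value > self.high:
                return "high"
            return "normal"

    # Production rules: a condition over the object model -> a preliminary finding.
    RULES = [
        (lambda r: r["glucose"].flag == "high", "possible hyperglycemia; refer to physician"),
        (lambda r: r["hemoglobin"].flag == "low", "possible anemia; refer to physician"),
    ]

    results = {
        "glucose": TestResult("glucose", 7.8, 3.9, 6.1),          # mmol/L, illustrative
        "hemoglobin": TestResult("hemoglobin", 14.2, 12.0, 17.5), # g/dL, illustrative
    }
    findings = [msg for cond, msg in RULES if cond(results)]
    print(findings or ["all tests within reference ranges"])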

  19. Automatic image cropping for republishing

    NASA Astrophysics Data System (ADS)

    Cheatle, Phil

    2010-02-01

    Image cropping is an important aspect of creating aesthetically pleasing web pages and repurposing content for different web or printed output layouts. Cropping offers both the possibility of improving the composition of the image and the ability to change the aspect ratio of the image to suit the layout needs of different document or web page formats. This paper presents a method for aesthetically cropping images on the basis of their content. Underlying the approach is a novel segmentation-based saliency method which identifies some regions as "distractions", as an alternative to the conventional "foreground" and "background" classifications. Distractions are a particular problem in typical consumer photos found on social networking websites such as Facebook and Flickr. Automatic cropping is achieved by identifying the main subject area of the image and then using an optimization search to expand this area into an aesthetically pleasing crop. Evaluating aesthetic functions like auto-crop is difficult because there is no single correct solution. A further contribution of this paper is an automated evaluation method which goes some way towards handling the complexity of aesthetic assessment, allowing crop algorithms to be evaluated easily against a large test set.
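
    The search step can be sketched as follows: seed a rectangle at the most salient cell of a coarse saliency grid and greedily expand it while the crop's mean saliency stays above a threshold. This is a toy Python version; the paper's optimization and its aesthetic scoring (including distraction handling) are more elaborate.

    def region_mean(sal, top, bottom, left, right):
        """Mean saliency of the inclusive rectangle [top..bottom] x [left..right]."""
        vals = [sal[i][j] for i in range(top, bottom + 1) for j in range(left, right + 1)]
        return sum(vals) / len(vals)

    def auto_crop(sal, min_mean=0.3):
        rows, cols = len(sal), len(sal[0])
        # Seed at the main subject: the single most salient cell.
        r, c = max(((i, j) for i in range(rows) for j in range(cols)),
                   key=lambda p: sal[p[0]][p[1]])
        top, bottom, left, right = r, r, c, c
        grew = True
        while grew:
            grew = False
            # Try extending each edge by one cell; keep the extension if the
            # enlarged crop's mean saliency stays above the threshold.
            for dt, db, dl, dr in ((-1, 0, 0, 0), (0, 1, 0, 0), (0, 0, -1, 0), (0, 0, 0, 1)):
                t, b, l, rt = top + dt, bottom + db, left + dl, right + dr
                if (t >= 0 and b < rows and l >= 0 and rt < cols
                        and region_mean(sal, t, b, l, rt) >= min_mean):
                    top, bottom, left, right = t, b, l, rt
                    grew = True
        return top, bottom, left, right

    saliency = [  # 0 = background; high values mark the subject
        [0.0, 0.1, 0.1, 0.0],
        [0.1, 0.8, 0.9, 0.1],
        [0.1, 0.7, 0.8, 0.0],
        [0.0, 0.1, 0.0, 0.0],
    ]
    print(auto_crop(saliency))  # (top, bottom, left, right) of the suggested crop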

  20. Automatic transmission system for vehicles

    SciTech Connect

    Takefuta, H.

    1987-02-24

    An automatic transmission system is described for vehicles having a friction clutch coupled to an internal combustion engine, a speed-change-gear transmission coupled to the clutch, a first actuator for operating the clutch in response to an electric signal, and a second actuator for operating the transmission in response to an electric signal. Also included are a means for producing condition data indicative of the vehicle's operating condition and a control means, responsive to at least the condition data, for controlling the operation of the first and second actuators to carry out the gear change operation of the transmission. The control means includes: (1) a storing means for storing first data representing a first gear change map, showing gear change characteristics for economical running, and second data representing a second gear change map, showing gear change characteristics for high-power-output running; (2) a signal generating means which has an operation lever movable along a predetermined gear shift pattern used for manual operation and which generates a command signal indicative of the position of the operation lever on the gear shift pattern; and (3) a means responsive to the command signal and the condition data for controlling the first and second actuators so as to carry out a gear change operation in one of several control modes, among them a first control mode in which the transmission is shifted to the gear position corresponding to the position of the operation lever.
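
    The two-map idea can be illustrated schematically: the same vehicle speed selects different gears depending on whether the economy map or the high-power map is active. The breakpoints below are invented for illustration and ignore throttle position and shift hysteresis.

    # Upshift speeds (km/h) out of each gear for the two stored maps.
    ECONOMY_MAP = {1: 15, 2: 30, 3: 50, 4: 70}   # shift up early for economy
    POWER_MAP = {1: 25, 2: 45, 3: 70, 4: 95}     # hold gears longer for power

    def select_gear(speed_kmh, gear_map):
        """Pick the lowest gear whose upshift speed has not yet been exceeded."""
        for gear, upshift in sorted(gear_map.items()):
            if speed_kmh < upshift:
                return gear
        return max(gear_map) + 1   # top gear

    for speed in (20, 40, 60, 80):
        print(f"{speed} km/h -> eco gear {select_gear(speed, ECONOMY_MAP)}, "
              f"power gear {select_gear(speed, POWER_MAP)}")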