Science.gov

Sample records for automatic text summarization

  1. A Comparison of Automatic Summarizers of Texts in Brazilian Portuguese

    E-print Network

    Kent, University of

    Pardo, Thiago Alexandre Salgueiro; Silla Jr., Carlos Nascimento; Kaestner, Celso Antônio Alves (Curitiba, PR, Brazil). Automatic Summarization (AS) in Brazil has…

  2. Automatic Text Structuring and Categorization As a First Step in Summarizing Legal Cases.

    ERIC Educational Resources Information Center

    Moens, Marie-Francine; Uyttendaele, Caroline

    1997-01-01

    Describes SALOMON (Summary and Analysis of Legal texts for Managing Online Needs), a system which automatically summarizes Belgian criminal cases to improve access to court decisions. Highlights include a text grammar represented as a semantic network; automatic abstracting; knowledge acquisition and representation; parsing; evaluation, including…

  3. Towards Answering Biological Questions with Experimental Evidence: Automatically Identifying Text that Summarize Image Content in Full-Text Articles

    E-print Network

    Yu, Hong

    Images (i.e., figures) are important experimental evidence typically reported in bioscience full-text articles. Biologists need to access images to validate research facts…

  4. Generic Text Summarization for Turkish

    E-print Network

    Cicekli, Ilyas

    In this paper, we propose a generic text summarization method that generates summaries of Turkish texts… We compare the summarization outputs with manual summaries of two newly created Turkish data sets…

  5. Using Text Messaging to Summarize Text

    ERIC Educational Resources Information Center

    Williams, Angela Ruffin

    2012-01-01

    Summarizing is an academic task that students are expected to have mastered by the time they enter college. However, experience has revealed quite the contrary. Summarization is often difficult to master as well as teach, but instructors in higher education can benefit greatly from the rapid advancement in mobile wireless technology devices, by…

  6. Figure-Associated Text Summarization and Evaluation

    PubMed Central

    Polepalli Ramesh, Balaji; Sethi, Ricky J.; Yu, Hong

    2015-01-01

    Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, that automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+. We evaluate the performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving F1 score of 0.66 and ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903). PMID:25643357
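
    The ROUGE-1 score reported above measures unigram overlap between a system summary and a reference summary. A minimal sketch of that computation follows; the tokenization and the example strings are illustrative, not drawn from the FigSum+ evaluation data:

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """ROUGE-1: unigram overlap between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())                 # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy figure-summary pair, not taken from the annotated corpus.
print(rouge_1("the figure shows increased cell growth over time",
              "figure 2 shows that cell growth increased over time"))
```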

  7. Task-Driven Dynamic Text Summarization

    ERIC Educational Resources Information Center

    Workman, Terri Elizabeth

    2011-01-01

    The objective of this work is to examine the efficacy of natural language processing (NLP) in summarizing bibliographic text for multiple purposes. Researchers have noted the accelerating growth of bibliographic databases. Information seekers using traditional information retrieval techniques when searching large bibliographic databases are often…

  8. Automatic Soccer Video Analysis and Summarization

    NASA Astrophysics Data System (ADS)

    Ekin, Ahmet; Tekalp, A. Murat

    2003-01-01

    We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level soccer video processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game, ii) all goals in a game, and iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust for soccer video processing. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and the robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.
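
    As a rough illustration of the cinematic features mentioned above, the sketch below flags a shot boundary when the color histograms of consecutive frames diverge. The frames, bin count, and threshold are invented stand-ins, not the authors' algorithm:

```python
import numpy as np

def colour_hist(frame, bins=16):
    """Normalized histogram of a frame given as an H x W x 3 uint8 array."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def shot_boundaries(frames, threshold=0.2):
    """Indices where the histogram distance to the previous frame exceeds the threshold."""
    cuts, prev = [], colour_hist(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = colour_hist(frame)
        if 0.5 * np.abs(cur - prev).sum() > threshold:   # L1 histogram distance in [0, 1]
            cuts.append(i)
        prev = cur
    return cuts

rng = np.random.default_rng(0)
frames = [rng.integers(0, 120, (48, 64, 3)) for _ in range(5)]      # darker "shot"
frames += [rng.integers(120, 256, (48, 64, 3)) for _ in range(5)]   # brighter "shot"
print(shot_boundaries(frames))   # expected: a cut at frame index 5
```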

  9. Text Summarization Branches Out Proceedings of the ACL-04 Workshop

    E-print Network

    Proceedings of the ACL-04 workshop Text Summarization Branches Out, organized by Marie-Francine Moens and Stan Szpakowicz… and useful approaches to summarization in a world of small mobile devices with their miniature screens. "Welcome to the ACL-2004 workshop Text Summarization Branches Out. Enjoy!"

  10. An Efficient Statistical Approach for Automatic Organic Chemistry Summarization

    E-print Network

    Avignon et des Pays de Vaucluse, Université de

    …an approach for summarizing scientific documents in Organic Chemistry that concentrates on numerical treatments… of Organic Chemistry articles. Over 1.7 million new Chemistry articles were published in 2007…

  11. Information Extraction and Text Summarization Using Linguistic Knowledge Acquisition.

    ERIC Educational Resources Information Center

    Rau, Lisa F.; And Others

    1989-01-01

    Describes SCISOR (System for Conceptual Information Summarization, Organization and Retrieval), a prototype intelligent information retrieval system that extracts useful information from large bodies of text. It overcomes limitations of linguistic coverage by applying a text processing strategy that is tolerant of unknown words and gaps in…

  12. Generic Text Summarization for Turkish

    E-print Network

    Cicekli, Ilyas

    …a generic text summarization method that generates summaries of Turkish texts by ranking sentences according to their scores… a score function that uses its feature values and the weights of the features. The best feature weights are learned…

  13. Integrating Cohesion and Coherence for Automatic Summarization

    E-print Network

    Laura Alonso i Alemany; Maria Fuentes Fort. This paper presents the integration of cohesive properties of text with coherence relations, to obtain… devoted to the adequacy of the resulting texts to a human user. Well-formedness, cohesion…

  14. Automatic Summarization of… (Computational Linguistics, 2002)

    E-print Network

    …of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus… annotators. The global evaluation shows that for the two more informal genres, our summarization system… exclusively use human transcripts of spoken dialogues. Intrinsic evaluations of text summaries usually use…

  15. Automatic Summarization of Mouse Gene Information by Clustering and Sentence Extraction from MEDLINE Abstracts

    PubMed Central

    Yang, Jianji; Cohen, Aaron M.; Hersh, William

    2007-01-01

    Tools to automatically summarize gene information from the literature have the potential to help genomics researchers better interpret gene expression data and investigate biological pathways. The task of finding information on sets of genes is common for genomic researchers, and PubMed is still the first choice because the most recent and original information can only be found in the unstructured, free text biomedical literature. However, finding information on a set of genes by manually searching and scanning the literature is a time-consuming and daunting task for scientists. We built and evaluated a query-based automatic summarizer of information on mouse genes studied in microarray experiments. The system clusters a set of genes by MeSH, GO and free text features and presents summaries for each gene by ranked sentences extracted from MEDLINE abstracts. Evaluation showed that the system seems to provide meaningful clusters and informative sentences are ranked higher by the algorithm. PMID:18693953
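
    A hedged sketch of the query-based sentence-extraction step: candidate sentences from MEDLINE abstracts are ranked by TF-IDF cosine similarity to a gene query. The vectorizer, gene name, and example sentences are illustrative; the actual system also used MeSH and GO features:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_sentences(query, sentences, top_k=2):
    """Rank abstract sentences by TF-IDF cosine similarity to a gene query."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform([query] + sentences)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    order = scores.argsort()[::-1][:top_k]
    return [sentences[i] for i in order]

sentences = [
    "Brca1 is required for DNA double-strand break repair.",
    "Mice were housed under standard laboratory conditions.",
    "Brca1 expression was elevated in mammary tumors.",
]
print(rank_sentences("Brca1 DNA repair", sentences))
```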

  16. Ranking, Labeling, and Summarizing Short Text in Social Media 

    E-print Network

    Khabiri, Elham

    2013-04-18

    One of the key features driving the growth and success of the Social Web is large-scale participation through user-contributed content – often through short text in social media. Unlike traditional long-form documents – e.g., Web pages, blog posts...

  17. This text first summarizes what can be the respective…

    E-print Network

    Nugues, Pierre

    …and dialogue. It then describes three examples of verbal and written interaction systems in virtual reality… an information extraction system can benefit from such a tool. Finally, the text describes a virtual workbench… in virtual worlds. Keywords: virtual reality, conversational agents, spoken navigation, scene generation…

  18. Automatic summarization of voicemail messages using lexical and prosodic features 

    E-print Network

    Koumpis, Konstantinos; Renals, Steve

    This article presents trainable methods for extracting principal content words from voicemail messages. The short text summaries generated are suitable for mobile messaging applications. The system uses a set of classifiers ...

  19. Automatic summarization of voicemail messages using lexical and prosodic features

    E-print Network

    Edinburgh, University of

    …of SDR systems, operating on an archive of broadcast news, were evaluated as part of the Text REtrieval Conference (TREC)… different speech recognition systems, as well as human transcriptions of voicemail speech… information and communication systems that deal with audio and visual media has stimulated the need…

  1. Automatic Summarization of MEDLINE Citations for Evidence–Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
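
    Mean average precision (MAP), the main metric above, averages the precision observed at each rank where a relevant intervention is retrieved. A small self-contained sketch; the intervention names and relevance judgments are invented:

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked list against a set of relevant items."""
    hits, precisions = 0, []
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(relevant), 1)

def mean_average_precision(runs):
    """runs: list of (ranked_list, relevant_set) pairs, one per disease topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

run = (["aspirin", "placebo", "clopidogrel"], {"aspirin", "clopidogrel"})
print(mean_average_precision([run]))   # (1/1 + 2/3) / 2 ~= 0.83
```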

  2. A Study of Cognitive Mapping as a Means to Improve Summarization and Comprehension of Expository Text.

    ERIC Educational Resources Information Center

    Ruddell, Robert B.; Boyle, Owen F.

    1989-01-01

    Investigates the effects of cognitive mapping on written summarization and comprehension of expository text. Concludes that mapping appears to assist students in: (1) developing procedural knowledge resulting in more effective written summarization and (2) identifying and using supporting details in their essays. (MG)

  3. Science Text Comprehension: Drawing, Main Idea Selection, and Summarizing as Learning Strategies

    ERIC Educational Resources Information Center

    Leopold, Claudia; Leutner, Detlev

    2012-01-01

    The purpose of two experiments was to contrast instructions to generate drawings with two text-focused strategies--main idea selection (Exp. 1) and summarization (Exp. 2)--and to examine whether these strategies could help students learn from a chemistry science text. Both experiments followed a 2 x 2 design, with drawing strategy instructions…

  4. A Comparison of Two Strategies for Teaching Third Graders to Summarize Information Text

    ERIC Educational Resources Information Center

    Dromsky, Ann Marie

    2011-01-01

    Summarizing text is one of the most effective comprehension strategies (National Institute of Child Health and Human Development, 2000) and an effective way to learn from information text (Dole, Duffy, Roehler, & Pearson, 1991; Pressley & Woloshyn, 1995). In addition, much research supports the explicit instruction of such strategies as…

  5. A Joint Model of Text and Aspect Ratings for Sentiment Summarization

    E-print Network

    Cortes, Corinna

    /5 "Our waitress was rude", "Awful service" Value 5/5 "Good Greek food for the $", "Great price!" Figure 1, figure 1 summarizes a restaurant using aspects food, decor, service, and value plus a numeric rating out, calamari, or coarse-grained, e.g., food, decor, service. Sim- ilarly, extracted text can range from

  6. DiffNet: automatic differential functional summarization of dE-MAP networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2014-10-01

    The study of genetic interaction networks that respond to changing conditions is an emerging research problem. Recently, Bandyopadhyay et al. (2010) proposed a technique to construct a differential network (dE-MAP network) from two static gene interaction networks in order to map the interaction differences between them under environment or condition change (e.g., DNA-damaging agent). This differential network is then manually analyzed to conclude that DNA repair is differentially affected by the condition change. Unfortunately, manual construction of a differential functional summary from a dE-MAP network that summarizes all pertinent functional responses is time-consuming, laborious and error-prone, impeding large-scale analysis on it. To this end, we propose DiffNet, a novel data-driven algorithm that leverages Gene Ontology (GO) annotations to automatically summarize a dE-MAP network to obtain a high-level map of functional responses due to condition change. We tested DiffNet on the dynamic interaction networks following MMS treatment and demonstrated the superiority of our approach in generating differential functional summaries compared to state-of-the-art graph clustering methods. We studied the effects of parameters in DiffNet in controlling the quality of the summary. We also performed a case study that illustrates its utility. PMID:25009128

  7. Stemming Malay Text and Its Application in Automatic Text Categorization

    NASA Astrophysics Data System (ADS)

    Yasukawa, Michiko; Lim, Hui Tian; Yokoo, Hidetoshi

    In the Malay language, there are no conjugations and declensions, and affixes have important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb, or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversations, it is essential to use the precise words in formal speech or written texts. In Malay, to make sentences clear, derivative words are used. Derivation is achieved mainly by the use of affixes. There are approximately a hundred possible derivative forms of a root word in the written language of educated Malay. Therefore, the composition of Malay words may be complicated. Although there are several types of stemming algorithms available for text processing in English and some other languages, they cannot be used to overcome the difficulties in Malay word stemming. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems. It is essential to avoid both over-stemming and under-stemming errors. We have developed a new Malay stemmer (stemming algorithm) for removing inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The use of the set of rules is aimed at reducing the occurrence of under-stemming errors, while that of the dictionaries is believed to reduce the occurrence of over-stemming errors. We performed an experiment to evaluate the application of our stemmer in text mining software. For the experiment, the text data used were actual web pages collected from the World Wide Web to demonstrate the effectiveness of our Malay stemming algorithm. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
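
    A minimal sketch of the dictionary-checked affix stripping described above; the affix lists, rules, and word lists are toy illustrations rather than the authors' actual Malay resources:

```python
PREFIXES = ["mem", "men", "me", "ber", "di", "pe"]
SUFFIXES = ["kan", "an", "i"]
ROOT_WORDS = {"baca", "ajar", "main"}                     # root-word dictionary
DERIVATIVES = {"pembaca": "baca", "pelajar": "ajar"}      # derivative-word dictionary

def stem(word):
    """Strip affixes only when the result is a known root, to limit over-stemming."""
    if word in ROOT_WORDS:
        return word
    if word in DERIVATIVES:                # known derivative: trust the dictionary
        return DERIVATIVES[word]
    for prefix in PREFIXES:
        if not word.startswith(prefix):
            continue
        for suffix in [""] + SUFFIXES:
            if suffix and not word.endswith(suffix):
                continue
            candidate = word[len(prefix):len(word) - len(suffix) or None]
            if candidate in ROOT_WORDS:
                return candidate
    return word                            # prefer under-stemming to over-stemming

print(stem("membacakan"))   # -> baca
print(stem("pelajar"))      # -> ajar
print(stem("bermain"))      # -> main
```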

  8. Automatically generating extraction patterns from untagged text

    SciTech Connect

    Riloff, E.

    1996-12-31

    Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUC-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.
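
    The statistical step can be pictured as scoring each candidate extraction pattern by how strongly it associates with the relevant (preclassified) documents. The sketch below uses a relevance-rate-times-log-frequency score, one common formulation rather than necessarily the exact AutoSlog-TS ranking, and the pattern counts are invented:

```python
import math

def pattern_score(relevant_freq, total_freq):
    """Relevance rate weighted by log frequency (one common formulation)."""
    if total_freq == 0:
        return 0.0
    return (relevant_freq / total_freq) * math.log2(total_freq)

# Invented counts: how often each candidate pattern fires in relevant
# (preclassified) documents versus in the whole corpus.
candidates = {
    "<subject> was bombed": (45, 50),
    "<subject> was reported": (30, 300),
}
for pattern, (rel, total) in sorted(candidates.items(),
                                    key=lambda kv: pattern_score(*kv[1]),
                                    reverse=True):
    print(f"{pattern_score(rel, total):5.2f}  {pattern}")
```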

  9. Automatic Evaluation of Text Coherence: Models and Representations

    E-print Network

    Barzilay, Regina

    This paper investigates the automatic evaluation of text coherence for machine-generated texts. We introduce a fully… their distribution. Given a new text, the model evaluates its coherence by computing the probability of its entity…

  10. Information fusion for automatic text classification

    SciTech Connect

    Dasigi, V.; Mann, R.C.; Protopopescu, V.A.

    1996-08-01

    Analysis and classification of free text documents encompass decision-making processes that rely on several clues derived from text and other contextual information. When using multiple clues, it is generally not known a priori how these should be integrated into a decision. An algorithmic sensor based on Latent Semantic Indexing (LSI) (a recent successful method for text retrieval rather than classification) is the primary sensor used in our work, but its utility is limited by the reference library of documents. Thus, there is an important need to complement or at least supplement this sensor. We have developed a system that uses a neural network to integrate the LSI-based sensor with other clues derived from the text. This approach allows for systematic fusion of several information sources in order to determine a combined best decision about the category to which a document belongs.

  11. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  12. Automatic Web Site Summarization by Image Content: A Case Study with Logo and Trademark

    E-print Network

    Petrakis, Euripides G.M.

    …of corporate Web sites or of products presented there. The proposed method incorporates machine learning… The World Wide Web (WWW) has grown substantially in recent years… consistency of image content representation and high quality results, image-based summarization need…

  13. Document Difficulty Framework for Semi-Automatic Text Classification

    E-print Network

    Bellogin, Alejandro

    …provide much faster and cheaper classification than human experts. However, even though there have been… as the human effort required. On the other hand, manual classification is the best option for systems…

  14. Automatic extraction of relations between medical concepts in clinical texts

    E-print Network

    Harabagiu, Sanda M.

    …relations between medical problems, treatments, and tests mentioned in electronic medical records… When gold standard data for concepts and assertions were available, F1 was 73.7, precision was 72…

  15. Automatic Transliteration of Judeo-Arabic Texts into Arabic Script

    E-print Network

    Dershowitz, Nachum

    The Judeo-Arabic… Aramaic and Hebrew, sometimes modified according to Arabic morphological rules. Since the Arabic alphabet…

  16. Effects of Presentation Mode and Computer Familiarity on Summarization of Extended Texts

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2010-01-01

    Comparability studies on computer- and paper-based reading tests have focused on short texts and selected-response items via almost exclusively statistical modeling of test performance. The psychological effects of presentation mode and computer familiarity on individual students are under-researched. In this study, 157 students read extended…

  17. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

    Digital cameras are gradually replacing traditional flat-bed scanners as the main means of obtaining text information, owing to their usability, low cost and high resolution, and a large amount of research has been done on camera-based text understanding. Unfortunately, the arbitrary position of the camera lens relative to the text area can frequently cause perspective distortion, which most current OCR systems cannot manage, thus creating demand for automatic text rectification. Current rectification-related research has mainly focused on document images; distortion of natural scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as their surroundings. In the first step, linear segments are extracted from the region of interest, and a J-Linkage based clustering is performed, followed by some customized refinement, to estimate primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage is performed by inspecting the internal structure of characters, which involves analysis of pixels and connected components of text lines. Finally, VPs are verified and used to implement perspective rectification. Experiments demonstrate an increase in recognition rate and an improvement over some related algorithms.

  18. The Extent to Which Pre-Service Turkish Language and Literature Teachers Could Apply Summarizing Rules in Informative Texts

    ERIC Educational Resources Information Center

    Görgen, Izzet

    2015-01-01

    The purpose of the present study is to determine the extent to which pre-service Turkish Language and Literature teachers possess summarizing skill. Answers to the following questions were sought in the study: What is the summarizing skill level of the pre-service Turkish Language and Literature teachers? Which of the summarizing rules are…

  19. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing is a method, recently proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful in the problem of text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks, in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used in testing the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.
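
    The LSI step itself can be approximated with a truncated SVD of a term-document matrix. The sketch below uses scikit-learn and a toy corpus; the fusion with additional "logical sensors" and a neural network described above is not shown:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [                                   # toy corpus, not the TIPSTER data
    "neural networks for text classification",
    "latent semantic indexing of documents",
    "classification of text documents with neural networks",
]
tdm = TfidfVectorizer().fit_transform(docs)        # document-term matrix
lsi = TruncatedSVD(n_components=2, random_state=0) # keep a small number of factors
reduced = lsi.fit_transform(tdm)                   # documents in the LSI factor space
print(cosine_similarity(reduced))                  # document-document similarities
```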

  1. Automatic Evaluation of Search Ontologies in the Entertainment Domain using Text Classification

    E-print Network

    Elhadad, Michael

    …this ontology evaluation method on an ontology in the Movies domain that has been acquired automatically from… The proposed ontology evaluation method is general: it only relies on the possibility to automatically align…

  2. Syllabic Level Automatic Synchronization of Music Signals and Text Lyrics

    E-print Network

    Wang, Ye

    …of the same source. In karaoke, we may want to synchronize the lyrics to the music, so that a user can sing… not considered the fine-grained syllable-level alignment needed to make automatic karaoke systems feasible… of music analysis with respect to karaoke, but are not comparable as the source signal differs…

  3. Automatic Analysis of Descriptive Texts (James R. Cowls)

    E-print Network

    …the descriptions and related to parts of the plant in which we are interested. The resulting output is a standard… and the keywords in the text to assign each segment of the text to a particular part of the plant… knowledgeable in the subject matter of the text. The texts currently used are wild plant descriptions taken…

  4. Mining Knowledge from Text Collections Using Automatically Generated Metadata

    E-print Network

    Pierre, John M.

    In businesses and institutions, the amount of information existing in repositories of text… In this paper we describe an approach for mining knowledge from text collections… Businesses and institutions often have a great deal of information technology infrastructure devoted to…

  5. Automatic theory generation from analyst text files using coherence networks

    NASA Astrophysics Data System (ADS)

    Shaffer, Steven C.

    2014-05-01

    This paper describes a three-phase process of extracting knowledge from analyst textual reports. Phase 1 involves performing natural language processing on the source text to extract subject-predicate-object triples. In phase 2, these triples are then fed into a coherence network analysis process, using a genetic algorithm optimization. Finally, the highest-value subnetworks are processed into a semantic network graph for display. Initial work on a well-known data set (a Wikipedia article on Abraham Lincoln) has shown excellent results without any specific tuning. Next, we ran the process on the SYNthetic Counter-INsurgency (SYNCOIN) data set, developed at Penn State, yielding interesting and potentially useful results.

  6. High compression rate text summarization

    E-print Network

    Branavan, Satchuthananthavale Rasiah Kuhan

    2008-01-01

    This thesis focuses on methods for condensing large documents into highly concise summaries, achieving compression rates on par with human writers. While the need for such summaries in the current age of information overload ...

  7. Automatic Extraction of New Words from Japanese Texts using Generalized Forward-Backward Search

    E-print Network

    We present a novel new word extraction method from Japanese texts based on expected word frequencies. First, we compute expected word frequencies from…

  8. Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation

    E-print Network

    …completely is very difficult, since various ambiguities cannot be resolved solely by syntactic or semantic… or partial parsers (Smadja, 1991; Hindle, 1990) to acquiring specific patterns from texts. These tell us…

  9. Automatic Assessment of Non-Topical Properties of Text

    E-print Network

    …Doctor of Philosophy, Graduate Program in Communication, Information and Library Studies… This study takes some first steps towards automatic classification of texts with regard to non-topical properties… on each non-topical dimension. However, the experiments demonstrate that binary classification techniques…

  10. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    ERIC Educational Resources Information Center

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  11. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    PubMed Central

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction. PMID:26091207
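
    A hedged sketch of the underlying idea: estimate a box-counting dimension for the positions of a word and compare it with the same estimate after shuffling the text, a large difference marking a keyword candidate. The dimension estimator and the toy text below are simplified stand-ins for the paper's exact definitions:

```python
import random
import numpy as np

def box_counting_dimension(positions, text_len):
    """Rough box-counting dimension of word positions in a text of length text_len."""
    scales, counts = [], []
    size = text_len
    while size >= 2:
        scales.append(size)
        counts.append(len({p // size for p in positions}))   # occupied boxes at this scale
        size //= 2
    slope, _ = np.polyfit(np.log([text_len / s for s in scales]), np.log(counts), 1)
    return slope

words = ("the cat sat on the mat and the cat ran after the dog " * 20).split()
positions = [i for i, w in enumerate(words) if w == "cat"]

shuffled = words[:]
random.shuffle(shuffled)
shuffled_positions = [i for i, w in enumerate(shuffled) if w == "cat"]

# For this repetitive toy text both estimates are close to 1; in real texts
# keywords are expected to deviate much more from their shuffled counterparts.
print(box_counting_dimension(positions, len(words)),
      box_counting_dimension(shuffled_positions, len(words)))
```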

  12. Extractive summarization using complex networks and syntactic dependency

    NASA Astrophysics Data System (ADS)

    Amancio, Diego R.; Nunes, Maria G. V.; Oliveira, Osvaldo N.; Costa, Luciano da F.

    2012-02-01

    The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, so far only a few metrics of complex networks have been used and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method the Rouge score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable to produce good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the Rouge score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and treating text in general.
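
    A small illustration of the network-based extraction idea: sentences are nodes, edges connect sentences that share words, and a centrality measure ranks them. Degree centrality is used here as a simple stand-in for the betweenness, vulnerability, and diversity metrics studied in the paper; the example sentences are invented:

```python
import networkx as nx

def summarize(sentences, top_k=2):
    """Pick the top_k most central sentences in a shared-word sentence graph."""
    tokens = [set(s.lower().rstrip(".").split()) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if tokens[i] & tokens[j]:                 # sentences share at least one word
                graph.add_edge(i, j)
    centrality = nx.degree_centrality(graph)
    chosen = sorted(centrality, key=centrality.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(chosen)]     # keep document order

doc = [
    "Complex networks model texts as graphs of sentences.",
    "Centrality metrics rank the sentences of the graphs.",
    "The highest ranked sentences form the extractive summary.",
    "Soccer is played on grass.",
]
print(summarize(doc))
```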

  13. Implementation of Automatic Process of Edge Rotation Diagnostic System on J-TEXT Tokamak

    NASA Astrophysics Data System (ADS)

    Zhang, Zepin; Cheng, Zhifeng; Luo, Jian; Wang, Zhijiang; Zhang, Xiaolong; Hou, Saiying; Cheng, Cheng

    2014-08-01

    A spectral diagnostic control system (SDCS) is developed to implement automatic process of the edge rotation diagnostic system on the J-TEXT tokamak. The SDCS contains a control module, data operation module, data analysis module, and data upload module. The core of this system is a newly developed software “Spectra Assist”, which completes the whole process by coupling all related subroutines and servers. The results of data correction and calculated rotation are presented. In the daily discharge of J-TEXT, SDCS is proved to have a stable performance and high efficiency in completing the process of data acquisition, operation and results output.

  14. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach

    PubMed Central

    Ren, Xiang; El-Kishky, Ahmed; Wang, Chi; Han, Jiawei

    2015-01-01

    In today’s computerized and information-based society, we are soaked with vast amounts of text data, ranging from news articles, scientific publications, product reviews, to a wide range of textual information from social media. To unlock the value of these unstructured text data from various domains, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their types (e.g., people, product, food) in a scalable way. We demonstrate on real datasets including news articles and tweets how these typed entities aid in knowledge discovery and management. PMID:26705508

  15. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis.

    PubMed

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text "The North Wind and the Sun" were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) of the Laryngograph and 33 features from a prosodic analysis module were used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for a meaningful objective support for perceptual analysis. PMID:26136813
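
    The modelling step (mapping measured features to listener ratings with Support Vector Regression) can be sketched as below; the feature values, their selection, and the ratings are synthetic placeholders rather than the study's data:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows: one speaker each; columns: hypothetical features such as CFx, jitter,
# shimmer, F0 range, and pause ratio (all values are synthetic).
X = np.array([
    [2.1, 0.8, 1.2, 40.0, 0.10],
    [5.5, 2.1, 3.0, 25.0, 0.22],
    [3.2, 1.1, 1.8, 35.0, 0.15],
    [6.8, 2.9, 3.5, 20.0, 0.30],
])
y = np.array([0.5, 2.0, 1.0, 2.5])      # mean perceptual roughness ratings (0-3)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, y)
print(model.predict(X[:1]))             # predicted roughness for the first speaker
```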

  16. Semi-automatic image personalization tool for variable text insertion and replacement

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

    Image personalization is a widely used technique in personalized marketing, in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie, DirectSmile, and AlphaPicture, in order to produce this tailored marketing collateral, image templates need to be created manually by graphic designers, involving complex grid manipulation and detailed geometric adjustments. As a matter of fact, the image template design is highly manual, skill-demanding and costly, and essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which introduces flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  17. Automatic extraction of property norm-like data from large text corpora.

    PubMed

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134

  18. Automatic correction of grammatical errors in non-native English text

    E-print Network

    Lee, John Sie Yuen, 1977-

    2009-01-01

    Learning a foreign language requires much practice outside of the classroom. Computer-assisted language learning systems can help fill this need, and one desirable capability of such systems is the automatic correction of ...

  19. Exploring the Effects of Multimedia Learning on Pre-Service Teachers' Perceived and Actual Learning Performance: The Use of Embedded Summarized Texts in Educational Media

    ERIC Educational Resources Information Center

    Wu, Leon Yufeng; Yamanaka, Akio

    2013-01-01

    In light of the increased usage of instructional media for teaching and learning, the design of these media as aids to convey the content for learning can be crucial for effective learning outcomes. In this vein, the literature has given attention to how concurrent on-screen text can be designed using these media to enhance learning performance.…

  1. Texting

    ERIC Educational Resources Information Center

    Tilley, Carol L.

    2009-01-01

    With the increasing ranks of cell phone ownership is an increase in text messaging, or texting. During 2008, more than 2.5 trillion text messages were sent worldwide--that's an average of more than 400 messages for every person on the planet. Although many of the messages teenagers text each day are perhaps nothing more than "how r u?" or "c u…

  2. Evaluation of extractive voicemail summarization

    E-print Network

    Koumpis, Konstantinos; Renals, Steve

    2003-01-01

    This paper is about the evaluation of a system that generates short text summaries of voicemail messages, suitable for transmission as text messages. Our approach to summarization is based on a speech-recognized transcript ...

  3. Automatic Interpretation System Integrating Free-style Sentence Translation and Parallel Text Based Translation

    E-print Network

    Takahiro Ikeda, Shinichi Ando, Kenji Satoh, Akitoshi Okumura, Takao Watanabe (NEC Multimedia Research Labs). This paper proposes an automatic in…

  4. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  5. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    PubMed

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-01-01

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies. PMID:18999169
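
    The iterative pattern-learning idea can be sketched as a small bootstrapping loop: seed terms yield contextual patterns, and the patterns harvest new candidate terms for the next round. The seed, corpus, and pattern template below are toy illustrations, not the authors' actual extraction rules or ranking:

```python
import re

corpus = [
    "patients with diabetes were randomized",
    "patients with asthma were randomized",
    "patients with hypertension received placebo",
]
dictionary = {"diabetes"}                      # seed disease term

for _ in range(2):                             # a couple of bootstrap rounds
    # 1) learn contextual patterns around known terms
    contexts = set()
    for sentence in corpus:
        for term in dictionary:
            if term in sentence:
                left, right = sentence.split(term, 1)
                contexts.add((left, right))
    # 2) apply the patterns to harvest new candidate terms
    for left, right in contexts:
        pattern = re.escape(left) + r"(\w+)" + re.escape(right)
        for sentence in corpus:
            match = re.fullmatch(pattern, sentence)
            if match:
                dictionary.add(match.group(1))

print(sorted(dictionary))   # ['asthma', 'diabetes'] for this toy corpus
```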

  6. Experimenting with Automatic Text-to-Diagram Conversion: A Novel Teaching Aid for the Blind People

    ERIC Educational Resources Information Center

    Mukherjee, Anirban; Garain, Utpal; Biswas, Arindam

    2014-01-01

    Diagram describing texts are integral part of science and engineering subjects including geometry, physics, engineering drawing, etc. In order to understand such text, one, at first, tries to draw or perceive the underlying diagram. For perception of the blind students such diagrams need to be drawn in some non-visual accessible form like tactile…

  7. Automatically Producing Plot Unit Representations for Narrative Text (EMNLP 2010)

    E-print Network

    Riloff, Ellen

    In the 1980s, plot units were… research explores whether current NLP technology can be used to automatically produce plot unit…

  8. Semi-Automatic Grading of Students' Answers Written in Free Text

    ERIC Educational Resources Information Center

    Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto

    2011-01-01

    The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional…

  9. Automatic CEFR Level Prediction for Estonian Learner Text (Sowmya Vajjala)

    E-print Network

    While automated assessment is an active area of research for English, approaches… at a language teaching institute before starting to learn a language at a certain level, or serve as a guiding… of language acquisition… and take us a step closer towards the automated assessment of Estonian learner text. Keywords: Estonian…

  10. The Automatic Assessment of Free Text Answers Using a Modified BLEU Algorithm

    ERIC Educational Resources Information Center

    Noorbehbahani, F.; Kardan, A. A.

    2011-01-01

    e-Learning plays an undoubtedly important role in today's education and assessment is one of the most essential parts of any instruction-based learning process. Assessment is a common way to evaluate a student's knowledge regarding the concepts related to learning objectives. In this paper, a new method for assessing the free text answers of…

  11. Knowledge Acquisition from Texts: Using an Automatic Clustering Method Based on Noun-Modifier Relationship

    E-print Network

    LEXTER is a terminology extraction software (Bourigault et al., 1996). A corpus of French texts on any… Prior to the semantic analysis, morpho-syntactic analysis is performed by LEXTER to extract "candidate terms". Then, the knowledge engineer, assisted…

  12. Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums

    PubMed Central

    Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-01-01

    Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and determined, on the basis of the best regression models, for any request the probability of belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision of a test sample were calculated as a measure of quality for the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other approaches, 100% precision and 100% recall were realized in 18 (47%) out of the 38 categories in the test sample. For 35 (92%) categories, precision and recall were better than 80%. For some categories, the input variables (ie, “words”) also included variables from other categories, most often with a negative sign. For example, absence of words predictive for “menstruation” was a strong indicator for the category “pregnancy test.” Conclusions Our approach suggests a way of automatically classifying and analyzing unstructured information in Internet expert forums. The technique can perform a preliminary categorization of new requests and help Internet medical experts to better handle the mass of information and to give professional feedback. PMID:19632978
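
    The word-category association step above rests on Cramer's V over the contingency table of word presence versus category membership. A minimal sketch with an invented 2x2 table (the study averaged V over many words and categories):

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramer's V for a contingency table (rows: word present/absent; cols: in category or not)."""
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim)))

# Invented counts: requests containing the word "ovulation" versus membership
# in the "ovulation" category.
table = np.array([[40, 10],
                  [60, 878]])
print(round(cramers_v(table), 3))
```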

  13. EnvMine: A text-mining system for the automatic extraction of contextual information

    PubMed Central

    2010-01-01

    Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. Also the characterization must include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieve contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text, by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location includes also the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distance between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical variables of sampling sites, thus facilitating the performance of ecological analyses. EnvMine can also help in the development of standards for the annotation of environmental features. PMID:20515448

  14. Video Summarization via Crowdsourcing

    E-print Network

    Chen, Sheng-Wei

    Although video summarization has been studied extensively, existing schemes are neither lightweight nor generalizable to all types of video content. To generate accurate abstractions of all types of video, we propose a framework called Click2SMRY, which …

  15. An automatic system to identify heart disease risk factors in clinical texts over time.

    PubMed

    Chen, Qingcai; Li, Haodi; Tang, Buzhou; Wang, Xiaolong; Liu, Xin; Liu, Zengjian; Liu, Shu; Wang, Weida; Deng, Qiwen; Zhu, Suisong; Chen, Yangxin; Wang, Jingfeng

    2015-12-01

    Despite recent progress in prediction and prevention, heart disease remains a leading cause of death. One preliminary step in heart disease prediction and prevention is risk factor identification. Many studies have been proposed to identify risk factors associated with heart disease; however, none have attempted to identify all risk factors. In 2014, the National Center for Informatics for Integrating Biology and the Bedside (i2b2) issued a clinical natural language processing (NLP) challenge that involved a track (track 2) for identifying heart disease risk factors in clinical texts over time. This track aimed to identify medically relevant information related to heart disease risk and track its progression over sets of longitudinal patient medical records. Identification of tags and attributes associated with disease presence and progression, risk factors, and medications in the patient medical history was required. Our participation led to the development of a hybrid pipeline system based on both machine learning-based and rule-based approaches. Evaluation using the challenge corpus revealed that our system achieved an F1-score of 92.68%, making it the top-ranked system (without additional annotations) of the 2014 i2b2 clinical NLP challenge. PMID:26362344

  16. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    NASA Astrophysics Data System (ADS)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks in the database that lie in its wavelength neighborhood. We assume a database containing the peaks of all the elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.
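
    The weighting and ranking scheme described above closely parallels TF-IDF ranking in text retrieval. The sketch below is an illustrative reimplementation of that idea only (not the authors' software); the peak database, bin width and sample peaks are hypothetical:

```python
# Minimal sketch of the vector-space idea described above (not the authors'
# implementation): peaks are binned by wavelength, weighted by intensity and
# by the inverse of how many database peaks fall in the same neighbourhood,
# and elements are ranked by cosine similarity to the sample vector.
import numpy as np

def peak_vector(peaks, bins, neighbour_counts):
    """peaks: list of (wavelength_nm, intensity); returns a weighted vector."""
    v = np.zeros(len(bins) - 1)
    for wl, intensity in peaks:
        i = np.searchsorted(bins, wl) - 1
        if 0 <= i < len(v):
            v[i] += intensity / max(neighbour_counts[i], 1)  # IDF-like weight
    return v

def rank_elements(sample_peaks, element_db, bins, neighbour_counts):
    s = peak_vector(sample_peaks, bins, neighbour_counts)
    scores = {}
    for element, peaks in element_db.items():
        e = peak_vector(peaks, bins, neighbour_counts)
        denom = np.linalg.norm(s) * np.linalg.norm(e)
        scores[element] = float(s @ e / denom) if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical database and sample
bins = np.arange(200.0, 900.0, 0.5)                       # 0.5 nm wavelength bins
element_db = {"Fe": [(371.99, 1.0), (404.58, 0.6)],
              "Cu": [(324.75, 1.0), (327.40, 0.8)]}
neighbour_counts = np.ones(len(bins) - 1)                  # uniform peak density
sample = [(324.76, 900.0), (327.39, 700.0)]
print(rank_elements(sample, element_db, bins, neighbour_counts))  # Cu should rank first
```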

  17. Lexical Cohesion Based Topic Modeling for Summarization

    E-print Network

    Cicekli, Ilyas

    Lexical Cohesion Based Topic Modeling for Summarization Gonenc Ercan and Ilyas Cicekli Dept of sentences. Lexical chains have been used in summarization research to analyze the lexical cohesion structure advantage of more lexical cohesion clues. Our algorithm segments the text with respect to each topic

  18. Summarizing Large Data Sets

    E-print Network

    Noble, William Stafford

    Lecture slides on summarizing large data sets in machine learning ("there's no data like more data"), covering document, data, and image summarization and assay selection.

  19. QCS: a system for querying, clustering and summarizing documents.

    SciTech Connect

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O'Leary, Dianne P.; Conroy, John M.

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
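
    For readers who want a feel for the query-cluster-summarize flow, the following sketch wires together off-the-shelf components (TF-IDF plus truncated SVD as an LSI-style retrieval space, k-means on unit-length vectors as a stand-in for spherical k-means, and a nearest-to-centroid extract). It is emphatically not the QCS implementation, which uses sentence trimming, a hidden Markov model and a pivoted QR decomposition; all document texts below are invented:

```python
# A compressed sketch of a query -> cluster -> summarize pipeline in the spirit
# of QCS, built from off-the-shelf parts (NOT the QCS implementation).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

def qcs_like(query, docs, n_clusters=2, n_components=2, top_k=3):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs + [query])
    lsa = TruncatedSVD(n_components=n_components, random_state=0)
    Z = normalize(lsa.fit_transform(X))                 # LSI-style space, unit length
    doc_z, query_z = Z[:-1], Z[-1]
    # Retrieval: rank documents by cosine similarity to the query
    order = np.argsort(-(doc_z @ query_z))[:top_k]
    retrieved = doc_z[order]
    # Clustering: k-means on unit vectors approximates spherical k-means
    n_clusters = min(n_clusters, len(order))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(retrieved)
    # "Summary": the retrieved document closest to each cluster centroid
    summaries = {}
    for c in range(n_clusters):
        members = order[labels == c]
        centroid = doc_z[members].mean(axis=0)
        best = members[np.argmax(doc_z[members] @ centroid)]
        summaries[c] = docs[best]
    return summaries

docs = ["solar power plants generate renewable energy",
        "wind turbines also produce renewable energy",
        "the football match ended in a draw",
        "energy storage helps renewable power grids"]
print(qcs_like("renewable energy sources", docs))
```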

  20. QCS : a system for querying, clustering, and summarizing documents.

    SciTech Connect

    Dunlavy, Daniel M.

    2006-08-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence ''trimming'', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.

  1. User and Device Adaptation in Summarizing Sports Videos

    NASA Astrophysics Data System (ADS)

    Nitta, Naoko; Babaguchi, Noboru

    Video summarization is defined as creating a video summary which includes only the important scenes in the original video streams. In order to realize automatic video summarization, the significance of each scene needs to be determined. When targeting broadcast sports videos in particular, a play scene, which corresponds to a play, can be considered the basic scene unit. The significance of every play scene can generally be determined based on the importance of the play in the game. Furthermore, the following two issues should be considered: 1) what is important depends on each user's preferences, and 2) the summaries should be tailored to the media devices that each user has. Considering the above issues, this paper proposes a unified framework for user and device adaptation in summarizing broadcast sports videos. The proposed framework summarizes sports videos by selecting play scenes based not only on the importance of each play itself but also on the users' preferences, by using metadata, which describes the semantic content of videos with keywords, and user profiles, which describe users' preference degrees for those keywords. The selected scenes are then presented in a proper way using various types of media, such as video, image, or text, according to device profiles which describe the device type. We experimentally verified the effectiveness of user adaptation by examining how the generated summaries change with different preference degrees and by comparing our results with and without user profiles. The validity of device adaptation is also evaluated by conducting questionnaires using PCs and mobile phones as the media devices.
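
    A minimal sketch of the selection logic described above (the data structures, field names and numbers are hypothetical and stand in for the authors' metadata and profile formats): each play scene is scored by combining its own importance with the user's preference for its keywords, and scenes are then packed into a device-dependent time budget:

```python
# Hypothetical sketch, not the authors' system: combine play importance with
# user keyword preferences, then fill a device-dependent time budget.
def summarize_plays(scenes, user_profile, time_budget_s):
    """scenes: list of dicts with 'keywords', 'importance', 'duration_s'."""
    def score(scene):
        pref = max((user_profile.get(k, 0.0) for k in scene["keywords"]), default=0.0)
        return scene["importance"] * (1.0 + pref)

    selected, used = [], 0.0
    for scene in sorted(scenes, key=score, reverse=True):
        if used + scene["duration_s"] <= time_budget_s:
            selected.append(scene)
            used += scene["duration_s"]
    return selected

scenes = [
    {"keywords": ["home run", "Team A"], "importance": 0.9, "duration_s": 30},
    {"keywords": ["strikeout", "Team B"], "importance": 0.5, "duration_s": 20},
    {"keywords": ["walk", "Team A"], "importance": 0.2, "duration_s": 15},
]
user_profile = {"Team A": 1.0}           # this user mostly cares about Team A
mobile_budget = 45                        # seconds available on a small device
print([s["keywords"] for s in summarize_plays(scenes, user_profile, mobile_budget)])
```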

  2. Video summarization: methods and landscape

    NASA Astrophysics Data System (ADS)

    Barbieri, Mauro; Agnihotri, Lalitha; Dimitrova, Nevenka

    2003-11-01

    The ability to summarize and abstract information will be an essential part of intelligent behavior in consumer devices. Various summarization methods have been the topic of intensive research in the content-based video analysis community. Summarization in traditional information retrieval is a well understood problem. While there has been a lot of research in the multimedia community, there is no agreed-upon terminology and classification of the problems in this domain. Although the problem has been researched from different aspects, there is usually no distinction between the various dimensions of summarization. The goal of the paper is to provide basic definitions of widely used terms such as skimming, summarization, and highlighting. The different levels of summarization (local, global, and meta-level) are made explicit. We distinguish among the dimensions of task, content, and method and provide an extensive classification model for them. We map the existing summary extraction approaches in the literature into this model and classify the aspects of the proposed systems. In addition, we outline the evaluation methods and provide a brief survey. Finally, we propose future research directions based on the gaps we identified by analyzing existing systems in the literature.

  3. Generating Descriptions that Summarize Geospatial and Temporal Data Martin Molina

    E-print Network

    Molina, Martín

    … a knowledge-based method for automatically generating summaries of geospatial and temporal data … We show the ability of our method to generate certain types of geospatial and temporal descriptions.

  4. Multimodal Summarization of Complex Sentences Naushad UzZaman

    E-print Network

    Bigham, Jeffrey P.

    In this paper, we introduce the idea of automatically illustrating complex sentences as multimodal … to pictures, multimodal summaries provide additional clues of what happened, who did it, to whom, and how …

  5. Abstracts of Current Literature. Automatic Text Generation: Application …

    E-print Network

    Report Sublanguage. Chantal Contant, Département de linguistique et philologie, Université de Montréal. … and abstract as being related to computational linguistics or knowledge representation, resulting from …

  6. SUMMARIZATION AND INDEXING OF HUMAN ACTIVITY SEQUENCES Bi Song*, Namrata Vaswani**, Amit K. Roy-Chowdhury*

    E-print Network

    Vaswani, Namrata

    … used for indexing and summarizing (tracking) a real-life video sequence … on automatically tracking and indexing a real-life video sequence of different activities, based on learned models …

  7. NAACL-HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text Toward Plot Units: Automatic Affect State Analysis

    E-print Network

    Riloff, Ellen

    Amit Goyal, Ellen Riloff, and Hal Daumé III. … with characters in a story. This research represents a first step toward the automatic generation of plot units … We evaluate AESOP on a small collection of fables. In the 1980s, plot units (Lehnert, 1981) …

  8. Extractive Summarization of Voicemail using Lexical and Prosodic Feature Subset Selection 

    E-print Network

    Koumpis, Konstantinos; Renals, Steve; Niranjan, Mahesan

    2001-01-01

    This paper presents a novel data-driven approach to summarizing spoken audio transcripts utilizing lexical and prosodic features. The former are obtained from a speech recognizer and the latter are extracted automatically ...

  9. Submodularity and Big Data

    E-print Network

    Noble, William Stafford

    Submodularity and Big Data. Jeffrey A. Bilmes, Professor, Department of Electrical Engineering, University of Washington (http://melodi.ee.washington.edu/~bilmes). Lecture slides from May 3, 2013, covering document, speech, and general summarization for big data in machine learning.

  10. Event-centric Twitter photo summarization

    E-print Network

    Wen, Chung-Lin, S.M. Massachusetts Institute of Technology

    2014-01-01

    We develop a novel algorithm based on spectral geometry that summarizes a photo collection into a small subset that represents the collection well. While the definition of a good summarization might not be unique, we focus ...

  11. Natural Event Summarization School of Computer Science

    E-print Network

    Li, Tao

    Yexi Jiang and Tao Li (School of Computer Science, Florida International University, Miami, FL), with a co-author at IBM (perng@us.ibm.com). … event summarization, namely how to concisely summarize temporal events [14, 15, 25]. Current state-of-the-art event summarization …

  12. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is based on semantic audio segmentation and the detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Swing and applause sounds together form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games. This demonstrates that the proposed system is effective and computationally efficient enough to apply the technology to embedded consumer electronic devices.
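
    The swing detection relies on spotting impulsive onsets in the audio. The following rough sketch shows one generic way such an onset detector can work (short-time energy compared against a running background level); it is an assumption-laden illustration, not the authors' detector:

```python
# Rough sketch of impulse-onset detection on an audio signal (not the authors'
# system): frame the signal, compute short-time energy, and flag frames whose
# energy jumps well above the recent local average.
import numpy as np

def detect_onsets(signal, sr, frame_ms=20, ratio=4.0, history=10):
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    energy = np.array([np.sum(signal[i * frame_len:(i + 1) * frame_len] ** 2)
                       for i in range(n_frames)])
    onsets = []
    for i in range(history, n_frames):
        background = energy[i - history:i].mean() + 1e-12
        if energy[i] > ratio * background:
            onsets.append(i * frame_len / sr)   # onset time in seconds
    return onsets

# Synthetic example: quiet noise with a sharp impulse ("swing") at t = 1.0 s
sr = 16000
signal = 0.01 * np.random.randn(2 * sr)
signal[sr:sr + 200] += 0.8 * np.hanning(200)
print(detect_onsets(signal, sr))
```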

  13. Automatic classification of documents with an in-depth analysis of information extraction and automatic summarization

    E-print Network

    Hohm, Joseph Brandon, 1982-

    2004-01-01

    Today, annual information fabrication per capita exceeds two hundred and fifty megabytes. As the amount of data increases, classification and retrieval methods become more necessary to find relevant information. This thesis ...

  14. On the Application of Generic Summarization Algorithms to Music

    NASA Astrophysics Data System (ADS)

    Raposo, Francisco; Ribeiro, Ricardo; de Matos, David Martins

    2015-01-01

    Several generic summarization algorithms were developed in the past and successfully applied in fields such as text and speech summarization. In this paper, we review and apply these algorithms to music. To evaluate this summarization's performance, we adopt an extrinsic approach: we compare a Fado Genre Classifier's performance using truncated contiguous clips against the summaries extracted with those algorithms on 2 different datasets. We show that Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA) all improve classification performance in both datasets used for testing.

  15. Indexing of Arabic documents automatically based on lexical analysis

    E-print Network

    Molijy, Abdulrahman Al; Alsmadi, Izzat

    2012-01-01

    The continuous information explosion through the Internet and all information sources makes it necessary to perform all information processing activities automatically, in quick and reliable ways. In this paper, we propose and implement a method to automatically create an index for books written in the Arabic language. The process depends largely on text summarization and abstraction to collect the main topics and statements in the book. The process was evaluated in terms of accuracy and performance, and the results showed that it can effectively replace the effort of manually indexing books and documents, which can be very useful in information processing and retrieval applications.

  16. Contextual Text Mining

    ERIC Educational Resources Information Center

    Mei, Qiaozhu

    2009-01-01

    With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written, and the…

  17. Transcription and Summarization of Voicemail Speech 

    E-print Network

    Koumpis, Konstantinos; Renals, Steve

    This paper describes the development of a system to transcribe and summarize voicemail messages. The results of the research we present are two-fold. First, a hybrid connectionist approach to the Voicemail transcription task shows that competitive...

  18. Evaluation of Extractive Voicemail Summarization Konstantinos Koumpis and Steve Renals

    E-print Network

    Edinburgh, University of

    … Voicemail summarization has several features that differentiate it from conventional text … content of the message … feature extraction … of content words extracted from the original message transcription. Given this definition, we can frame …

  19. Customization in a Unified Framework for Summarizing Medical Literature

    E-print Network

    Schiffman, Barry

    N. Elhadad and M.-Y. Kan. … a tailored summary of relevant documents for either a physician or lay person. The approach takes advantage of regularities in medical literature text structure and content to fulfill identified user needs. Results …

  20. Customization in a Unified Framework for Summarizing Medical Literature

    E-print Network

    Schiffman, Barry

    N. Elhadad and M.-Y. Kan. … a tailored summary of relevant documents for either a physician or lay person. The approach takes advantage of regularities in medical literature text structure and content to fulfill identified user needs. Results …

  1. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF) has launched many mass digitization projects in order to give access to its collections. The indexing of digital documents on Gallica (the digital library of the BnF) relies on their textual content, obtained through service providers that use Optical Character Recognition (OCR) software. OCR software has become an increasingly complex system composed of several subsystems dedicated to the analysis and recognition of the elements in a page. However, the reliability of these systems remains an issue. Indeed, in some cases, errors appear in OCR outputs because of an accumulation of several errors at different levels in the OCR process. One of the frequent errors in OCR outputs is missed text components. The presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results for the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Pattern (LBP) descriptors, coupled with the OCR results to check their consistency. The experimental results show that our method detects 84.15% of the missed textual components, by comparing the OCR ALTO output files (produced by the service providers) to the document images.
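
    The texture descriptors mentioned above (Local Binary Patterns) are available in common image libraries. The sketch below illustrates, under our own simplifying assumptions rather than the BnF pipeline, how an LBP histogram of a page zone with no OCR output might be compared against a reference histogram built from zones the OCR did transcribe:

```python
# Illustrative sketch only (not the BnF pipeline): compare the LBP texture
# histogram of an image zone with no OCR output against a reference histogram
# built from zones the OCR did transcribe; a small histogram distance suggests
# the zone looks like text and was probably missed.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_patch, P=8, R=1):
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def looks_like_missed_text(candidate_patch, reference_hist, threshold=0.25):
    h = lbp_histogram(candidate_patch)
    chi2 = 0.5 * np.sum((h - reference_hist) ** 2 / (h + reference_hist + 1e-12))
    return chi2 < threshold   # texture similar to transcribed text zones

# Synthetic demo patches (random noise stands in for real page zones)
rng = np.random.default_rng(0)
reference = lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8))
candidate = rng.integers(0, 256, (64, 64)).astype(np.uint8)
print(looks_like_missed_text(candidate, reference))
```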

  2. Extrinsic Summarization Evaluation: A Decision Audit Task

    E-print Network

    Edinburgh, University of

    … the impact of automatic speech recognition (ASR) errors on user performance. We employ several evaluation … user satisfaction on an information retrieval task; users can adapt their browsing behavior to complete …

  3. Disease Related Knowledge Summarization Based on Deep Graph Search

    PubMed Central

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease-related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summaries from the biomedical literature. In this approach, Kullback-Leibler divergence combined with a mutual information metric is first used to extract disease-salient information. Then a deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out the weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method. PMID:26413521
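
    The first step, ranking disease-salient terms by Kullback-Leibler divergence, can be illustrated with a small sketch (not the authors' code; the two toy corpora and the add-one smoothing choice are ours):

```python
# Minimal sketch of the first step described above (not the authors' code):
# rank terms by their contribution to the KL divergence between a
# disease-focused corpus and a background corpus.
import math
from collections import Counter

def salient_terms(disease_docs, background_docs, top_k=5):
    d_counts = Counter(w for doc in disease_docs for w in doc.lower().split())
    b_counts = Counter(w for doc in background_docs for w in doc.lower().split())
    d_total, b_total = sum(d_counts.values()), sum(b_counts.values())
    vocab_size = len(set(d_counts) | set(b_counts))
    scores = {}
    for term, c in d_counts.items():
        p = c / d_total
        # add-one smoothing so unseen background terms do not blow up the ratio
        q = (b_counts[term] + 1) / (b_total + vocab_size)
        scores[term] = p * math.log(p / q)     # this term's KL contribution
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

disease_docs = ["bladder carcinoma is treated with intravesical therapy",
                "hematuria is a common symptom of bladder carcinoma"]
background_docs = ["the study enrolled patients at three hospitals",
                   "therapy outcomes were recorded for all patients"]
print(salient_terms(disease_docs, background_docs))
```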

  4. Disease Related Knowledge Summarization Based on Deep Graph Search.

    PubMed

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease-related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summaries from the biomedical literature. In this approach, Kullback-Leibler divergence combined with a mutual information metric is first used to extract disease-salient information. Then a deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out the weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method. PMID:26413521

  5. Summarizing phenotype evolution patterns from report cases.

    PubMed

    Taboada, María; Alvarez, Verónica; Martínez, Diego; Pilo, Belén; Robinson, Peter N; Sobrido, María J

    2012-11-01

    The need to represent and manage time is implicit in several reasoning processes in medicine. This is especially evident for neurodegenerative disorders, which are characterized by insidious onsets, progressive courses and variable combinations of clinical manifestations in each patient. Therefore, the availability of tools providing high-level descriptions of the evolution of phenotype manifestations from patient data is crucial to promote early disease recognition and optimize the diagnostic process. Although many case reports published in the literature provide little temporal information beyond key time references, such as disease onset, diagnosis or monitoring time, automatically comparing cases described by temporal clinical manifestation sequences can provide valuable knowledge about how the data evolve. In this paper, we demonstrate the usefulness of representing patient case reports of a neurodegenerative disorder as a set of temporal clinical manifestations semantically annotated with a domain phenotype ontology and registered with a time-stamped value. Novel techniques are presented to query and match sets of different manifestation sequences from multiple patient cases, with the aim of automatically inferring phenotype evolution patterns of generic patients for clinical studies. The method was applied to 25 patient case reports from a Spanish study of cerebrotendinous xanthomatosis. Five evolution patterns were automatically generated to analyze the patient data. The results were evaluated against 49 relevant conclusions drawn from the study, with a precision of 93% and a recall of 70%. PMID:23085966

  6. Reorganized text.

    PubMed

    2015-05-01

    Reorganized Text: In the Original Investigation titled “Patterns of Hospital Utilization for Head and Neck Cancer Care: Changing Demographics” posted online in the January 29, 2015, issue of JAMA Otolaryngology–Head & Neck Surgery (doi:10.1001 /jamaoto.2014.3603), information was copied within sections and text rearranged to accommodate Continuing Medical Education quiz formatting. The information from the topic statements of each paragraph in the Hypothesis Testing subsection of the Methods section was collected in a new first paragraph for that subsection, which reads as follows: “Several hypotheses regarding the causes of regionalization of HNCA care were tested using the NIS data: (1) increasing patient comorbidities over time, causing a shift in care to teaching institutions that would theoretically be better equipped to handle such increased comorbidities; (2) shifting of payer status; (3) increased proportion of prior radiation therapy; and (4) a higher fraction of more complex procedures being referred and performed at teaching institutions.” In addition, the phrase "As summarized in Table3," was added to the beginning of paragraph 6 of the Discussion section, and the call-out to Table 3 in the middle of that paragraph was deleted. Finally, paragraphs 6 and 7 of the Discussion section were combined. PMID:25996397

  7. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  8. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  9. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E. (Oak Ridge, TN); Elmore, Mark Thomas (Oak Ridge, TN); Reed, Joel Wesley (Knoxville, TN); Treadwell, Jim N. (Louisville, TN); Samatova, Nagiza Faridovna (Oak Ridge, TN)

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  10. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  11. Video summarization for energy efficient wireless streaming

    NASA Astrophysics Data System (ADS)

    Li, Zhu; Zhai, Fan; Katsaggelos, Aggelos K.

    2005-07-01

    With the proliferation of camera-equipped cell phones and the deployment of higher-data-rate 2.5G and 3G infrastructure systems, providing consumers with a video-equipped cellular communication infrastructure is highly desirable and can drive the development of a large number of valuable applications. However, for an uplink wireless channel, both the bandwidth and the battery energy in a mobile phone are limited for video communications. In this paper, we pursue an energy-efficient video communication solution through joint video summarization and transmission adaptation over a slow fading wireless channel. Coding and modulation schemes and the packet transmission strategy are optimized and adapted to the unique packet arrival and delay characteristics of the video summaries. In addition to the optimal solution, we also propose a heuristic solution that is greedy but has close to optimal performance. The operational energy efficiency versus summary distortion performance is characterized under an optimal summarization setting. Simulation results show the advantage of the proposed scheme with respect to energy efficiency and video transmission quality.

  12. Effective Replays and Summarization of Virtual Experiences

    PubMed Central

    Ponto, Kevin; Kohlmann, Joe; Gleicher, Michael

    2012-01-01

    Direct replays of a user's experience in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods. PMID:22402688

  13. Medical Textbook Summarization and Guided Navigation using Statistical Sentence Extraction

    PubMed Central

    Whalen, Gregory

    2005-01-01

    We present a method for automated medical textbook and encyclopedia summarization. Using statistical sentence extraction and semantic relationships, we extract sentences from text returned as part of an existing textbook search (similar to a book index). Our system guides users to the information they desire by summarizing the content of each relevant chapter or section returned through the search. The summary is tailored to contain sentences that specifically address the user’s search terms. Our clustering method selects sentences that contain concepts specifically addressing the context of the query term in each of the returned sections. Our method examines conceptual relationships from the UMLS and selects clusters of concepts using Expectation Maximization (EM). Sentences associated with the concept clusters are shown to the user. We evaluated whether our extracted summary provides a suitable answer to the user’s question. PMID:16779153
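
    The clustering step, selecting concept clusters with Expectation Maximization, can be approximated with a generic Gaussian mixture model, as in the hedged sketch below (the sentences, concept list and bag-of-concepts representation are invented for illustration and stand in for the authors' UMLS-based processing):

```python
# Rough sketch of the clustering step (not the authors' UMLS-based system):
# represent each candidate sentence by which query-related concepts it
# mentions, cluster the sentences with a Gaussian mixture fitted by EM, and
# keep sentences from the cluster most associated with the query concepts.
import numpy as np
from sklearn.mixture import GaussianMixture

def em_select(sentences, concepts, query_concepts, n_clusters=2):
    # Bag-of-concepts representation for each sentence
    X = np.array([[1.0 if c in s.lower() else 0.0 for c in concepts]
                  for s in sentences])
    labels = GaussianMixture(n_components=n_clusters, random_state=0).fit_predict(X)
    # Score each cluster by how often it mentions the query concepts
    q_idx = [concepts.index(c) for c in query_concepts]
    best = max(range(n_clusters),
               key=lambda c: X[labels == c][:, q_idx].sum())
    return [s for s, l in zip(sentences, labels) if l == best]

sentences = ["Aspirin inhibits platelet aggregation.",
             "Platelet aggregation is reduced by aspirin therapy.",
             "The liver metabolizes many drugs.",
             "Hepatic metabolism affects drug clearance."]
concepts = ["aspirin", "platelet", "liver", "hepatic", "metabolism"]
print(em_select(sentences, concepts, query_concepts=["aspirin", "platelet"]))
```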

  14. Large Scale Language Modeling in Automatic Speech Recognition

    E-print Network

    Cortes, Corinna

    Ciprian Chelba, Dan Bikel, Maria … for a variety of automatic speech recognition tasks in Google. We summarize results on Voice Search and a few … the string W is broken into sentences, or other segments such as utterances in automatic speech recognition …

  15. WOLF; automatic typing program

    USGS Publications Warehouse

    Evenden, G.I.

    1982-01-01

    A FORTRAN IV program for the Hewlett-Packard 1000 series computer provides for automatic typing operations and can, when employed with the manufacturer's text editor, provide a system to greatly facilitate the preparation of reports, letters and other text. The input text and embedded control data can perform nearly all of the functions of a typist. A few of the features available are centering, titles, footnotes, indentation, page numbering (including Roman numerals), automatic paragraphing, and two forms of tab operations. This documentation contains both a user and a technical description of the program.

  16. Machine Translation from Text

    NASA Astrophysics Data System (ADS)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  17. Automatic speaker recognition system

    NASA Astrophysics Data System (ADS)

    Higgins, Alan; Naylor, Joe

    1984-07-01

    The Defense Communications Division of ITT (ITTDCD) has developed an automatic speaker recognition (ASR) system that meets the functional requirements defined in NRL's Statement of Work. This report is organized as follows. Chapter 2 is a short history of the development of the ASR system, both the algorithm and the implementation. Chapter 3 describes the methodology of system testing, and Chapter 4 summarizes test results. In Chapter 5, some additional testing performed using GFM test material is discussed. Conclusions derived from the contract work are given in Chapter 6.

  18. Automatic Speech Recognition

    NASA Astrophysics Data System (ADS)

    Potamianos, Gerasimos; Lamel, Lori; Wölfel, Matthias; Huang, Jing; Marcheret, Etienne; Barras, Claude; Zhu, Xuan; McDonough, John; Hernando, Javier; Macho, Dusan; Nadeu, Climent

    Automatic speech recognition (ASR) is a critical component for CHIL services. For example, it provides the input to higher-level technologies, such as summarization and question answering, as discussed in Chapter 8. In the spirit of ubiquitous computing, the goal of ASR in CHIL is to achieve a high performance using far-field sensors (networks of microphone arrays and distributed far-field microphones). However, close-talking microphones are also of interest, as they are used to benchmark ASR system development by providing a best-case acoustic channel scenario to compare against.

  19. Traitement Automatique des Langues Naturelles, Marseille, 2014 Porting a Summarizer to the French Language

    E-print Network

    Rémi Bois, Johannes Leveling, Lorraine Goeuriot, Gareth J. F. Jones, Liadh … We describe the porting of the English-language REZIME text summarizer to the French language. REZIME … learning techniques, using statistical, syntactic and lexical features which are computed based …

  20. Extractive vs. NLG-based Abstractive Summarization of Evaluative Text: The Effect of Corpus Controversiality

    E-print Network

    … to this task, including variance, information entropy, and measures of inter-rater reliability (e.g., Fleiss …) … to develop our own based on information entropy. Summary evaluation is a challenging open research area …

  1. Toward Extractive Summarization of Multimodal Documents

    E-print Network

    Carberry, Sandra

    Peng Wu and Sandra Carberry. … the texts … [The excerpt contains two example information graphics: "Plastic is popular: more consumers are using plastic to pay for gas" (percentage of gas bought with credit or debit, 2003-2006), and a graphic on rising sea levels in the Seattle area.]

  2. Automatic transmission

    SciTech Connect

    Miura, M.; Aoki, H.

    1988-02-02

    An automatic transmission is described comprising: an automatic transmission mechanism portion comprising a single planetary gear unit and a dual planetary gear unit; carriers of both of the planetary gear units that are integral with one another; an input means for inputting torque to the automatic transmission mechanism, clutches for operatively connecting predetermined ones of planetary gear elements of both of the planetary gear units to the input means and braking means for restricting the rotation of predetermined ones of planetary gear elements of both of the planetary gear units. The clutches are disposed adjacent one another at an end portion of the transmission for defining a clutch portion of the transmission; a first clutch portion which is attachable to the automatic transmission mechanism portion for comprising the clutch portion when attached thereto; a second clutch portion that is attachable to the automatic transmission mechanism portion in place of the first clutch portion for comprising the clutch portion when so attached. The first clutch portion comprises a first clutch for operatively connecting the input means to a ring gear of the single planetary gear unit and a second clutch for operatively connecting the input means to a single gear of the automatic transmission mechanism portion. The second clutch portion comprises the first clutch, the second clutch, and a third clutch for operatively connecting the input member to a ring gear of the dual planetary gear unit.

  3. More than a "Basic Skill": Breaking down the Complexities of Summarizing for ABE/ESL Learners

    ERIC Educational Resources Information Center

    Ouellette-Schramm, Jennifer

    2015-01-01

    This article describes the complex cognitive and linguistic challenges of summarizing expository text at vocabulary, syntactic, and rhetorical levels. It then outlines activities to help ABE/ESL learners develop corresponding skills.

  4. Automatic analysis of medical dialogue in the home hemodialysis domain : structure induction and summarization

    E-print Network

    Lacson, Ronilda Covar, 1968-

    2005-01-01

    Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging ...

  5. Discourse Analysis and Structuring Text.

    ERIC Educational Resources Information Center

    Pace, Ann Jaffe

    1980-01-01

    Reviews the kinds of discourse analyses that are currently being undertaken, summarizes research findings, and makes suggestions based on these findings for structuring texts to be used for instructional or informational purposes. (Author/MER)

  6. Automatically Inducing Ontologies from Corpora Inderjeet Mani

    E-print Network

    Inderjeet Mani, Department of Linguistics, Georgetown University. In this paper, we describe a system that automatically induces an ontology from any large on-line text collection … relationships among them …

  7. Recent progress in automatically extracting information from the pharmacogenomic literature

    PubMed Central

    Garten, Yael; Coulet, Adrien; Altman, Russ B

    2011-01-01

    The biomedical literature holds our understanding of pharmacogenomics, but it is dispersed across many journals. In order to integrate our knowledge, connect important facts across publications and generate new hypotheses, we must organize and encode the contents of the literature. By creating databases of structured pharmacogenomic knowledge, we can make the value of the literature much greater than the sum of the individual reports. We can, for example, generate candidate gene lists or interpret surprising hits in genome-wide association studies. Text mining automatically adds structure to the unstructured knowledge embedded in millions of publications, and recent years have seen a surge in work on biomedical text mining, some specific to the pharmacogenomics literature. These methods enable extraction of specific types of information and can also provide answers to general, systemic queries. In this article, we describe the main tasks of text mining in the context of pharmacogenomics, summarize recent applications and anticipate the next phase of text mining applications. PMID:21047206

  8. Text Mining.

    ERIC Educational Resources Information Center

    Trybula, Walter J.

    1999-01-01

    Reviews the state of research in text mining, focusing on newer developments. The intent is to describe the disparate investigations currently included under the term text mining and provide a cohesive structure for these efforts. A summary of research identifies key organizations responsible for pushing the development of text mining. A section…

  9. Text Mining Nonnegative Matrix Factorization

    E-print Network

    Kunkle, Tom

    Department of Mathematics, North Carolina State University, Raleigh, NC. SIAM-SEAS, Charleston, 3/25/2005. The slides cover the vector space model (1960s and 1970s) and Gerard Salton's information retrieval system SMART (System for the Mechanical Analysis and Retrieval of Text, also glossed as Salton's Magical Automatic Retriever of Text), leading to nonnegative matrix factorization for text mining.

  10. Text Sets.

    ERIC Educational Resources Information Center

    Giorgis, Cyndi; Johnson, Nancy J.

    2002-01-01

    Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)

  11. Automatic Analysis of Plot for Story Rewriting Harry Halpin

    E-print Network

    Harry Halpin, School of Informatics, University of Edinburgh. A method for automatic plot analysis of narrative texts that uses … recalls the story. Our method of automatic plot analysis enables the tutoring system to automatically …

  12. VIDEO SUMMARIZATION BY VIDEO STRUCTURE ANALYSIS AND GRAPH OPTIMIZATION

    E-print Network

    Lyu, Michael R.

    Shi Lu, Irwin King … a video summarization method that combines video structure analysis and graph optimization. First, we analyze the structure of the video and find the boundaries of video scenes, then we calculate each scene …

  13. Video Summarization Based on User Interaction Dan R. Olsen Jr.

    E-print Network

    Olsen Jr., Dan R.

    Video protocols such as MOVE Networks, Flash Server and Microsoft Smooth HD all have the property … Dan R. Olsen Jr., Brigham Young University. … information about each play. In contrast with previous video summarization work, this paper describes how …

  14. Scientific Text Processing

    NASA Astrophysics Data System (ADS)

    Goossens, Michel; Herwijnen, Eric Van

    Aspects of text processing important for the scientific community are discussed, and an overview of currently available software is presented. Progress on standardization efforts in the area of document exchange (SGML), document formatting (DSSSL), document presentation (SPDL), fonts (ISO 9541) and character codes (Unicode and ISO 10646) is described. An elementary particle naming scheme for use with LATEX and SGML is proposed. LATEX, PostScript, SGML and desk-top publishing allow electronic submission of articles to publishers, and printing on demand. Advantages of standardization are illustrated by the description of a system which can exchange documents between different word processors and automatically extract bibliographic data for a library database.

  15. Automatic transmission

    SciTech Connect

    Ohkubo, M.

    1988-02-16

    An automatic transmission is described combining a stator reversing type torque converter and speed changer having first and second sun gears comprising: (a) a planetary gear train composed of first and second planetary gears sharing one planetary carrier in common; (b) a clutch and requisite brakes to control the planetary gear train; and (c) a speed-increasing or speed-decreasing mechanism is installed both in between a turbine shaft coupled to a turbine of the stator reversing type torque converter and the first sun gear of the speed changer, and in between a stator shaft coupled to a reversing stator and the second sun gear of the speed changer.

  16. Automatic transmission

    SciTech Connect

    Miki, N.

    1988-10-11

    This patent describes an automatic transmission including a fluid torque converter, a first gear unit having three forward-speed gears and a single reverse gear, a second gear unit having a low-speed gear and a high-speed gear, and a hydraulic control system, the hydraulic control system comprising: a source of pressurized fluid; a first shift valve for controlling the shifting between the first-speed gear and the second-speed gear of the first gear unit; a second shift valve for controlling the shifting between the second-speed gear and the third-speed gear of the first gear unit; a third shift valve equipped with a spool having two positions for controlling the shifting between the low-speed gear and the high-speed gear of the second gear unit; a manual selector valve having a plurality of shift positions for distributing the pressurized fluid supply from the source of pressurized fluid to the first, second and third shift valves respectively; first, second and third solenoid valves corresponding to the first, second and third shift valves, respectively for independently controlling the operation of the respective shift valves, thereby establishing a six forward-speed automatic transmission by combining the low-speed gear and the high-speed gear of the second gear unit with each of the first-speed gear, the second speed gear and the third-speed gear of the first gear unit; and means to fixedly position the spool of the third shift valve at one of the two positions by supplying the pressurized fluid to the third shift valve when the manual selector valve is shifted to a particular shift position, thereby locking the second gear unit in one of low-speed gear and the high-speed gear, whereby the six forward-speed automatic transmission is converted to a three forward-speed automatic transmission when the manual selector valve is shifted to the particular shift position.

  17. Automatic transmission

    SciTech Connect

    Aoki, H.

    1989-03-21

    An automatic transmission is described, comprising: a torque converter including an impeller having a connected member, a turbine having an input member and a reactor; and an automatic transmission mechanism having first to third clutches and plural gear units including a single planetary gear unit with a ring gear and a dual planetary gear unit with a ring gear. The single and dual planetary gear units have respective carriers integrally coupled with each other and respective sun gears integrally coupled with each other, the input member of the turbine being coupled with the ring gear of the single planetary gear unit through the first clutch, and being coupled with the sun gear through the second clutch. The connected member of the impeller is coupled with the ring gear of the dual planetary gear unit, the ring gear of the dual planetary gear unit is made to be restrained as required, and the carrier is coupled with an output member.

  18. Combining Linguistic and Machine Learning Techniques for Email Summarization

    E-print Network

    Evelyne Tzoukermann, Bell Laboratories, Lucent Technologies, 700 Mountain Avenue, Murray Hill, NJ 07974. … learning models applied to the natural language task of summarizing email messages through topic phrase …

  19. Automatic transmission

    SciTech Connect

    Hamane, M.; Ohri, H.

    1989-03-21

    This patent describes an automatic transmission connected between a drive shaft and a driven shaft and comprising: a planetary gear mechanism including a first gear driven by the drive shaft, a second gear operatively engaged with the first gear to transmit speed change output to the driven shaft, and a third gear operatively engaged with the second gear to control the operation thereof; centrifugally operated clutch means for driving the first gear and the second gear. It also includes a ratchet type one-way clutch for permitting rotation of the third gear in the same direction as that of the drive shaft but preventing rotation in the reverse direction; the clutch means comprising a ratchet pawl supporting plate coaxially disposed relative to the drive shaft and integrally connected to the third gear, the ratchet pawl supporting plate including outwardly projecting radial projections united with one another at base portions thereof.

  20. Automatic Processing of Foreign Language Documents

    E-print Network

    G. Salton. … of the automatic text processing methods has been the inability to automatically handle foreign language texts … as input foreign language documents and queries. The foreign language texts are automatically processed …

  1. A Qualitative Study on the Use of Summarizing Strategies in Elementary Education

    ERIC Educational Resources Information Center

    Susar Kirmizi, Fatma; Akkaya, Nevin

    2011-01-01

    The objective of this study is to reveal how well summarizing strategies are used by Grade 4 and Grade 5 students as a reading comprehension strategy. This study was conducted in Buca, Izmir and the document analysis method, a qualitative research strategy, was employed. The study used a text titled "Environmental Pollution" and an "Evaluation…

  2. Event detection and summarization in American football broadcast video

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Sezan, M. Ibrahim

    2001-12-01

    We propose a framework for event detection and summary generation in football broadcast video. First, we formulate summarization as a play detection problem, with a play being defined as the most basic segment of time during which the ball is in play. Then we propose both deterministic and probabilistic approaches to the detection of the plays. The detected plays are concatenated to generate a compact, time-compressed summary of the original video. Such a summary is complete in the sense that it contains every meaningful action of the underlying game, and it also serves as a much better starting point for higher-level summarization and other analyses than the original video does. Based on the summary, we also propose an audio-based hierarchical summarization method. Experimental results show the proposed methods work very well on consumer-grade platforms.

  3. Untangling Text Data Mining Marti A. Hearst

    E-print Network

    Marti A. Hearst, School of Information Management & Systems, University of California, Berkeley. The possibilities for data mining from large text collections are virtually untapped. Text expresses a vast, rich … automatically. Perhaps for this reason, there has been little work in text data mining to date, and most …

  4. Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events

    E-print Network

    Afantenos, Stergos D; Stamatopoulos, P; Halatsis, C

    2007-01-01

    In this paper we present a fresh look at the problem of summarizing evolving events from multiple sources. After a discussion concerning the nature of evolving events, we introduce a distinction between linearly and non-linearly evolving events. We then present a general methodology for the automatic creation of summaries from evolving events. At its heart lie the notions of Synchronic and Diachronic cross-document Relations (SDRs), whose aim is the identification of similarities and differences between sources, from a synchronic and a diachronic perspective. SDRs do not connect documents or textual elements found therein, but structures one might call messages. Applying this methodology yields a set of messages and the SDRs connecting them, that is, a graph which we call a grid. We show how such a grid can be considered as the starting point of a Natural Language Generation system. The methodology is evaluated in two case studies, one for linearly evolving events (descriptions of football matc...

  5. Video Analytics for Indexing, Summarization and Searching of Video Archives

    SciTech Connect

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

    This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  6. Upper-Intermediate-Level ESL Students' Summarizing in English

    ERIC Educational Resources Information Center

    Vorobel, Oksana; Kim, Deoksoon

    2011-01-01

    This qualitative instrumental case study explores various factors that might influence upper-intermediate-level English as a second language (ESL) students' summarizing from a sociocultural perspective. The study was conducted in a formal classroom setting, during a reading and writing class in the English Language Institute at a university in the…

  7. Web-page Classification through Summarization Zheng Chen2

    E-print Network

    Yang, Qiang

    techniques due to the labor-intensive nature of human editing. On a first glance, Web-page classification canWeb-page Classification through Summarization Dou Shen1 Zheng Chen2 Qiang Yang3 Hua-Jun Zeng2 Benyu and Technology Clearwater Bay Kowloon, Hong Kong qyang@cs.ust.hk ABSTRACT Web-page classification is much more

  8. Historical Recall and Precision: Summarizing Generated Hypotheses Richard Zanibbi

    E-print Network

    Zanibbi, Richard

    Historical Recall and Precision: Summarizing Generated Hypotheses Richard Zanibbi Centre and computing the recall and precision of this set: we call these the `historical recall' and `historical precision.' Using table cell detection examples, we demonstrate how historical recall and precision along

  9. Investigation of Learners' Perceptions for Video Summarization and Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Chen, Sherry Y.

    2012-01-01

    Recently, multimedia-based learning has become widespread in educational settings. A number of studies investigate how to develop effective techniques to manage a huge volume of video sources, such as summarization and recommendation. However, few studies examine how these techniques affect learners' perceptions in multimedia learning systems. This…

  10. California DOT 1. Briefly summarize your current pavement smoothness requirements.

    E-print Network

    California DOT 1. Briefly summarize your current pavement smoothness requirements. For HMA pavement to OGFC placed on existing pavement not constructed under the same project. If concrete pavement is placed ordered. 39-1.12B Straightedge The top layer of HMA pavement must not vary from the lower edge of a 12-foot

  11. Wisconsin DOT 1. Briefly summarize your current pavement smoothness requirements.

    E-print Network

    Wisconsin DOT 1. Briefly summarize your current pavement smoothness requirements. We currently-contact profiling equipment. Most PCC pavements are profiled using lightweight profilers when the project is still closed to traffic. Most HMA pavements are profiled using high speed profilers (with the same measuring

  12. Alabama DOT 1. Briefly summarize your current pavement smoothness requirements.

    E-print Network

    Alabama DOT 1. Briefly summarize your current pavement smoothness requirements. Concrete pavement string applied parallel to the surface. The surface is checked 1' inside the edges of the pavement, consolidated and refinished. High areas shall be cut down and refinished. The pavement is also checked

  13. Evaluation of Summarization Schemes for Learning in Streams

    E-print Network

    Chawla, Nitesh V.

    Evaluation of Summarization Schemes for Learning in Streams. Alec Pawling, Nitesh V. Chawla. ...frequency quantiles, and prove bounds on the worst-case error introduced in computing information entropy when the data is in the form of a stream from an unknown, possibly changing, distribution. We present

  14. Building a Sentiment Summarizer for Local Service Reviews

    E-print Network

    Cortes, Corinna

    Building a Sentiment Summarizer for Local Service Reviews Sasha Blair-Goldensohn Google Inc. 76@google.com Tyler Neylon Google Inc. 1600 Amphitheatre Parkway Mountain View, CA 94043 tylern@google.com George A. In this study we specifically look at the problem of summarizing opinions of local services. This designation

  15. Multiscale Histograms: Summarizing Topological Relations in Large Spatial Datasets

    E-print Network

    Zhou, Xiaofang

    Multiscale Histograms: Summarizing Topological Relations in Large Spatial Datasets Xuemin Lin Qing datasets demonstrated that the approximate multiscale histogram techniques may improve the accuracy for the real datasets. 1 Introduction Research in spatial database systems has a great impact on the technical

  16. Summarization Techniques for Visualization of Large Multidimensional Datasets

    E-print Network

    Young, R. Michael

    Summarization Techniques for Visualization of Large Multidimensional Datasets Technical Report TR- mensional datasets within a limited display area, without overwhelming the user. In this report, we discuss and explore large, complex datasets [16, 35]. Visualization techniques assist users in managing and displaying

  17. Automatic transmission

    SciTech Connect

    Miura, M.; Inuzuka, T.

    1986-08-26

    1. An automatic transmission with four forward speeds and one reverse position, is described which consists of: an input shaft; an output member; first and second planetary gear sets each having a sun gear, a ring gear and a carrier supporting a pinion in mesh with the sun gear and ring gear; the carrier of the first gear set, the ring gear of the second gear set and the output member all being connected; the ring gear of the first gear set connected to the carrier of the second gear set; a first clutch means for selectively connecting the input shaft to the sun gear of the first gear set, including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a second clutch means for selectively connecting the input shaft to the sun gear of the second gear set a third clutch means for selectively connecting the input shaft to the carrier of the second gear set including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a first drive-establishing means for selectively preventing rotation of the ring gear of the first gear set and the carrier of the second gear set in only one direction and, alternatively, in any direction; a second drive-establishing means for selectively preventing rotation of the sun gear of the second gear set; and a drum being open to the first planetary gear set, with a cylindrical intermediate wall, an inner peripheral wall and outer peripheral wall and forming the hydraulic servos of the first and third clutch means between the intermediate wall and the inner peripheral wall and between the intermediate wall and the outer peripheral wall respectively.

  18. What's yours and what's mine: Determining Intellectual Attribution in Scientific Text

    E-print Network

    the structure of scientific argumentation in articles can help in tasks such as automatic summarization to presenting the innovative scientific claim. Instead, one establishes other, well-known scientific facts

  19. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity. PMID:24801112
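
    A minimal sketch of an entropy-based patch-heterogeneity score per frame, in the spirit of the HIP index described above; the patch size, histogram binning, and averaging are assumptions rather than the authors' exact definition.

        import numpy as np

        def patch_entropy(patch, bins=32):
            """Shannon entropy of the intensity histogram of a grayscale patch."""
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def frame_heterogeneity(frame, patch=16):
            """Average patch entropy over one frame; higher means more heterogeneous."""
            h, w = frame.shape[:2]
            scores = [patch_entropy(frame[y:y + patch, x:x + patch])
                      for y in range(0, h - patch + 1, patch)
                      for x in range(0, w - patch + 1, patch)]
            return float(np.mean(scores)) if scores else 0.0

        # A HIP-like curve is then [frame_heterogeneity(f) for f in video_frames];
        # its local extrema can seed key-frame candidates.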

  20. An extended framework for adaptive playback-based video summarization

    NASA Astrophysics Data System (ADS)

    Peker, Kadir A.; Divakaran, Ajay

    2003-11-01

    In our previous work, we described an adaptive fast playback framework for video summarization where we changed the playback rate using the motion activity feature so as to maintain a constant "pace." This method provides an effective way of skimming through video, especially when the motion is not too complex and the background is mostly still, such as in surveillance video. In this paper, we present an extended summarization framework that, in addition to motion activity, uses semantic cues such as face or skin color appearance, speech and music detection, or other domain dependent semantically significant events to control the playback rate. The semantic features we use are computationally inexpensive and can be computed in compressed domain, yet are robust, reliable, and have a wide range of applicability across different content types. The presented framework also allows for adaptive summaries based on preference, for example, to include more dramatic vs. action elements, or vice versa. The user can switch at any time between the skimming and the normal playback modes. The continuity of the video is preserved, and complete omission of segments that may be important to the user is avoided by using adaptive fast playback instead of skipping over long segments. The rule-set and the input parameters can be further modified to fit a certain domain or application. Our framework can be used by itself, or as a subsequent presentation stage for a summary produced by any other summarization technique that relies on generating a sub-set of the content.
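
    A minimal sketch of the rate-control idea: playback speeds up when motion activity is low and is capped when semantic cues (face, speech, music) are detected. The scaling rule, cue names, and limits below are assumptions for illustration, not the paper's rule set.

        def playback_rate(motion_activity, cues, target_pace=1.0,
                          max_rate=8.0, semantic_cap=1.5):
            """
            motion_activity: nonnegative float, higher means more motion.
            cues: set of detected semantic cues for the segment, e.g. {"face", "speech"}.
            Returns a playback-speed multiplier that keeps perceived pace roughly constant.
            """
            if motion_activity <= 0:
                rate = max_rate
            else:
                rate = min(max_rate, target_pace / motion_activity)
            if cues:  # slow down when semantically significant events are present
                rate = min(rate, semantic_cap)
            return max(1.0, rate)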

  1. Video summarization and personalization for pervasive mobile devices

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2001-12-01

    We have designed and implemented a video semantic summarization system, which includes an MPEG-7 compliant annotation interface, a semantic summarization middleware, a real-time MPEG-1/2 video transcoder on PCs, and an application interface on color/black-and-white Palm-OS PDAs. We designed a video annotation tool, VideoAnn, to annotate semantic labels associated with video shots. Videos are first segmented into shots based on their visual-audio characteristics. They are played back using an interactive interface, which facilitates and speeds up the annotation process. Users can annotate the video content with the units of temporal shots or spatial regions. The annotated results are stored in the MPEG-7 XML format. We also designed and implemented a video transmission system, Universal Tuner, for wireless video streaming. This system transcodes MPEG-1/2 videos or live TV broadcasting videos to black-and-white or indexed-color Palm OS devices. In our system, the complexity of multimedia compression and decompression algorithms is adaptively partitioned between the encoder and decoder. On the client end, users can access the summarized video based on their preferences, time, keywords, as well as the transmission bandwidth and the remaining battery power on the pervasive devices.

  2. Thesaurus-Based Automatic Book Indexing.

    ERIC Educational Resources Information Center

    Dillon, Martin

    1982-01-01

    Describes technique for automatic book indexing requiring dictionary of terms with text strings that count as instances of term and text in form suitable for processing by text formatter. Results of experimental application to portion of book text are presented, including measures of precision and recall. Ten references are noted. (EJS)

  3. Applying a sunburst visualization to summarize user navigation sequences.

    PubMed

    Rodden, Kerry

    2014-01-01

    For many Web-based applications, it's important to be able to analyze the paths users have taken through a site--for example, to understand how they're discovering engaging content. These paths are difficult to summarize visually because of the underlying data's complexity. A Google researcher applied a sunburst visualization to this problem, after simplifying the data into a hierarchical format. The resulting visualization was successful in YouTube and is widely referenced and accessed. The code for the visualization is available as open source. PMID:25248198
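
    A minimal sketch of the data-simplification step: flat navigation paths are rolled up into a nested hierarchy of visit counts, which is the form a sunburst layout consumes. The dictionary layout is an assumption, not the structure used in the published visualization.

        def paths_to_hierarchy(paths):
            """
            paths: iterable of page-visit sequences, e.g. [["home", "search", "watch"], ...].
            Returns a nested dict {"name", "count", "children"} suitable for a sunburst layout.
            """
            root = {"name": "root", "count": 0, "children": {}}
            for path in paths:
                root["count"] += 1
                node = root
                for step in path:
                    node = node["children"].setdefault(
                        step, {"name": step, "count": 0, "children": {}})
                    node["count"] += 1
            return root

    Feeding the returned structure to any sunburst renderer reproduces the roll-up view of user paths described above.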

  4. Capturing User Reading Behaviors for Personalized Document Summarization

    SciTech Connect

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce superior personalized document summaries compared with all the other methods, in that the summaries generated by our algorithm better satisfy a user's personal preferences.
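
    A minimal sketch of one way to fold inferred reading preferences into sentence scoring: a generic salience score is blended with a preference weight derived from, for example, dwell time on related content. The blending formula and inputs are assumptions; the paper's behavior model (facial expressions, gaze, reading durations) is richer than this.

        def personalized_scores(salience, preference, alpha=0.5):
            """
            salience: generic importance score per sentence (e.g., TF-IDF based).
            preference: inferred user-interest score per sentence (e.g., from dwell time).
            alpha: trade-off between generic salience and personal preference.
            """
            top = max(preference) if preference and max(preference) > 0 else 1.0
            return [alpha * s + (1 - alpha) * (p / top)
                    for s, p in zip(salience, preference)]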

  5. Automatic Acquisition of Subcategorization Frames from Tagged Text

    E-print Network

    the subcategorization frames of verbs, as shown by (1). (1) a. I expected [NP the man who smoked NP] to eat ice-cream b. I doubted [NP the man who liked to eat ice-cream NP] Current high-coverage parsers tend to use

  6. Events in Emerging Text Types (eETTs) -Borovets, Bulgaria, pages 2331 Summarizing Threads in Blogs Using Opinion Polarity

    E-print Network

    on sentimentality; 3 a: an idea colored by emotion b: the emotional significance of a passage or expression textual genres expressing subjective content by means of emotions, feelings, sentiments, moods or opinions, the concepts of emotions, feelings, sentiments, moods and opinions need to be defined with precision. Emotion

  7. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.

  8. A Graph Summarization Algorithm Based on RFID Logistics

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Hu, Kongfa; Lu, Zhipeng; Zhao, Li; Chen, Ling

    Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. The volume of data generated by a typical RFID application will be enormous, as each item will generate a complete history of all the individual locations that it occupied at every point in time. The movement trails of such RFID data form a gigantic commodity flow graph representing the locations and durations of the path stages traversed by each item. In this paper, we use graphs to construct a warehouse of RFID commodity flows, and introduce a database-style operation to summarize graphs, which produces a summary graph by grouping nodes based on user-selected node attributes and further allows users to control the hierarchy of summaries. It cuts down the size of the graphs and makes it convenient for users to study just the shrunk graph they are interested in. Through extensive experiments, we demonstrate the effectiveness and efficiency of the proposed method.
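
    A minimal sketch of the grouping operation: nodes are merged into super-nodes by a user-selected attribute and parallel edges are collapsed into counted super-edges. The edge-list representation and attribute handling are assumptions for illustration.

        from collections import defaultdict

        def summarize_graph(nodes, edges, group_by):
            """
            nodes: dict node_id -> attribute dict, e.g. {"location": "warehouse_A"}.
            edges: iterable of (u, v) pairs from the commodity flow graph.
            group_by: attribute used to merge nodes into super-nodes.
            Returns (super_nodes, super_edges) where super_edges maps pairs to counts.
            """
            group = {n: attrs.get(group_by, "unknown") for n, attrs in nodes.items()}
            super_nodes = set(group.values())
            super_edges = defaultdict(int)
            for u, v in edges:
                gu, gv = group[u], group[v]
                if gu != gv:
                    super_edges[(gu, gv)] += 1
            return super_nodes, dict(super_edges)

    Grouping repeatedly by coarser attributes gives the user-controlled hierarchy of summaries mentioned above.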

  9. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  10. An anatomy of automatism.

    PubMed

    Mackay, R D

    2015-07-01

    The automatism defence has been described as a quagmire of law and as presenting an intractable problem. Why is this so? This paper will analyse and explore the current legal position on automatism. In so doing, it will identify the problems which the case law has created, including the distinction between sane and insane automatism and the status of the 'external factor doctrine', and comment briefly on recent reform proposals. PMID:26378105

  11. Summarization and visualization of target trajectories from massive video archives

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Narasimha, Pramod L.; Topiwala, Pankaj

    2009-05-01

    Video, especially in massive video archives, is by nature a dense information medium. Compactly presenting the activities of targets of interest provides an efficient and cost-saving way to analyze the content of the video. In this paper, we propose a video content analysis system to summarize and visualize the trajectories of targets from massive video archives. We first present an adaptive appearance-based algorithm to robustly track the targets in a particle filtering framework. It provides high performance while facilitating implementation in hardware with parallel processing. A phase correlation algorithm is used to estimate the motion of the observation platform, which is then compensated for in order to extract the independent trajectories of the targets. Based on the trajectory information, we develop an interface for browsing the videos which enables direct manipulation of the video. The user can scroll over objects to view their trajectories and, if interested, click on an object and drag it along the displayed path. The actual video is played in synchrony with the mouse movement.

  12. Automatic differentiation bibliography

    SciTech Connect

    Corliss, G.F.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  13. A Language Independent Algorithm for Single and Multiple Document Summarization

    E-print Network

    Kavi, Krishna

    ://www-nlpir.nist.gov/projects/duc/ · E.g.: Supervised learning (Teufel 97), Unsupervised extraction (Salton 97), TextRank

  14. A novel tool for assessing and summarizing the built environment

    PubMed Central

    2012-01-01

    Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying scales of community and describe the built environment at those scales. Our collaboration with community partners at all stages of the tool development, data collection, and dissemination of results provides a model for engaging the community in an active research program. PMID:23075269

  15. An Evaluation Road Map for Summarization Research Breck Baldwin,1

    E-print Network

    Radev, Dragomir R.

    : text, graphics, audio, video. 1 Baldwin Language Technologies, breck@linc.cis.upenn.edu. 2 Department of Defense, rldonaw@super.org. 3 Information Sciences Institute, University of Southern California, hovy@isi@mitre.org. 6 Information Sciences Institute, University of Southern California, marcu@isi.edu. 7 Columbia

  16. Automatic Log Analysis System Integration

    E-print Network

    Maguire Jr., Gerald Q.

    A machine learning system, called the Awesome Automatic Log Analysis Application (AALAA), is used at Ericsson. To automate this analysis process, a machine learning system has been developed, the Awesome Automatic Log Analysis

  17. Reviewing “Text Mining” : Textual Data Mining

    NASA Astrophysics Data System (ADS)

    Yasuda, Akio

    The objective of this paper is to give an overview of text mining, or textual data mining, in Japan from a practical perspective. Text mining is the technology used to analyze large volumes of textual data under various parameters for the purpose of extracting useful knowledge and information. The essence of “mining” is the discovery of knowledge or information, and the target of text mining is to objectively discover and extract knowledge, facts, and meaningful relationships from text documents. This paper summarizes the related disciplines and application fields involved in text mining, and introduces features and application examples of text mining tools.

  18. Text documents as social networks

    NASA Astrophysics Data System (ADS)

    Balinsky, Helen; Balinsky, Alexander; Simske, Steven J.

    2012-03-01

    The extraction of keywords and features is a fundamental problem in text data mining. Document processing applications directly depend on the quality and speed of the identification of salient terms and phrases. Applications as disparate as automatic document classification, information visualization, filtering and security policy enforcement all rely on the quality of automatically extracted keywords. Recently, a novel approach to rapid change detection in data streams and documents has been developed. It is based on ideas from image processing and in particular on the Helmholtz Principle from the Gestalt Theory of human perception. By modeling a document as a one-parameter family of graphs with its sentences or paragraphs defining the vertex set and with edges defined by Helmholtz's principle, we demonstrated that for some range of the parameters, the resulting graph becomes a small-world network. In this article we investigate the natural orientation of edges in such small world networks. For two connected sentences, we can say which one is the first and which one is the second, according to their position in a document. This will make such a graph look like a small WWW-type network and PageRank type algorithms will produce interesting ranking of nodes in such a document.
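
    A minimal sketch of the graph construction and ranking: sentences become nodes, edges are oriented by document order, and a PageRank-style iteration ranks the nodes. The word-overlap edge test below stands in for the Helmholtz-principle criterion used in the paper, so it is an assumption rather than the authors' method.

        def sentence_graph(sentences, min_shared=2):
            """Directed edges i -> j (i precedes j) when two sentences share enough words."""
            toks = [set(s.lower().split()) for s in sentences]
            return [(i, j) for i in range(len(sentences))
                    for j in range(i + 1, len(sentences))
                    if len(toks[i] & toks[j]) >= min_shared]

        def pagerank(n, edges, d=0.85, iters=50):
            """Plain power-iteration PageRank over a directed edge list (no dangling-node fix)."""
            out = [0] * n
            for i, _ in edges:
                out[i] += 1
            rank = [1.0 / n] * n
            for _ in range(iters):
                new = [(1 - d) / n] * n
                for i, j in edges:
                    new[j] += d * rank[i] / out[i]
                rank = new
            return rank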

  19. Automated de-identification of free-text medical records

    E-print Network

    Neamatullah, Ishna

    2006-01-01

    This paper presents a de-identification study at the Harvard-MIT Division of Health Science and Technology (HST) to automatically de-identify confidential patient information from text medical records used in intensive ...

  20. Are extractive text summarisation techniques portable to broadcast news? 

    E-print Network

    Christensen, Heidi; Gotoh, Yoshihiko; Kolluru, BalaKrishna; Renals, Steve

    2003-01-01

    In this paper we report on a series of experiments which compare the effect of individual features on both text and speech summarisation, the effect of basing the speech summaries on automatic speech recognition transcripts ...

  1. Automatic amino acid analyzer

    NASA Technical Reports Server (NTRS)

    Berdahl, B. J.; Carle, G. C.; Oyama, V. I.

    1971-01-01

    Analyzer operates unattended for up to 15 hours. It has an automatic sample injection system and can be programmed. All fluid-flow valve switching is accomplished pneumatically from miniature three-way solenoid pilot valves.

  2. Automatic natural language parsing

    SciTech Connect

    Sparck Jones, K.; Wilks, Y.

    1985-01-01

    This collection of papers on automatic natural language parsing examines research and development in language processing over the past decade. It focuses on current trends toward a phrase structure grammar and deterministic parsing.

  3. Exploring the style-technique interaction in extractive summarization of broadcast news. 

    E-print Network

    Kolluru, BalaKrishna; Christensen, Heidi; Gotoh, Yoshihiko; Renals, Steve

    2003-01-01

    In this paper we seek to explore the interaction between the style of a broadcast news story and its summarization technique. We report the performance of three different summarization techniques on broadcast news stories, ...

  4. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F. (Cambridge, MA); Kassakian, John G. (Newton, MA); Caloggero, Anthony J. (Lynn, MA); Rhodes, Bruce (Dorchester, MA); Otten, David (Newton, MA); Rasmussen, Neil (Sudbury, MA)

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  5. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
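
    A minimal sketch of the spatially addressable quadtree idea behind the 2-D meshes: cells are split recursively while a user-supplied refinement test asks for it. The predicate and cell representation are assumptions; the actual generator derives refinement from the solid model and error estimates.

        def build_quadtree(x, y, size, needs_refinement, depth=0, max_depth=6):
            """Recursively subdivide a square cell; children are addressable by quadrant index."""
            cell = {"x": x, "y": y, "size": size, "children": None}
            if depth < max_depth and needs_refinement(x, y, size):
                half = size / 2.0
                cell["children"] = [
                    build_quadtree(x,        y,        half, needs_refinement, depth + 1, max_depth),
                    build_quadtree(x + half, y,        half, needs_refinement, depth + 1, max_depth),
                    build_quadtree(x,        y + half, half, needs_refinement, depth + 1, max_depth),
                    build_quadtree(x + half, y + half, half, needs_refinement, depth + 1, max_depth),
                ]
            return cell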

  6. On Automatic Plagiarism Detection Based on n-Grams Comparison

    E-print Network

    Rosso, Paolo

    On Automatic Plagiarism Detection Based on n-Grams Comparison Alberto Barrón-Cedeño and Paolo automatic plagiarism detection is carried out considering a reference corpus, a suspicious text when considering low-level word n-grams comparisons (n = {2, 3}). Keywords: Plagiarism detection

  7. Statistical Detection of Local Coherence Relations in Narrative Recall and Summarization Data

    E-print Network

    Golden, Richard M.

    Statistical Detection of Local Coherence Relations in Narrative Recall and Summarization Data differences in recall and summarization production data as a function of reproductive and semantic coherence a dominant role in characterizing differences between recall and summarization. Moreover, the methodology

  8. Scenario forms for web information seeking and summarizing in bone marrow transplantation

    E-print Network

    Scenario forms for web information seeking and summarizing in bone marrow transplantation Margit the user-centered interface of a summarization system for physicians in Bone Marrow Transplantation (BMT). This paper presents the user interface of a summarization system for physicians in Bone Marrow

  9. Improving Text Recall with Multiple Summaries

    ERIC Educational Resources Information Center

    van der Meij, Hans; van der Meij, Jan

    2012-01-01

    Background. QuikScan (QS) is an innovative design that aims to improve accessibility, comprehensibility, and subsequent recall of expository text by means of frequent within-document summaries that are formatted as numbered list items. The numbers in the QS summaries correspond to numbers placed in the body of the document where the summarized

  10. Automatic Discrimination of Emotion from Spoken Finnish

    ERIC Educational Resources Information Center

    Toivanen, Juhani; Vayrynen, Eero; Seppanen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic…

  11. Automatic Video Shot Detection from MPEG Stream

    E-print Network

    Fan, Jianping

    Automatic Video Shot Detection from MPEG Stream. Jianping Fan, Department of Computer Science. Why do we need video shots? (a) Text retrieval: keyword extraction, indexing, document storage, reverse file indexing. (b) Database query: entity extraction.

  12. Text + Time Search & Analytics

    E-print Network

    Waldmann, Uwe

    Text + Time Search & Analytics. Klaus Berberich (kberberi@mpi-inf.mpg.de).

  13. Automatic audio and manual transcripts alignment, time-code transfer and selection of exact transcripts

    E-print Network

    ): the quality of the automatic transcript depends on the recognizer. If audio-related texts are available. Automatic audio and manual transcripts alignment, time-code transfer and selection of exact transcripts focuses on automatic processing of sibling resources of audio and written documents, such as available

  14. On the Relevance of Search Space Reduction in Automatic Plagiarism Detection

    E-print Network

    Rosso, Paolo

    On the Relevance of Search Space Reduction in Automatic Plagiarism Detection. Sobre la importancia de texto. Abstract: In automatic plagiarism detection with reference, the text fragments a problem when we consider performance and precision. In this paper, we approach automatic plagiarism

  15. Writing Home/Decolonizing Text(s)

    ERIC Educational Resources Information Center

    Asher, Nina

    2009-01-01

    The article draws on postcolonial and feminist theories, combined with critical reflection and autobiography, and argues for generating decolonizing texts as one way to write and reclaim home in a postcolonial world. Colonizers leave home to seek power and control elsewhere, and the colonized suffer loss of home as they know it. This dislocation…

  16. Proceedings A General Feature Space for Automatic Verb Classification

    E-print Network

    Stevenson, Suzanne

    bottleneck. Because verbs play a central role in the syntactic and semantic interpretation of a sentence, much research has focused on automatically learning properties of verbs from text corpora

  17. Automatically Generating Wikipedia Articles: A Structure-Aware Approach

    E-print Network

    Sauper, Christina Joan

    In this paper, we investigate an approach for creating a comprehensive textual overview of a subject composed of information drawn from the Internet. We use the high-level structure of human-authored texts to automatically ...

  18. Fully automatic telemetry data processor

    NASA Technical Reports Server (NTRS)

    Cox, F. B.; Keipert, F. A.; Lee, R. C.

    1968-01-01

    Satellite Telemetry Automatic Reduction System /STARS 2/, a fully automatic computer-controlled telemetry data processor, maximizes data recovery, reduces turnaround time, increases flexibility, and improves operational efficiency. The system incorporates a CDC 3200 computer as its central element.

  19. Automatic Web Spreadsheet Data

    E-print Network

    Cafarella, Michael J.

    Automatic Web Spreadsheet Data Extraction. Shirley Zhe Chen, Michael Cafarella. SSW 2013. Our Web crawl spreadsheet with either a hierarchical left or top attribute region.

  20. Automatic Dance Lesson Generation

    ERIC Educational Resources Information Center

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…

  1. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.

  2. Automatic multiple applicator electrophoresis

    NASA Technical Reports Server (NTRS)

    Grunbaum, B. W.

    1977-01-01

    Easy-to-use, economical device permits electrophoresis on all known supporting media. System includes automatic multiple-sample applicator, sample holder, and electrophoresis apparatus. System has potential applicability to fields of taxonomy, immunology, and genetics. Apparatus is also used for electrofocusing.

  3. XML and Free Text.

    ERIC Educational Resources Information Center

    Riggs, Ken Roger

    2002-01-01

    Discusses problems with marking free text, text that is either natural language or semigrammatical but unstructured, that prevent well-formed XML from marking text for readily available meaning. Proposes a solution to mark meaning in free text that is consistent with the intended simplicity of XML versus SGML. (Author/LRW)

  4. Semantic Video Summarization Using Mutual Reinforcement Principle and Shot Arrangement Patterns

    E-print Network

    Lyu, Michael R.

    Semantic Video Summarization Using Mutual Reinforcement Principle and Shot Arrangement Patterns Shi a novel semantic video summarization framework, which generates video skimmings that guarantee both the balanced content coverage and the visual coherence. First, we collect video semantic information

  5. Efficient Summarization Based On Categorized Keywords Christos Bouras Vassilis Poulopoulos Vassilis Tsogkas

    E-print Network

    Efficient Summarization Based On Categorized Keywords Christos Bouras Vassilis Poulopoulos Vassilis efficient summary and on the other hand when the categorization procedure becomes too overloaded, the summarized articles can be used in order to categorize the article more efficiently. Moreover this paper

  6. Red-Tide Research Summarized to 1964 Including an Annotated Bibliography

    E-print Network

    Red-Tide Research Summarized to 1964 Including an Annotated Bibliography. By George A., Harold E. Crowther, Acting Director. Contents: historical; general conditions during red-tide outbreaks; temperature; salinity; rainfall; wind; light

  7. Texting on the Move

    MedlinePLUS

    ... fatal crash. When people text while behind the wheel, they're focusing their attention — and often their ... of alcohol or drugs. Texting from behind the wheel is against the law in 41 states and ...

  8. Automatic Calibration Systems

    NASA Technical Reports Server (NTRS)

    Ferris, A. T.; Edwards, S. F.; Stewart, W. F.; Mason, D. R. J.; Finley, T. D.; Williams, H. E.

    1982-01-01

    A continuous requirement exists for calibration and environmental testing of instruments in use at a multitude of test facilities at Langley Research Center. This brief summarizes several automated systems available for calibration of research instruments, including: six-component balance, multimeter, amplifier, pyrometer, voltage-controlled oscillator, pressure transducer, and accelerometer.

  9. Automatic transmission control method

    SciTech Connect

    Hasegawa, H.; Ishiguro, T.

    1989-07-04

    This patent describes a method of controlling an automatic transmission of an automotive vehicle. The transmission has a gear train which includes a brake for establishing a first lowest speed of the transmission, the brake acting directly on a ring gear which meshes with a pinion, the pinion meshing with a sun gear in a planetary gear train, the ring gear connected with an output member, the sun gear being engageable and disengageable with an input member of the transmission by means of a clutch. The method comprises the steps of: detecting that a shift position of the automatic transmission has been shifted to a neutral range; thereafter introducing hydraulic pressure to the brake if present vehicle velocity is below a predetermined value, whereby the brake is engaged to establish the first lowest speed; and exhausting hydraulic pressure from the brake if present vehicle velocity is higher than a predetermined value, whereby the brake is disengaged.
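
    Read as a control procedure, the claimed method reduces to a small decision rule; the sketch below paraphrases it with assumed signal names and a boolean brake command, purely for illustration.

        def first_speed_brake_command(shift_position, vehicle_velocity, velocity_threshold):
            """
            Returns True to apply hydraulic pressure to the first-speed brake,
            False to exhaust it, or None when the rule does not apply.
            Mirrors the claim: after a shift into the neutral range, engage the
            brake only when the vehicle is slower than the threshold, otherwise
            release it.
            """
            if shift_position != "neutral":
                return None
            return vehicle_velocity < velocity_threshold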

  10. Automatic vehicle monitoring

    NASA Technical Reports Server (NTRS)

    Bravman, J. S.; Durrani, S. H.

    1976-01-01

    Automatic vehicle monitoring systems are discussed. In a baseline system for highway applications, each vehicle obtains position information through a Loran-C receiver in rural areas and through a 'signpost' or 'proximity' type sensor in urban areas; the vehicle transmits this information to a central station via a communication link. In an advance system, the vehicle carries a receiver for signals emitted by satellites in the Global Positioning System and uses a satellite-aided communication link to the central station. An advanced railroad car monitoring system uses car-mounted labels and sensors for car identification and cargo status; the information is collected by electronic interrogators mounted along the track and transmitted to a central station. It is concluded that automatic vehicle monitoring systems are technically feasible but not economically feasible unless a large market develops.

  11. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  12. Automatism and driving offences.

    PubMed

    Rumbold, John

    2013-10-01

    Automatism is a rarely used defence, but it is particularly used for driving offences because many are strict liability offences. Medical evidence is almost always crucial to argue the defence, and it is important to understand the bars that limit the use of automatism so that the important medical issues can be identified. The issue of prior fault is an important public safeguard to ensure that reasonable precautions are taken to prevent accidents. The total loss of control definition is more problematic, especially with disorders of more gradual onset like hypoglycaemic episodes. In these cases the alternative of 'effective loss of control' would be fairer. This article explores several cases, how the criteria were applied to each, and the types of medical assessment required. PMID:24112330

  13. Automatic digital image registration

    NASA Technical Reports Server (NTRS)

    Goshtasby, A.; Jain, A. K.; Enslin, W. R.

    1982-01-01

    This paper introduces a general procedure for automatic registration of two images which may have translational, rotational, and scaling differences. This procedure involves (1) segmentation of the images, (2) isolation of dominant objects from the images, (3) determination of corresponding objects in the two images, and (4) estimation of transformation parameters using the center of gravities of objects as control points. An example is given which uses this technique to register two images which have translational, rotational, and scaling differences.

  14. Automatic Generation of Textual Summaries from Neonatal Intensive Care Data

    E-print Network

    Reiter, Ehud

    of a graphical one (Law et al., 2005). In this GraphVsText experiment, forty nurses and doctors with different with texts. These results motivated the BabyTalk1 project whose goal is the automatic generation of texts step in the BabyTalk project, a prototype is being developed which will generate a textual summary

  15. Motor automaticity in Parkinson's disease.

    PubMed

    Wu, Tao; Hallett, Mark; Chan, Piu

    2015-10-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  16. The Perfect Text.

    ERIC Educational Resources Information Center

    Russo, Ruth

    1998-01-01

    A chemistry teacher describes the elements of the ideal chemistry textbook. The perfect text is focused and helps students draw a coherent whole out of the myriad fragments of information and interpretation. The text would show chemistry as the central science necessary for understanding other sciences and would also root chemistry firmly in the…

  17. Solar Energy Project: Text.

    ERIC Educational Resources Information Center

    Tullock, Bruce, Ed.; And Others

    The text is a compilation of background information which should be useful to teachers wishing to obtain some technical information on solar technology. Twenty sections are included which deal with topics ranging from discussion of the sun's composition to the legal implications of using solar energy. The text is intended to provide useful…

  18. Computational Linguistics for Metadata Building: Aggregating Text Processing Technologies for Enhanced Image Access

    E-print Network

    Murphy, Robert F.

    applies text mining using computational linguistic techniques to automatically extract, categorizeComputational Linguistics for Metadata Building: Aggregating Text Processing Technologies difficult. Studies indicate that current cataloging practices are insufficient for accommodating this volume

  19. Improving Robust Domain Independent Summarization Jim Cowie, Eugene Ludovik, Hugo Molina-Salgado

    E-print Network

    Improving Robust Domain Independent Summarization Jim Cowie, Eugene Ludovik, Hugo Molina-Salgado Dept. 3CRL, Box 30001, NMSU, Las Cruces, NM 88003, USA (jcowie, eugene, hsalgado)@crl.nmsu.edu Abstract

  20. Phenotype-genotype association grid: a convenient method for summarizing multiple association analyses

    E-print Network

    Levy, Daniel

    Background: High-throughput genotyping generates vast amounts of data for analysis; results can be difficult to summarize succinctly. A single project may involve genotyping many genes with multiple variants per gene and ...

  1. Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation

    E-print Network

    Kan, Min-Yen

    Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation: content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show

  2. Mining for Surprise Events within Text Streams

    SciTech Connect

    Whitney, Paul D.; Engel, David W.; Cramer, Nicholas O.

    2009-04-30

    This paper summarizes algorithms and analysis methodology for mining the evolving content in text streams. Text streams include news, press releases from organizations, speeches, Internet blogs, etc. These data are a fundamental source for detecting and characterizing strategic intent of individuals and organizations as well as for detecting abrupt or surprising events within communities. Specifically, an analyst may need to know if and when the topic within a text stream changes. Much of the current text feature methodology is focused on understanding and analyzing a single static collection of text documents. Corresponding analytic activities include summarizing the contents of the collection, grouping the documents based on similarity of content, and calculating concise summaries of the resulting groups. The approach reported here focuses on taking advantage of the temporal characteristics in a text stream to identify relevant features (such as change in content), and also on the analysis and algorithmic methodology to communicate these characteristics to a user. We present a variety of algorithms for detecting essential features within a text stream. A critical finding is that the characteristics used to identify features in a text stream are uncorrelated with the characteristics used to identify features in a static document collection. Our approach for communicating the information back to the user is to identify feature (word/phrase) groups. These resulting algorithms form the basis of developing software tools for a user to analyze and understand the content of text streams. We present analysis using both news information and abstracts from technical articles, and show how these algorithms provide understanding of the contents of these text streams.
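
    A minimal sketch of one way to flag abrupt topic change in a stream: term distributions in adjacent time windows are compared with Jensen-Shannon divergence, and spikes in the resulting series mark candidate surprise events. The windowing, tokenization, and divergence choice are assumptions, not the algorithms reported in the paper.

        import math
        from collections import Counter

        def term_distribution(docs):
            """Unigram distribution over a window of documents (whitespace tokens)."""
            counts = Counter(w for doc in docs for w in doc.lower().split())
            total = sum(counts.values()) or 1
            return {w: c / total for w, c in counts.items()}

        def js_divergence(p, q):
            """Jensen-Shannon divergence between two term distributions."""
            m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in set(p) | set(q)}
            def kl(a):
                return sum(a[w] * math.log2(a[w] / m[w]) for w in a if a[w] > 0)
            return 0.5 * kl(p) + 0.5 * kl(q)

        def change_scores(windows):
            """windows: list of document lists, one per time slice; one score per step."""
            dists = [term_distribution(w) for w in windows]
            return [js_divergence(dists[i - 1], dists[i]) for i in range(1, len(dists))]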

  3. Toponym Resolution in Text 

    E-print Network

    Leidner, Jochen Lothar

    2007-06-26

    Background. In the area of Geographic Information Systems (GIS), a shared discipline between informatics and geography, the term geo-parsing is used to describe the process of identifying names in text, which in computational ...

  4. Equipment Use

    E-print Network

    Gibbons, Megan

    Equipment Use: The library has laptops and other equipment for checkout. Personal access times: Harbert 301, Humanities Computer Lab 118, Library ARC, Olin 104, Olin 201, Stevens Science

  5. Terminology extraction from medical texts in Polish

    PubMed Central

    2014-01-01

    Background Hospital documents contain free text describing the most important facts relating to patients and their illnesses. These documents are written in a specific language containing medical terminology related to hospital treatment. Their automatic processing can help in verifying the consistency of hospital documentation and obtaining statistical data. To perform this task we need information on the phrases we are looking for. At the moment, clinical Polish resources are sparse. The existing terminologies, such as Polish Medical Subject Headings (MeSH), do not provide sufficient coverage for clinical tasks. It would be helpful therefore if it were possible to automatically prepare, on the basis of a data sample, an initial set of terms which, after manual verification, could be used for the purpose of information extraction. Results Using a combination of linguistic and statistical methods for processing over 1200 children's hospital discharge records, we obtained a list of single and multiword terms used in hospital discharge documents written in Polish. The phrases are ordered according to their presumed importance in domain texts, as measured by the frequency of use of a phrase and the variety of its contexts. The evaluation showed that the automatically identified phrases cover about 84% of terms in domain texts. At the top of the ranked list, only 4% out of 400 terms were incorrect, while out of the final 200, 20% of expressions were either not domain-related or syntactically incorrect. We also observed that 70% of the obtained terms are not included in the Polish MeSH. Conclusions Automatic terminology extraction can give results which are of a quality high enough to be taken as a starting point for building domain-related terminological dictionaries or ontologies. This approach can be useful for preparing terminological resources for very specific subdomains for which no relevant terminologies already exist. The evaluation performed showed that none of the tested ranking procedures were able to filter out all improperly constructed noun phrases from the top of the list. Careful choice of noun phrases is crucial to the usefulness of the created terminological resource in applications such as lexicon construction or acquisition of semantic relations from texts. PMID:24976943
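
    A minimal sketch of the ranking idea described above: candidate phrases are scored by how often they occur and by how varied their contexts are, so that frequent phrases seen in many different contexts rise to the top. The exact weighting used for the Polish clinical corpus is not reproduced here; the formula below is an illustrative assumption.

        import math

        def rank_terms(candidate_occurrences):
            """
            candidate_occurrences: dict mapping a candidate phrase to the list of
            contexts (e.g., neighbouring words) in which it was observed.
            Score grows with frequency and with the number of distinct contexts.
            """
            scores = {}
            for phrase, contexts in candidate_occurrences.items():
                freq = len(contexts)
                variety = len(set(contexts))
                scores[phrase] = freq * math.log(1 + variety)
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)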

  6. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted (Lafayette, CA)

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  7. Theory and implementation of summarization: Improving sensor interpretation for spacecraft operations

    NASA Astrophysics Data System (ADS)

    Swartwout, Michael Alden

    New paradigms in space missions require radical changes in spacecraft operations. In the past, operations were insulated from competitive pressures of cost, quality and time by system infrastructures, technological limitations and historical precedent. However, modern demands now require that operations meet competitive performance goals. One target for improvement is the telemetry downlink, where significant resources are invested to acquire thousands of measurements for human interpretation. This cost-intensive method is used because conventional operations are not based on formal methodologies but on experiential reasoning and incrementally adapted procedures. Therefore, to improve the telemetry downlink it is first necessary to invent a rational framework for discussing operations. This research explores operations as a feedback control problem, develops the conceptual basis for the use of spacecraft telemetry, and presents a method to improve performance. The method is called summarization, a process to make vehicle data more useful to operators. Summarization enables rational trades for telemetry downlink by defining and quantitatively ranking these elements: all operational decisions, the knowledge needed to inform each decision, and all possible sensor mappings to acquire that knowledge. Summarization methods were implemented for the Sapphire microsatellite; conceptual health management and system models were developed and a degree-of-observability metric was defined. An automated tool was created to generate summarization methods from these models. Methods generated using a Sapphire model were compared against the conventional operations plan. Summarization was shown to identify the key decisions and isolate the most appropriate sensors. Secondly, a form of summarization called beacon monitoring was experimentally verified. Beacon monitoring automates the anomaly detection and notification tasks and migrates these responsibilities to the space segment. A set of experiments using Sapphire demonstrated significant cost and time savings compared to conventional operations. Summarization is based on rational concepts for defining and understanding operations. Therefore, it enables additional trade studies that were formerly not possible and also can form the basis for future detailed research into spacecraft operations.

  8. Text as Image.

    ERIC Educational Resources Information Center

    Woal, Michael; Corn, Marcia Lynn

    As electronically mediated communication becomes more prevalent, print is regaining the original pictorial qualities which graphemes (written signs) lost when primitive pictographs (or picture writing) and ideographs (simplified graphemes used to communicate ideas as well as to represent objects) evolved into first written, then printed, texts of…

  10. NUTRIENT MANAGEMENT

    E-print Network

    Minnesota, University of

    NUTRIENT MANAGEMENT, AG-NM-1501 (2015). Fertilizing Corn Grown on Irrigated Sandy Soils … of the soil, the cost of N fertilizer, the price received for corn, and the grower's attitude towards risk … research conducted since 2007 on irrigated sandy soils. The corn market and fertilizer costs do affect

  10. Polymorphous Perversity in Texts

    ERIC Educational Resources Information Center

    Johnson-Eilola, Johndan

    2012-01-01

    Here's the tricky part: If we teach ourselves and our students that texts are made to be broken apart, remixed, remade, do we lose the polymorphous perversity that brought us pleasure in the first place? Does the pleasure of transgression evaporate when the borders are opened?

  11. Automatic Coal-Mining System

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1985-01-01

    Coal cutting and removal done with minimal hazard to people. Automatic coal mine cutting, transport and roof-support movement all done by automatic machinery. Exposure of people to hazardous conditions reduced to inspection tours, maintenance, repair, and possibly entry mining.

  12. Consistent and Automatic Replica Regeneration

    E-print Network

    Yu, Haifeng

    Consistent and Automatic Replica Regeneration. HAIFENG YU, Intel Research Pittsburgh/Carnegie Mellon … the availability of large-scale distributed systems require automatic replica regeneration, that is, creating new replicas in response to replica failures. A major challenge to regeneration is maintaining consistency when

  13. Automatic Structures — Recent Results and Open Questions

    NASA Astrophysics Data System (ADS)

    Stephan, Frank

    2015-06-01

    Regular languages are languages recognised by finite automata; automatic structures are a generalisation of regular languages where one also uses automatic relations (which are relations recognised by synchronous finite automata) and automatic functions (which are functions whose graph is an automatic relation). Functions and relations first-order definable from other automatic functions and relations are again automatic. Automatic functions coincide with the functions computed by position-faithful one-tape Turing machines in linear time. This survey addresses recent results and open questions on topics related to automatic structures: How difficult is the isomorphism problem for various types of automatic structures? Which groups are automatic? When are automatic groups Abelian or orderable? How can one overcome some of the limitations to represent rings and fields by weakening the automaticity requirements of a structure?

  14. Hierarchical Automatic Function Definition in Genetic Programming

    E-print Network

    Fernandez, Thomas

    Hierarchical Automatic Function Definition in Genetic Programming John R. Koza Computer Science. This paper describes two extensions to genetic programming, called "automatic" function definition and "hierarchical automatic" function definition, wherein functions that might be useful in solving a problem

  15. Clinicians' evaluation of computer-assisted medication summarization of electronic medical records.

    PubMed

    Zhu, Xinxin; Cimino, James J

    2015-04-01

    Each year thousands of patients die of avoidable medication errors. When a patient is admitted to, transferred within, or discharged from a clinical facility, clinicians should review previous medication orders, current orders and future plans for care, and reconcile differences if there are any. If medication reconciliation is not accurate and systematic, medication errors such as omissions, duplications, dosing errors, or drug interactions may occur and cause harm. Computer-assisted medication applications have shown promise as an intervention to reduce medication summarization inaccuracies and thus avoidable medication errors. In this study, a computer-assisted medication summarization application, designed to abstract and represent multi-source time-oriented medication data, was introduced to assist clinicians with their medication reconciliation processes. An evaluation study was carried out to assess its clinical usefulness and analyze the potential impact of such an application. Both quantitative and qualitative methods were applied to measure clinicians' performance efficiency and inaccuracy in the medication summarization process with and without the intervention of the computer-assisted medication application. Clinicians' feedback indicated the feasibility of integrating such a medication summarization tool into the clinical practice workflow as a complementary addition to existing electronic health record systems. The results of the study showed the potential to improve efficiency and reduce inaccuracy in clinician performance of medication summarization, which could in turn improve care efficiency, quality of care, and patient safety. PMID:24393492

  16. Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.

    2003-01-01

    Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…

  17. The Texting Principal

    ERIC Educational Resources Information Center

    Kessler, Susan Stone

    2009-01-01

    The author was appointed principal of a large, urban comprehensive high school in spring 2008. One of the first things she had to figure out was how she would develop a connection with her students when there were so many of them--nearly 2,000--and only one of her. Texts may be exchanged more quickly than having a conversation over the phone,…

  18. Automatic alkaloid removal system.

    PubMed

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, for the purpose of removing alkaloid toxicity from Dioscorea hispida (DH) tuber. DH is a poisonous plant; scientific studies have shown that its tubers contain a toxic alkaloid constituent, dioscorine, and the tubers can only be consumed after the poison is removed. In this experiment, the tubers first need to be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump on, creating a turbulent wave of water in the machine tank. The water stops automatically when the outlet solenoid valve is triggered. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. The controller then automatically triggers the inlet solenoid valve, and fresh water flows into the machine tank until it reaches the desired level, as determined by an ultrasonic sensor. This process is repeated for 7 h, after which a positive result is achieved, shown to be significant according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate or time. These parameters give results close to those of the control water, and the toxin is assumed to be fully removed when the pH of the DH powder wash water is near that of the control water. The pH of the control water is about 5.3, the water from this experimental process is 6.0, and before running the machine the pH of the contaminated water is about 3.8, which is too acidic. This automated machine saves time in removing toxicity from DH compared with the traditional method, while requiring less observation by the user. PMID:24783795

  19. Happiness in texting times

    PubMed Central

    Hevey, David; Hand, Karen; MacLachlan, Malcolm

    2015-01-01

    Assessing national levels of happiness has become an important research and policy issue in recent years. We examined happiness and satisfaction in Ireland using phone text messaging to collect large-scale longitudinal data from 3,093 members of the general Irish population. For six consecutive weeks, participants’ happiness and satisfaction levels were assessed. For four consecutive weeks (weeks 2–5) a different random third of the sample got feedback on the previous week’s mean happiness and satisfaction ratings. Text messaging proved a feasible means of assessing happiness and satisfaction, with almost three quarters (73%) of participants completing all assessments. Those who received feedback on the previous week’s mean ratings were eight times more likely to complete the subsequent assessments than those not receiving feedback. Providing such feedback data on mean levels of happiness and satisfaction did not systematically bias subsequent ratings either toward or away from these normative anchors. Texting is a simple and effective means to collect population level happiness and satisfaction data. PMID:26441804

  20. Text Mining the History of Medicine.

    PubMed

    Thompson, Paul; Batista-Navarro, Riza Theresa; Kontonatsios, Georgios; Carter, Jacob; Toon, Elizabeth; McNaught, John; Timmermann, Carsten; Worboys, Michael; Ananiadou, Sophia

    2016-01-01

    Historical text archives constitute a rich and diverse source of information, which is becoming increasingly readily accessible, due to large-scale digitisation efforts. However, it can be difficult for researchers to explore and search such large volumes of data in an efficient manner. Text mining (TM) methods can help, through their ability to recognise various types of semantic information automatically, e.g., instances of concepts (places, medical conditions, drugs, etc.), synonyms/variant forms of concepts, and relationships holding between concepts (which drugs are used to treat which medical conditions, etc.). TM analysis allows search systems to incorporate functionality such as automatic suggestions of synonyms of user-entered query terms, exploration of different concepts mentioned within search results or isolation of documents in which concepts are related in specific ways. However, applying TM methods to historical text can be challenging, owing to differences and evolution in vocabulary, terminology, language structure and style, compared to more modern text. In this article, we present our efforts to overcome the various challenges faced in the semantic analysis of published historical medical text dating back to the mid 19th century. Firstly, we used evidence from diverse historical medical documents from different periods to develop new resources that provide accounts of the multiple, evolving ways in which concepts, their variants and relationships amongst them may be expressed. These resources were employed to support the development of a modular processing pipeline of TM tools for the robust detection of semantic information in historical medical documents with varying characteristics. We applied the pipeline to two large-scale medical document archives covering wide temporal ranges as the basis for the development of a publicly accessible semantically-oriented search system. The novel resources are available for research purposes, while the processing pipeline and its modules may be used and configured within the Argo TM platform. PMID:26734936
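
    As a minimal sketch of the dictionary-based part of such a pipeline (not the project's actual lexicons or the Argo components), the snippet below tags concept mentions and their variant forms in a sentence; the concept entries are invented examples.

      import re

      # Hypothetical concept entries with variant forms (not the project's lexicons).
      CONCEPTS = {
          "tuberculosis": ["tuberculosis", "phthisis", "consumption"],
          "cholera": ["cholera", "cholera morbus"],
      }

      def tag_concepts(text):
          hits = []
          for concept, variants in CONCEPTS.items():
              for v in sorted(variants, key=len, reverse=True):  # prefer longer variants
                  for m in re.finditer(r"\b" + re.escape(v) + r"\b", text, re.IGNORECASE):
                      hits.append((m.start(), m.end(), m.group(0), concept))
          # overlapping matches are kept here; a real pipeline would resolve them
          return sorted(hits)

      print(tag_concepts("The patient suffered from consumption; cholera morbus was also feared."))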

  1. Publishing Historical Texts on the Semantic Web --A Case Study

    E-print Network

    Hyvönen, Eero

    archived in libraries all over the world document history in an exhaustive way that has an undeniable value … or by manual work. As a result, libraries of historical materials exist relatively widely on the world wide web … names. As is stated in [5], the research of automatic text processing tends to focus on contemporary

  2. Bayesian Text Segmentation for Index Term Identification and Keyphrase Extraction

    E-print Network

    Newman, David

    ABSTRACT Automatically extracting terminology and index terms from scientific literature is useful … Segmentation model can be used to successfully extract nested terminology, outperforming previous methods for solving

  3. Extracting Verb-Noun Collocations from Text Jia Yan Jian

    E-print Network

    names, idioms, and terminology. Automatic extraction of monolingual and bilingual collocations … we describe a new method for extracting monolingual collocations. The method is based on statistical

  4. Computer illustrated texts

    NASA Astrophysics Data System (ADS)

    Harding, Robert D.

    1986-09-01

    A computer illustrated text (CIT) is a textbook in which software plays an integral part. This should be distinguished from a CAL package which aims to put instructional material and calculations, simulations, etc., all together on the computer, stressing the computing component in CAL (computer assisted learning). The CIT aims to combine the advantages of a book (e.g. the ability to browse) with the advantages of using a computer (e.g. the control by the user and the illustrative powers of computer graphics). The article describes the thinking behind some CITs, gives examples drawn from the first three, and compares their styles.

  5. TRMM Gridded Text Products

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2007-01-01

    NASA's Tropical Rainfall Measuring Mission (TRMM) has many products that contain instantaneous or gridded rain rates often among many other parameters. However, these products because of their completeness can often seem intimidating to users just desiring surface rain rates. For example one of the gridded monthly products contains well over 200 parameters. It is clear that if only rain rates are desired, this many parameters might prove intimidating. In addition, for many good reasons these products are archived and currently distributed in HDF format. This also can be an inhibiting factor in using TRMM rain rates. To provide a simple format and isolate just the rain rates from the many other parameters, the TRMM product created a series of gridded products in ASCII text format. This paper describes the various text rain rate products produced. It provides detailed information about parameters and how they are calculated. It also gives detailed format information. These products are used in a number of applications with the TRMM processing system. The products are produced from the swath instantaneous rain rates and contain information from the three major TRMM instruments: radar, radiometer, and combined. They are simple to use, human readable, and small for downloading.

  6. Injury narrative text classification using factorization model

    PubMed Central

    2015-01-01

    Narrative text is a useful way of identifying injury circumstances from the routine emergency department data collections. Automatically classifying narratives based on machine learning techniques is a promising technique, which can consequently reduce the tedious manual classification process. Existing works focus on using Naive Bayes which does not always offer the best performance. This paper proposes the Matrix Factorization approaches along with a learning enhancement process for this task. The results are compared with the performance of various other classification approaches. The impact on the classification results from the parameters setting during the classification of a medical text dataset is discussed. With the selection of right dimension k, Non Negative Matrix Factorization-model method achieves 10 CV accuracy of 0.93. PMID:26043671
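
    To make the factorization idea concrete, here is a minimal sketch (not the paper's model or its learning-enhancement step) that derives NMF features from TF-IDF vectors of injury narratives and trains a simple classifier on top; the toy narratives and labels are invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Invented toy narratives and labels, for illustration only.
      narratives = [
          "fell from ladder while painting ceiling",
          "slipped on wet floor in kitchen",
          "cut finger with kitchen knife",
          "fell down stairs carrying boxes",
      ]
      labels = ["fall", "fall", "cut", "fall"]

      # TF-IDF -> non-negative matrix factorization -> classifier on the factor weights.
      model = make_pipeline(
          TfidfVectorizer(),
          NMF(n_components=2, init="nndsvda", max_iter=500),
          LogisticRegression(max_iter=1000),
      )
      model.fit(narratives, labels)
      print(model.predict(["worker fell from scaffolding"]))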

  7. Attaining Automaticity in the Visual Numerosity Task is Not Automatic

    PubMed Central

    Speelman, Craig P.; Muller Townsend, Katrina L.

    2015-01-01

    This experiment is a replication of experiments reported by Lassaline and Logan (1993) using the visual numerosity task. The aim was to replicate the transition from controlled to automatic processing reported by Lassaline and Logan (1993), and to examine the extent to which this result, reported with average group results, can be observed in the results of individuals within a group. The group results in this experiment did replicate those reported by Lassaline and Logan (1993); however, one half of the sample did not attain automaticity with the task, and one-third did not exhibit a transition from controlled to automatic processing. These results raise questions about the pervasiveness of automaticity, and the interpretation of group means when examining cognitive processes. PMID:26635658

  8. A Novel Video Summarization Framework for Document Preparation and Archival Applications

    E-print Network

    Lyu, Michael R.

    A Novel Video Summarization Framework for Document Preparation and Archival Applications. Shi Lu … of network bandwidth and high-capacity storage devices, videos have become an important way of communication in the aerospace industry and many other entities. However, browsing and managing huge video

  9. Wearable Hand Activity Recognition for Event Summarization W.W. Mayol D.W. Murray

    E-print Network

    Murray, David

    Wearable Hand Activity Recognition for Event Summarization. W.W. Mayol, D.W. Murray, Department … develop a first step towards the recognition of hand activity by detecting objects subject to manip… from hand activity without requiring that the wearer is explicit as in gesture-based interaction. Our

  10. One-Class Learning and Concept Summarization for Vaguely Labeled Data Streams*

    E-print Network

    Ding, Wei

    require the labeling of the training samples, where finding labeling information for the training data … be a big advantage if the learning does not require any contrast samples other than the positive instances. One-Class Learning and Concept Summarization for Vaguely Labeled Data Streams*. Xingquan Zhu, Dept

  11. Summarizing the Evidence on the International Trade in Illegal Wildlife Gail Emilia Rosen

    E-print Network

    Smith, Kate

    Summarizing the Evidence on the International Trade in Illegal Wildlife. Gail Emilia Rosen. The legal trade, however, does not represent the entirety of the international market for wildlife. Little … Original Contribution © 2010 International Association for Ecology and Health. The global trade

  12. VIDEO SUMMARIZATION BY SPATIAL-TEMPORAL GRAPH OPTIMIZATION Shi Lu, Michael R. Lyu, Irwin King

    E-print Network

    Lyu, Michael R.

    VIDEO SUMMARIZATION BY SPATIAL-TEMPORAL GRAPH OPTIMIZATION. Shi Lu, Michael R. Lyu, Irwin King … SAR. {slu, lyu, king}@cse.cuhk.edu.hk ABSTRACT In this paper we present a novel approach for video summarization … temporal content coverage and visual coherence of the video summary. The approach has three stages. First

  13. CLASSIFICATION OF SUMMARIZED VIDEOS USING HIDDEN MARKOV MODELS ON COMPRESSED CHROMATICITY

    E-print Network

    Drew, Mark S.

    1 CLASSIFICATION OF SUMMARIZED VIDEOS USING HIDDEN MARKOV MODELS ON COMPRESSED CHROMATICITY Science Simon Fraser University Vancouver, B.C., CANADA ABSTRACT As digital libraries and video databases grow, we need methods to assist us in the synthesis and analysis of digital video. Since

  14. Medical Volume Image Summarization Feng Ding Hao Li Yuan Cheng Wee Kheng Leow

    E-print Network

    Leow, Wee Kheng

    Medical Volume Image Summarization Feng Ding Hao Li Yuan Cheng Wee Kheng Leow Dept. of Computer images, there is now an explosion of medical images in any moderate-sized hospital. Access to medical images provided by standard medical databases is very limited. Therefore, there is an increasing interest

  15. Legal Provisions on Expanded Functions for Dental Hygienists and Assistants. Summarized by State. Second Edition.

    ERIC Educational Resources Information Center

    Johnson, Donald W.; Holz, Frank M.

    This second edition summarizes and interprets, from the pertinent documents of each state, those provisions which establish and regulate the tasks of hygienists and assistants, with special attention given to expanded functions. Information is updated for all jurisdictions through the end of 1973, based chiefly on materials received in response to…

  16. Simplification of Patent Claim Sentences for their Multilingual Paraphrasing and Summarization

    E-print Network

    Simplification of Patent Claim Sentences for their Multilingual Paraphrasing and Summarization. Joe … patent writing regulations, patent claims must be rendered in a single sentence. As a result, sentences with more than 250 words are not uncommon. In order to achieve an easier comprehension of patent

  17. A new hybrid summarizer based on Vector Space model Statistical Physics and Linguistics

    E-print Network

    Avignon et des Pays de Vaucluse, Université de

    Torres-Moreno, Institute for Applied Linguistics, Universitat Pompeu Fabra, Barcelona, España. A new hybrid summarizer based on Vector Space model, Statistical Physics and Linguistics. Iria da … or linguistics, but only a few of them combining both techniques. Our idea is that to reach a good summary we

  18. Effects on Science Summarization of a Reading Comprehension Intervention for Adolescents with Behavior and Attention Disorders

    ERIC Educational Resources Information Center

    Rogevich, Mary E.; Perin, Dolores

    2008-01-01

    Sixty-three adolescent boys with behavioral disorders (BD), 31 of whom had comorbid attention deficit hyperactivity disorder (ADHD), participated in a self-regulated strategy development intervention called Think Before Reading, Think While Reading, Think After Reading, With Written Summarization (TWA-WS). TWA-WS adapted Linda Mason's TWA…

  19. SCHOOL LAW OF MONTANA 20-4-301. Duties of the teacher (summarized)

    E-print Network

    Maxwell, Bruce D.

    SCHOOL LAW OF MONTANA 20-4-301. Duties of the teacher (summarized): Conform to and enforce - definition of corporal punishment - penalty - defense. (l) "A teacher or principal has the authority to hold district may not inflict or cause to be inflicted corporal punishment on a pupil." (4) A teacher has

  20. Utilizing Marzano's Summarizing and Note Taking Strategies on Seventh Grade Students' Mathematics Performance

    ERIC Educational Resources Information Center

    Jeanmarie-Gardner, Charmaine

    2013-01-01

    A quasi-experimental research study was conducted that investigated the academic impact of utilizing Marzano's summarizing and note taking strategies on mathematic achievement. A sample of seventh graders from a middle school located on Long Island's North Shore was tested to determine whether significant differences existed in mathematic test…

  1. Towards Coherent Multi-Document Summarization Janara Christensen, Mausam, Stephen Soderland, Oren Etzioni

    E-print Network

    Mausam

    Organizing Committee, asked the Foreign Ministry to urge the Saudi government to reconsider withdrawing its … representation is a graph that approximates the discourse relations across sentences based on indicators … of multi-document summarization (MDS) is to produce high quality summaries of collections of related

  2. Automatic Command Sequence Generation

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladded, Roy; Khanampompan, Teerapat

    2007-01-01

    Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major multi-mission mission phases including the cruise, aerobraking, mapping/science, and relay mission phases. Autogen is a Perl script, which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen, automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequencing generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen including the MRO adaptation for the cruise mission phase, and was also used for development of the aerobraking and mapping mission phases for MRO.

  3. Automatic transmission system

    SciTech Connect

    Ha, J.S.

    1989-04-25

    An automatic transmission system is described for use in vehicles, which comprises: a clutch wheel containing a plurality of concentric rings of decreasing diameter, the clutch wheel being attached to an engine of the vehicle; a plurality of clutch gears corresponding in size to the concentric rings, the clutch gears being adapted to selectively and frictionally engage with the concentric rings of the clutch wheel; an accelerator pedal and a gear selector, the accelerator pedals being connected to one end of a substantially U-shaped frame member, the other end of the substantially U-shaped frame member selectively engaging with one end of one of wires received in a pair of apertures of the gear selector; a plurality of drive gear controllers and a reverse gear controller; means operatively connected with the gear selector and the plurality of drive gear controllers and reverse gear controller for selectively engaging one of the drive and reverse gear controllers depending upon the position of the gear selector; and means for individually connecting the drive and reverse gear controllers with the corresponding clutch gears whereby upon the selection of the gear selector, friction engagement is achieved between the clutch gear and the clutch wheels for rotating the wheel in the forward or reverse direction.

  4. Automatic Welding System

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Robotic welding has been of interest to industrial firms because it offers higher productivity at lower cost than manual welding. There are some systems with automated arc guidance available, but they have disadvantages, such as limitations on types of materials or types of seams that can be welded; susceptibility to stray electrical signals; restricted field of view; or tendency to contaminate the weld seam. Wanting to overcome these disadvantages, Marshall Space Flight Center, aided by Hayes International Corporation, developed a system that uses closed-circuit TV signals for automatic guidance of the welding torch. NASA granted a license to Combined Technologies, Inc. for commercial application of the technology. They developed a refined and improved arc guidance system. CTI, in turn, licensed the Merrick Corporation, also of Nashville, for marketing and manufacturing of the new system, called the CT2 Optical Tracker. CT2 is a non-contacting system that offers adaptability to a broader range of welding jobs and provides greater reliability in high speed operation. It is extremely accurate and can travel at speeds of up to 150 inches per minute.

  5. Electronically controlled automatic transmission

    SciTech Connect

    Ohkubo, M.; Shiba, H.; Nakamura, K.

    1989-03-28

    This patent describes an electronically controlled automatic transmission having a manual valve working in connection with a manual shift lever, shift valves operated by solenoid valves which are driven by an electronic control circuit previously memorizing shift patterns, and a hydraulic circuit controlled by these manual valve and shift valves for driving brakes and a clutch in order to change speed. Shift patterns of 2-range and L-range, in addition to a shift pattern of D-range, are memorized previously in the electronic control circuit, an operation switch is provided which changes the shift pattern of the electronic control circuit to any shift pattern among those of D-range, 2-range and L-range at time of the manual shift lever being in a D-range position, a releasable lock mechanism is provided which prevents the manual shift lever from entering 2-range and L-range positions, and the hydraulic circuit is set to a third speed mode when the manual shift lever is in the D-range position. The circuit is set to a second speed mode when it is in the 2-range position, and the circuit is set to a first speed mode when it is in the L-range position, respectively, in case where the shift valves are not working.

  6. Clothes Dryer Automatic Termination Evaluation

    SciTech Connect

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  7. Incremental Evolutionary Methods for Automatic Programming of Robot Controllers

    E-print Network

    controller and robot building (Behavior-Based Robotics, BBR). Still, the complexity of programming … build up our experimental field through studies of experimental and educational robotics systems. Incremental Evolutionary Methods for Automatic Programming of Robot Controllers, Thesis

  8. Automatic Replies Outlook Web App User Guide

    E-print Network

    Calgary, University of

    Automatic Replies Outlook Web App User Guide. Automatic Replies (Out of Office) allows you to create a reply that is sent automatically when you are out of the office or on vacation. Use the following steps to use the Automatic Replies option in Outlook Web App (OWA). Turn On Automatic Reply: 1. Log On to Outlook Web App http://mail.ucalgary.ca 2. Click Options in the top right

  9. Automatic safety rod for reactors

    DOEpatents

    Germer, John H. (San Jose, CA)

    1988-01-01

    An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss of core flow. Actuation occurs either upon a sudden decrease in the core pressure drop or when the pressure drop falls below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  10. Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, pages 33–40, Portland, Oregon, June 23, 2011. © 2011 Association for Computational Linguistics

    E-print Network

    ://dammit.lt/wikistats Barack Obama Joe Biden White House Inauguration ... US Airways Flight 1549 Chesley Sullenberger Hudson … the articles into 3 clusters, {Barack Obama, Joe Biden, White House, Inauguration} which corresponds to the inauguration of Barack Obama, {US Airways Flight 1549, Chesley Sullenberger, Hudson River} which

  11. Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, pages 1–7, Portland, Oregon, June 23, 2011. © 2011 Association for Computational Linguistics

    E-print Network

    in the military (Duffy, 2008; Eovito, 2006). On US Navy ships, watchstanders (i.e., personnel who continuously monitor and respond to situation updates during a ship's operation, Stavridis and Girrier (2007)) … reason for this is that chat is a difficult medium to analyze: its characteristics make it difficult

  12. Reading Text While Driving

    PubMed Central

    Horrey, William J.; Hoffman, Joshua D.

    2015-01-01

    Objective In this study, we investigated how drivers adapt secondary-task initiation and time-sharing behavior when faced with fluctuating driving demands. Background Reading text while driving is particularly detrimental; however, in real-world driving, drivers actively decide when to perform the task. Method In a test track experiment, participants were free to decide when to read messages while driving along a straight road consisting of an area with increased driving demands (demand zone) followed by an area with low demands. A message was made available shortly before the vehicle entered the demand zone. We manipulated the type of driving demands (baseline, narrow lane, pace clock, combined), message format (no message, paragraph, parsed), and the distance from the demand zone when the message was available (near, far). Results In all conditions, drivers started reading messages (drivers’ first glance to the display) before entering or before leaving the demand zone but tended to wait longer when faced with increased driving demands. While reading messages, drivers looked more or less off road, depending on types of driving demands. Conclusions For task initiation, drivers avoid transitions from low to high demands; however, they are not discouraged when driving demands are already elevated. Drivers adjust time-sharing behavior according to driving demands while performing secondary tasks. Nonetheless, such adjustment may be less effective when total demands are high. Application This study helps us to understand a driver’s role as an active controller in the context of distracted driving and provides insights for developing distraction interventions. PMID:25850162

  13. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work on a project to design an automatic system for computer program documentation aids was carried out to determine what existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  14. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
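
    As an illustration of the metric-by-query idea, the sketch below loads a few RDF annotation triples and computes a precision-like figure with SPARQL via rdflib; the vocabulary (ex:Annotation, ex:system, ex:correct) is invented for the example and is not the benchmark's actual OWL schema.

      import rdflib

      # Invented mini-vocabulary; not the benchmark's actual OWL schema.
      DATA = """
      @prefix ex: <http://example.org/> .
      ex:a1 a ex:Annotation ; ex:system ex:minerA ; ex:correct true .
      ex:a2 a ex:Annotation ; ex:system ex:minerA ; ex:correct false .
      ex:a3 a ex:Annotation ; ex:system ex:minerA ; ex:correct true .
      """

      g = rdflib.Graph()
      g.parse(data=DATA, format="turtle")

      def count(query):
          # each query returns a single aggregate row
          return next(iter(g.query(query)))[0].toPython()

      PREFIX = "PREFIX ex: <http://example.org/>\n"
      total = count(PREFIX + "SELECT (COUNT(?a) AS ?n) WHERE { ?a a ex:Annotation ; ex:system ex:minerA . }")
      correct = count(PREFIX + "SELECT (COUNT(?a) AS ?n) WHERE { ?a a ex:Annotation ; ex:system ex:minerA ; ex:correct true . }")
      print("precision =", correct / total)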

  15. An evaluation of an automatic markup system

    SciTech Connect

    Taghva, K.; Condit, A.; Borsack, J.

    1995-04-01

    One predominant application of OCR is the recognition of full text documents for information retrieval. Modern retrieval systems exploit both the textual content of the document as well as its structure. The relationship between textual content and character accuracy has been the focus of recent studies. It has been shown that due to the redundancies in text, average precision and recall are not heavily affected by OCR character errors. What is not fully known is to what extent OCR devices can provide reliable information that can be used to capture the structure of the document. In this paper, the authors present a preliminary report on the design and evaluation of a system to automatically mark up technical documents, based on information provided by an OCR device. The device the authors use differs from traditional OCR devices in that it not only performs optical character recognition, but also provides detailed information about page layout, word geometry, and font usage. Their automatic markup program, which they call Autotag, uses this information, combined with dictionary lookup and content analysis, to identify structural components of the text. These include the document title, author information, abstract, sections, section titles, paragraphs, sentences, and de-hyphenated words. A visual examination of the hardcopy will be compared to the output of their markup system to determine its correctness.
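
    A minimal sketch of the kind of rule that such layout and font information enables is given below; the input record format (text, font size, bold flag) and the thresholds are hypothetical and do not correspond to Autotag or to any particular OCR device's output.

      # Hypothetical per-line records: (text, font size in points, bold flag).
      def markup(lines, body_size=10):
          tagged = []
          for text, font_size, bold in lines:
              if font_size >= body_size + 4:
                  tag = "title"
              elif bold or font_size >= body_size + 2:
                  tag = "section-title"
              else:
                  tag = "paragraph"
              tagged.append((tag, text))
          return tagged

      page = [("An Evaluation of an Automatic Markup System", 16, True),
              ("1 Introduction", 12, True),
              ("One predominant application of OCR is ...", 10, False)]
      for tag, text in markup(page):
          print("<%s> %s" % (tag, text))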

  16. Problem of Automatic Thesaurus Construction (K Voprosu Ob Avtomaticheskom Postroenii Tezarusa). Subject Country: USSR.

    ERIC Educational Resources Information Center

    Ivanova, I. S.

    With respect to automatic indexing and information retrieval, statistical analysis of word usages in written texts is finding broad application in the solution of a number of problems. One of these problems is compiling a thesaurus on a digital computer. Using two methods, a comparative experiment in automatic thesaurus construction is presented.…
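
    A minimal sketch of the general statistical idea, grouping words by the similarity of their co-occurrence profiles, is shown below; it illustrates the principle only and is not either of the two methods compared in the report.

      from collections import defaultdict
      import math

      def cooccurrence_vectors(texts, window=2):
          # count, for every word, how often each neighbour appears within the window
          vecs = defaultdict(lambda: defaultdict(int))
          for text in texts:
              tokens = text.lower().split()
              for i, w in enumerate(tokens):
                  for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                      if j != i:
                          vecs[w][tokens[j]] += 1
          return vecs

      def cosine(a, b):
          dot = sum(a[k] * b.get(k, 0) for k in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      texts = ["the engine requires regular maintenance",
               "the motor requires regular inspection"]
      v = cooccurrence_vectors(texts)
      print(cosine(v["engine"], v["motor"]))  # high similarity suggests near-synonyms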

  17. Automatic classification of citation function Simone Teufel Advaith Siddharthan Dan Tidhar

    E-print Network

    Teufel, Simone

    Automatic classification of citation function Simone Teufel Advaith Siddharthan Dan Tidhar Natural.Teufel,Advaith.Siddharthan,Dan.Tidhar}@cl.cam.ac.uk Abstract The automatic recognition of the rhetorical function of citations in scientific text has many … citation indexers. Citation function is defined as the author's reason for citing a given paper (e

  18. Proceedings of EACL '99 Encoding a Parallel Corpus for Automatic Terminology

    E-print Network

    Proceedings of EACL '99. Encoding a Parallel Corpus for Automatic Terminology Extraction. Johann … automatic terminology acquisition at the European Academy Bolzano. The main focus will be on encoding a text corpus … resources in all areas dealing with natural language processing in one form or another. Terminology is one

  19. Energy efficient video summarization and transmission over a slow fading wireless channel

    NASA Astrophysics Data System (ADS)

    Li, Zhu; Zhai, Fan; Katsaggelos, Aggelos K.; Pappas, Thrasyvoulos N.

    2005-03-01

    With the deployment of 2.5G/3G cellular network infrastructure and a large number of camera-equipped cell phones, the demand for video-enabled applications is high. However, for an uplink wireless channel, both the bandwidth and the battery energy capacity of a mobile phone are limited for video communication. These technical problems need to be effectively addressed before practical and affordable video applications can be made available to consumers. In this paper we investigate an energy-efficient video communication solution through joint video summarization and transmission adaptation over a slow fading channel. Coding and modulation schemes, as well as the packet transmission strategy, are optimized and adapted to the unique packet arrival and delay characteristics of the video summaries. Operational energy efficiency -- summary distortion performance is characterized under an optimal summarization setting.
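
    As a toy illustration of trading summary content against transmission energy (not the paper's optimization), the sketch below greedily selects summary frames under an energy budget using invented gain and cost numbers.

      # Each candidate frame: (id, coverage gain, transmission energy cost) -- invented numbers.
      def select_summary(frames, energy_budget):
          chosen, used = [], 0.0
          # greedy by gain/cost ratio, a stand-in for the paper's joint optimization
          for fid, gain, cost in sorted(frames, key=lambda f: f[1] / f[2], reverse=True):
              if used + cost <= energy_budget:
                  chosen.append(fid)
                  used += cost
          return chosen, used

      frames = [("f1", 0.9, 3.0), ("f2", 0.5, 1.0), ("f3", 0.4, 1.0), ("f4", 0.8, 2.5)]
      print(select_summary(frames, energy_budget=4.0))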

  20. Creating Metabolic Network Models using Text Mining and Expert Knowledge

    E-print Network

    Wurtele, Eve Syrkin

    with a genetic algorithm-based data-mining tool. · FCModeler: Predictive models summarize known metabolic relations … J.A. Dickerson, D.W. Fulmer, Proctor & Gamble Corporation, Cincinnati, Ohio, USA. Introduction: RNA profiling analysis and new

  1. Automatic caption generation for news images.

    PubMed

    Feng, Yansong; Lapata, Mirella

    2013-04-01

    This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods. PMID:22641700
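
    A minimal sketch of the extractive surface-realization idea is shown below: each article sentence is scored against keywords assumed to have been suggested for the image, and the best sentence is returned; the keyword list stands in for the paper's probabilistic image-annotation model.

      def extractive_caption(article_sentences, image_keywords):
          kw = set(w.lower() for w in image_keywords)
          def score(sentence):
              words = set(sentence.lower().replace(".", "").split())
              return len(words & kw) / (len(words) ** 0.5)  # favour dense keyword overlap
          return max(article_sentences, key=score)

      sentences = ["The prime minister arrived in Brussels on Monday.",
                   "Protesters gathered outside the summit venue waving flags.",
                   "Talks are expected to continue into the night."]
      print(extractive_caption(sentences, ["protesters", "flags", "summit"]))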

  2. BaffleText: a Human Interactive Proof

    NASA Astrophysics Data System (ADS)

    Chew, Monica; Baird, Henry S.

    2003-01-01

    Internet services designed for human use are being abused by programs. We present a defense against such attacks in the form of a CAPTCHA (Completely Automatic Public Turing test to tell Computers and Humans Apart) that exploits the difference in ability between humans and machines in reading images of text. CAPTCHAs are a special case of 'human interactive proofs,' a broad class of security protocols that allow people to identify themselves over networks as members of given groups. We point out vulnerabilities of reading-based CAPTCHAs to dictionary and computer-vision attacks. We also draw on the literature on the psychophysics of human reading, which suggests fresh defenses available to CAPTCHAs. Motivated by these considerations, we propose BaffleText, a CAPTCHA which uses non-English pronounceable words to defend against dictionary attacks, and Gestalt-motivated image-masking degradations to defend against image restoration attacks. Experiments on human subjects confirm the human legibility and user acceptance of BaffleText images. We have found an image-complexity measure that correlates well with user acceptance and assists in engineering the generation of challenges to fit the ability gap. Recent computer-vision attacks, run independently by Mori and Jitendra, suggest that BaffleText is stronger than two existing CAPTCHAs.
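
    The two ingredients named above, a pronounceable non-word and a masking degradation, can be sketched as follows using Pillow; all parameters are illustrative and this is not the BaffleText generator.

      import random
      from PIL import Image, ImageDraw, ImageFont

      def pronounceable_word(length=7):
          # alternate consonants and vowels to get a readable non-dictionary word
          consonants, vowels = "bdfgklmnprstvz", "aeiou"
          return "".join(random.choice(consonants if i % 2 == 0 else vowels)
                         for i in range(length))

      def challenge_image(word, size=(220, 70)):
          img = Image.new("L", size, 255)
          draw = ImageDraw.Draw(img)
          draw.text((20, 25), word, fill=0, font=ImageFont.load_default())
          for _ in range(40):  # mask the text with random black and white blobs
              x, y = random.randrange(size[0]), random.randrange(size[1])
              r = random.randrange(2, 6)
              draw.ellipse((x - r, y - r, x + r, y + r), fill=random.choice((0, 255)))
          return img

      word = pronounceable_word()
      challenge_image(word).save("challenge.png")
      print("answer:", word)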

  3. Sentence Similarity Analysis with Applications in Automatic Short Answer Grading

    ERIC Educational Resources Information Center

    Mohler, Michael A. G.

    2012-01-01

    In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and also introduce a novel technique to improve the performance of the system by integrating…
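
    A minimal corpus-based baseline for this task (not the dissertation's full set of similarity measures) scores a student answer by its TF-IDF cosine similarity to the instructor's reference answer:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def grade(reference, answer, max_points=5.0):
          tfidf = TfidfVectorizer().fit_transform([reference, answer])
          sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
          return round(sim * max_points, 1)

      reference = "A stack is a last-in first-out data structure."
      print(grade(reference, "It stores items so the last one added is removed first."))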

  4. Automatic Identification and Organization of Index Terms for Interactive Browsing.

    ERIC Educational Resources Information Center

    Wacholder, Nina; Evans, David K.; Klavans, Judith L.

    The potential of automatically generated indexes for information access has been recognized for several decades, but the quantity of text and the ambiguity of natural language processing have made progress at this task more difficult than was originally foreseen. Recently, a body of work on development of interactive systems to support phrase…

  5. Automatic Content-based Categorization of Wikipedia Articles Zeno Gantner

    E-print Network

    Schmidt-Thieme, Lars

    Automatic Content-based Categorization of Wikipedia Articles Zeno Gantner University of Hildesheim schmidt-thieme@ismll.de Abstract Wikipedia's article contents and its category hierarchy are widely used … articles – has attracted less attention so far. We propose to "return the favor" and use text classifiers
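
    A minimal sketch of content-based category assignment with a linear text classifier is given below; the toy training articles and categories are invented, and the Wikipedia-scale setup of the paper is not reproduced.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      # Invented toy articles and categories, for illustration only.
      articles = [
          "The moon orbits the earth and influences tides.",
          "The senate passed the budget bill after a long debate.",
          "Jupiter is the largest planet in the solar system.",
          "The election results were contested in parliament.",
      ]
      categories = ["Astronomy", "Politics", "Astronomy", "Politics"]

      clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
      clf.fit(articles, categories)
      print(clf.predict(["The planet was visible near the moon last night."]))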

  6. Young Children's Thinking in Relation to Texts: A Comparison with Older Children.

    ERIC Educational Resources Information Center

    Feathers, Karen M.

    2002-01-01

    Compared the thinking of kindergartners and sixth-graders as expressed in unassisted retellings of a narrative text. Found no significant age differences in retelling lengths and few significant age differences in the amount of types of thinking. Older children tended to summarize paragraphs and single sentences; young children tended to summarize

  7. Comment on se rappelle et on resume des histoires (How We Remember and Summarize Stories)

    ERIC Educational Resources Information Center

    Kintsch, Walter; Van Dijk, Teun A.

    1975-01-01

    Working from theories of text grammar and logic, the authors suggest and tentatively confirm several hypotheses concerning the role of micro- and macro-structures in comprehension and recall of texts. (Text is in French.) (DB)

  8. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, Anthony J. (Albuquerque, NM)

    1994-05-10

    Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.

  9. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, A.J.

    1994-05-10

    Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.

  10. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler define the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS) system; (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  11. Formalization and separation: A systematic basis for interpreting approaches to summarizing science for climate policy.

    PubMed

    Sundqvist, Göran; Bohlin, Ingemar; Hermansen, Erlend A T; Yearley, Steven

    2015-06-01

    In studies of environmental issues, the question of how to establish a productive interplay between science and policy is widely debated, especially in relation to climate change. The aim of this article is to advance this discussion and contribute to a better understanding of how science is summarized for policy purposes by bringing together two academic discussions that usually take place in parallel: the question of how to deal with formalization (structuring the procedures for assessing and summarizing research, e.g. by protocols) and separation (maintaining a boundary between science and policy in processes of synthesizing science for policy). Combining the two dimensions, we draw a diagram onto which different initiatives can be mapped. A high degree of formalization and separation are key components of the canonical image of scientific practice. Influential Science and Technology Studies analysts, however, are well known for their critiques of attempts at separation and formalization. Three examples that summarize research for policy purposes are presented and mapped onto the diagram: the Intergovernmental Panel on Climate Change, the European Union's Science for Environment Policy initiative, and the UK Committee on Climate Change. These examples bring out salient differences concerning how formalization and separation are dealt with. Discussing the space opened up by the diagram, as well as the limitations of the attraction to its endpoints, we argue that policy analyses, including much Science and Technology Studies work, are in need of a more nuanced understanding of the two crucial dimensions of formalization and separation. Accordingly, two analytical claims are presented, concerning trajectories, how organizations represented in the diagram move over time, and mismatches, how organizations fail to handle the two dimensions well in practice. PMID:26477199

  12. Research on the automatic laser navigation system of the tunnel boring machine

    NASA Astrophysics Data System (ADS)

    Liu, Yake; Li, Yueqiang

    2011-12-01

    By establishing the relevant coordinate systems of the Automatic Laser Navigation System, the basic principle of the system, which obtains the TBM three-dimensional reference point and yaw angle through mathematical transformations between the TBM, target-prism and earth coordinate systems, is discussed in detail. Based on a rigid-body description of the machine's posture, methods for measuring TBM attitude parameters and acquiring the data are proposed, and measures to improve the accuracy of the Laser Navigation System are summarized.
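
    As a minimal illustration of the kind of coordinate transformation the abstract describes, the sketch below maps a point measured in the target-prism frame into earth coordinates with a yaw-pitch-roll rotation plus a surveyed prism position. The function names, angles, and coordinates are illustrative assumptions, not notation or values from the paper.

        # Minimal rigid-body transform sketch (assumed notation, not the paper's):
        # express a TBM reference point, measured in the target-prism frame,
        # in earth coordinates.
        import numpy as np

        def rotation_zyx(yaw, pitch, roll):
            """Rotation matrix from yaw (Z), pitch (Y), roll (X) angles in radians."""
            cz, sz = np.cos(yaw), np.sin(yaw)
            cy, sy = np.cos(pitch), np.sin(pitch)
            cx, sx = np.cos(roll), np.sin(roll)
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            return Rz @ Ry @ Rx

        def to_earth_frame(point_prism, prism_origin_earth, yaw, pitch, roll):
            """Map a point given in the prism frame into earth coordinates."""
            return prism_origin_earth + rotation_zyx(yaw, pitch, roll) @ point_prism

        # Example: a reference point 2 m ahead of the prism, with the prism surveyed
        # at a known earth position and the machine yawed 3 degrees off the axis.
        print(to_earth_frame(np.array([2.0, 0.0, 0.0]),
                             np.array([1250.0, 430.5, -12.2]),
                             np.radians(3.0), 0.0, 0.0))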

  13. Important Text Characteristics for Early-Grades Text Complexity

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Koons, Heather; Hiebert, Elfrieda H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2015-01-01

    The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified…

  14. PressureText: Pressure Input for Mobile Phone Text Entry

    E-print Network

    Pressure-sensitive button presses are currently necessary to record an action. We present PressureText, a text-entry technique for a pressure-augmented mobile phone. In a study comparing PressureText to MultiTap, we found that despite

  15. How to Summarize a 6,000-Word Paper in a Six-Minute Video Clip

    PubMed Central

    Vachon, Patrick; Daudelin, Genevieve; Hivon, Myriam

    2013-01-01

    As part of our research team's knowledge transfer and exchange (KTE) efforts, we created a six-minute video clip that summarizes, in plain language, a scientific paper that describes why and how three teams of academic entrepreneurs developed new health technologies. Recognizing that video-based KTE strategies can be a valuable tool for health services and policy researchers, this paper explains the constraints and sources of inspiration that shaped our video production process. Aiming to provide practical guidance, we describe the steps and tools that we used to identify, refine and package the key content of the scientific paper into an original video format. PMID:23968634

  16. Automatic diluter for bacteriological samples.

    PubMed Central

    Trinel, P A; Bleuze, P; Leroy, G; Moschetto, Y; Leclerc, H

    1983-01-01

    The described apparatus, carrying 190 tubes, allows automatic and aseptic dilution of liquid or suspended-solid samples. Serial 10-fold dilutions are programmable from 10(-1) to 10(-9) and are carried out in glass tubes with screw caps and split silicone septa. Dilution assays performed with strains of Escherichia coli and Bacillus stearothermophilus permitted efficient conditions for sterilization of the needle to be defined and showed that the automatic dilutions were as accurate and as reproducible as the most rigorous conventional dilutions. Images PMID:6338826

  17. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed based on the intensity changes of the fundus reflex.

  18. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content and Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, but assigning metadata manually is not feasible. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents and learning objects. If (public) enterprise objects were modelled in a machine-understandable way, they could provide the context for automatic metadata generation. The approach introduced in this paper is to model this context (the (public) enterprise objects) in an ontology and to use that ontology to infer content-related metadata.

  19. Torpedo: topic periodicity discovery from text data

    NASA Astrophysics Data System (ADS)

    Wang, Jingjing; Deng, Hongbo; Han, Jiawei

    2015-05-01

    Although history may not repeat itself, many human activities are inherently periodic, recurring daily, weekly, monthly, yearly or following some other periods. Such recurring activities may not repeat the same set of keywords, but they do share similar topics. Thus it is interesting to mine topic periodicity from text data instead of just looking at the temporal behavior of a single keyword/phrase. Some previous preliminary studies in this direction prespecify a periodic temporal template for each topic. In this paper, we remove this restriction and propose a simple yet effective framework Torpedo to mine periodic/recurrent patterns from text, such as news articles, search query logs, research papers, and web blogs. We first transform text data into topic-specific time series by a time dependent topic modeling module, where each of the time series characterizes the temporal behavior of a topic. Then we use time series techniques to detect periodicity. Hence we both obtain a clear view of how topics distribute over time and enable the automatic discovery of periods that are inherent in each topic. Theoretical and experimental analyses demonstrate the advantage of Torpedo over existing work.
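
    As a small illustration of the second stage described above (detecting periodicity in a topic-specific time series), the sketch below applies a plain periodogram to a synthetic weekly topic signal. The periodogram is a stand-in assumption; the paper's actual time-series techniques are not reproduced here.

        # Find the dominant period of a topic intensity series with a periodogram.
        # This is only an illustrative stand-in for Torpedo's time-series module.
        import numpy as np

        def dominant_period(topic_intensity):
            """Return the strongest period (in time steps) of a topic time series."""
            x = np.asarray(topic_intensity, dtype=float)
            x = x - x.mean()                        # drop the constant component
            power = np.abs(np.fft.rfft(x)) ** 2     # periodogram
            freqs = np.fft.rfftfreq(len(x))         # cycles per time step
            k = power[1:].argmax() + 1              # skip the zero-frequency bin
            return 1.0 / freqs[k]

        # Synthetic example: a topic that flares up every 7 days over 20 weeks.
        rng = np.random.default_rng(0)
        days = np.arange(140)
        series = 1.0 + np.sin(2 * np.pi * days / 7.0) + 0.1 * rng.random(len(days))
        print(round(dominant_period(series), 1))    # ~7.0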

  20. An overview of text-independent speaker recognition: From features to supervectors

    E-print Network

    Joensuu, University of

    This paper gives an overview of automatic speaker recognition technology, with an emphasis on text-independent recognition and the progression from features to supervectors, and closes with a discussion of future directions. Keywords: speaker recognition; text-independent.

  1. MORPHOLOGICAL ANALYSIS FOR A GERMAN TEXT-TO-SPEECH SYSTEM Amanda Pounder, Markus Kommenda

    E-print Network

    A central task in text-to-speech synthesis is the automatic derivation of correct pronunciation from the graphemic form of a text. The software module GRAPHON analyzes German word-forms and provides each text input item with an individual characterization

  2. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
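
    One simple way to realize the general idea (letting a spectral library steer the clustering) is to initialize cluster centers with known material spectra instead of random pixels. The sketch below does exactly that on synthetic data; it is an assumption about the approach in spirit, not the authors' algorithm.

        # Seed k-means with library spectra rather than random pixels.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_pixels, n_bands = 500, 50

        # Hypothetical spectral library: two reference materials.
        library = np.vstack([np.linspace(0.2, 0.8, n_bands),
                             np.linspace(0.9, 0.1, n_bands)])

        # Synthetic image pixels: noisy variations around the two materials.
        truth = rng.integers(0, 2, n_pixels)
        pixels = library[truth] + 0.05 * rng.standard_normal((n_pixels, n_bands))

        # Semi-supervised initialization from the library.
        km = KMeans(n_clusters=2, init=library, n_init=1).fit(pixels)
        print("cluster sizes:", np.bincount(km.labels_))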

  3. Automatic Utterance Type Detection Using Suprasegmental Features 

    E-print Network

    Wright, Helen

    The goal of the work presented here is to automatically predict the type of an utterance in spoken dialogue by using automatically extracted suprasegmental information. For this task we present and compare three stochastic ...

  4. AUTOMATIC PARTICLE IMAGE VELOCIMETRY UNCERTAINTY QUANTIFICATION

    E-print Network

    Smith, Barton L.

    Automatic Particle Image Velocimetry Uncertainty Quantification, a thesis by Benjamin H. Timmins (2011); the captured excerpt contains only front matter (title page, table-of-contents entries on particle image velocimetry, and the copyright notice).

  5. Automatic caption generation for news images 

    E-print Network

    Feng, Yansong

    2011-06-30

    This thesis is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Automatic description generation for video frames would help security ...

  6. Evaluating Automatic Summaries of Meeting Recordings 

    E-print Network

    Murray, Gabriel; Renals, Steve; Carletta, Jean; Moore, Johanna

    2005-01-01

    The research below explores schemes for evaluating automatic summaries of business meetings, using the ICSI Meeting Corpus. Both automatic and subjective evaluations were carried out, with a central interest being whether ...

  7. Automatic agar tray inoculation device

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Mills, S. M.

    1972-01-01

    Automatic agar tray inoculation device is simple in design and foolproof in operation. It employs either conventional inoculating loop or cotton swab for uniform inoculation of agar media, and it allows technician to carry on with other activities while tray is being inoculated.

  8. Automatic Classification of Single Facial Images

    E-print Network

    Lyons, Michael J.

    Results on image sets are presented for the classification of sex, "race," and expression from single digital images. The examples chosen to demonstrate the method are facial expression, sex, and "race" classification.

  9. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
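
    To make the idea concrete, the sketch below carries a value-with-bounds through a formula using a tiny hand-rolled interval class. INTLAB itself is a MATLAB toolbox; this Python analogue only illustrates interval-based error propagation and is not tied to the article's examples.

        # Tiny interval arithmetic: every result bounds all values consistent
        # with the stated input uncertainties.
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                p = [self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi]
                return Interval(min(p), max(p))

            def __repr__(self):
                return f"[{self.lo:.3f}, {self.hi:.3f}]"

        # Example: V = I * R with R = 100 +/- 1 ohm and I = 2.00 +/- 0.05 A.
        R = Interval(99.0, 101.0)
        I = Interval(1.95, 2.05)
        print("V =", I * R)      # guaranteed enclosure of the true voltage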

  10. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
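
    The transformation the abstract targets can be pictured as replacing string-concatenated SQL with a parameterized (PREPARE-style) statement. The sketch below shows the before and after on a toy table, using sqlite3 placeholders as a stand-in for whatever database API a legacy application actually uses; it is not the paper's transformation tool.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

        name = "alice' OR '1'='1"    # attacker-controlled input

        # Unsafe legacy pattern: input concatenated into the SQL text.
        unsafe = "SELECT * FROM users WHERE name = '" + name + "'"
        print(len(conn.execute(unsafe).fetchall()))      # 1 row leaked via injection

        # Safe rewrite: the input is bound as data, never parsed as SQL.
        safe = "SELECT * FROM users WHERE name = ?"
        print(len(conn.execute(safe, (name,)).fetchall()))  # 0 rows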

  11. Automatic Detection of Altered Fingerprints

    E-print Network

    Presentation slides: Automatic Detection of Altered Fingerprints, by Soweon Yoon, Jianjiang Feng, Anil K. Jain, and Arun Ross, Michigan State University (http://biometrics.cse.msu.edu). The outline covers fingerprint matching, detection of altered fingerprints, and future work; fingerprints are the most widely used biometric in law enforcement and border control.

  12. AUTOMATIC RECORD REVIEWS Brian Whitman

    E-print Network

    Ellis, Dan

    We analyze a large testbed of music and a corpus of reviews for each work to uncover patterns. Record reviews are a focused source of linguistic data that can be related to musical recordings, to provide a basis

  13. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...2010-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25-1...MARINE ENGINEERING AUTOMATIC AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic...

  14. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...2012-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25-1...MARINE ENGINEERING AUTOMATIC AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic...

  15. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...2013-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25-1...MARINE ENGINEERING AUTOMATIC AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic...

  16. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...2014-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25-1...MARINE ENGINEERING AUTOMATIC AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic...

  17. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...2011-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25-1...MARINE ENGINEERING AUTOMATIC AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic...

  18. Text analysis methods, text analysis apparatuses, and articles of manufacture

    DOEpatents

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.

  19. Self-Compassion and Automatic Thoughts

    ERIC Educational Resources Information Center

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  20. FaceTrack: Tracking and summarizing faces from compressed video Hualu Wang, Harold S. Stone*, Shih-Fu Chang

    E-print Network

    Chang, Shih-Fu

    We present FaceTrack, a system that detects, tracks, and groups faces from compressed video data. Keywords: face tracking, face summarization, Kalman filter, compressed domain, MPEG, video analysis.

  1. Mining the Text: 34 Text Features that Can Ease or Obstruct Text Comprehension and Use

    ERIC Educational Resources Information Center

    White, Sheida

    2012-01-01

    This article presents 34 characteristics of texts and tasks ("text features") that can make continuous (prose), noncontinuous (document), and quantitative texts easier or more difficult for adolescents and adults to comprehend and use. The text features were identified by examining the assessment tasks and associated texts in the national…

  2. Evidence Summarized in Attorneys' Closing Arguments Predicts Acquittals in Criminal Trials of Child Sexual Abuse

    PubMed Central

    Stolzenberg, Stacia N.; Lyon, Thomas D.

    2014-01-01

    Evidence summarized in attorneys' closing arguments in criminal child sexual abuse cases (N = 189) was coded to predict acquittal rates. Ten variables were significant bivariate predictors; five variables significant at p < .01 were entered into a multivariate model. Cases were likely to result in an acquittal when the defendant was not charged with force, the child maintained contact with the defendant after the abuse occurred, or the defense presented a hearsay witness regarding the victim's statements, a witness regarding the victim's character, or a witness regarding another witness's character (usually the mother). The findings suggest that jurors might believe that child molestation is akin to a stereotype of violent rape and that they may be swayed by defense challenges to the victim's credibility and the credibility of those close to the victim. PMID:24920247

  3. [Summarization of professor Zhang Jia-wei's experiences in treatment of female climacteric syndrome].

    PubMed

    Chen, Gui-zhen; Xu, Yun-xiang

    2007-06-01

    Professor Zhang Jia-wei's unique experiences in treating women's climacteric syndrome are introduced. Based on the physiological and pathological characteristics of women during the climacteric, his acupuncture and moxibustion treatment follows the therapeutic principles of reinforcing the spleen, replenishing the kidney, and harmonizing yin and yang. He pays particular attention to the needling method, a flying needling manipulation with stable, accurate, light and quick characteristics that he has summarized from many years of clinical experience, and to catgut embedding at back-shu points. In addition, he advocates selecting treatment methods according to climatic and seasonal conditions, geographical location and individual condition, puncturing followed by moxibustion, and combining acupuncture with medicine. PMID:17663111

  4. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks were compared against a manual fast forward video browsing guided with movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach which we summarize in our recommendations for future video visual analytics systems. PMID:24051778

  5. Text Complexity and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What is meant by text complexity is a measurement of how challenging a particular text is to read. There are a myriad of different ways of explaining what makes text challenging to read, from the sophistication of the vocabulary employed to the length of its sentences to even measurements of how the text as a whole coheres. Research shows that no…

  6. The Challenge of Challenging Text

    ERIC Educational Resources Information Center

    Shanahan, Timothy; Fisher, Douglas; Frey, Nancy

    2012-01-01

    The Common Core State Standards emphasize the value of teaching students to engage with complex text. But what exactly makes a text complex, and how can teachers help students develop their ability to learn from such texts? The authors of this article discuss five factors that determine text complexity: vocabulary, sentence structure, coherence,…

  7. Automatic Word Sense Disambiguation of Acronyms and Abbreviations in Clinical Texts

    ERIC Educational Resources Information Center

    Moon, Sungrim

    2012-01-01

    The use of acronyms and abbreviations is increasing profoundly in the clinical domain in large part due to the greater adoption of electronic health record (EHR) systems and increased electronic documentation within healthcare. A single acronym or abbreviation may have multiple different meanings or senses. Comprehending the proper meaning of an…

  8. Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach

    ERIC Educational Resources Information Center

    Yang, Seungwon

    2013-01-01

    Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries. Then, we could browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic…

  9. Entity Quick Click: Rapid Text Copying Based on Automatic Entity Extraction

    E-print Network

    Ishak, Edward

  10. Automatic Derivation of Surface Text Patterns for a Maximum Entropy Based Question Answering System

    E-print Network

    Surface text patterns are derived in an unsupervised fashion using a collection of trivia question and answer pairs as seeds (work carried out at the Watson Research Center during Summer 2002). Each of the pairs in the KM database represents a trivia question and its corresponding answer, such as the ones used in the trivia card game. The question

  11. Automatically Detecting Acute Myocardial Infarction Events from EHR Text: A Preliminary Study

    PubMed Central

    Zheng, Jiaping; Yarzebski, Jorge; Ramesh, Balaji Polepalli; Goldberg, Robert J.; Yu, Hong

    2014-01-01

    The Worcester Heart Attack Study (WHAS) is a population-based surveillance project examining trends in the incidence, in-hospital, and long-term survival rates of acute myocardial infarction (AMI) among residents of central Massachusetts. It provides insights into various aspects of AMI. Much of the data has been assessed manually. We are developing supervised machine learning approaches to automate this process. Since the existing WHAS data cannot be used directly for an automated system, we first annotated the AMI information in electronic health records (EHR). With strict inter-annotator agreement over 0.74 and relaxed agreement over 0.9 (Cohen's kappa), we annotated 105 EHR discharge summaries (135k tokens). Subsequently, we applied a state-of-the-art supervised machine-learning model, Conditional Random Fields (CRFs), for AMI detection. We explored different approaches to overcome the data-sparseness challenge, and our results showed that cluster-based word features achieved the highest performance. PMID:25954440
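
    For readers unfamiliar with CRF-based sequence labelling, the sketch below trains a toy detector of the kind described, using sklearn-crfsuite with a few word features. The sentences, labels, and features are invented for illustration; the study's feature set (including the cluster-based word features) is far richer, and the library choice here is an assumption.

        # Toy CRF sequence labeller for AMI mentions (illustrative only).
        import sklearn_crfsuite

        def feats(sent, i):
            w = sent[i]
            return {"lower": w.lower(), "is_digit": w.isdigit(),
                    "prev": sent[i - 1].lower() if i > 0 else "<s>"}

        sents = [["Patient", "suffered", "acute", "myocardial", "infarction"],
                 ["No", "evidence", "of", "infarction"]]
        tags = [["O", "O", "B-AMI", "I-AMI", "I-AMI"],
                ["O", "O", "O", "O"]]

        X = [[feats(s, i) for i in range(len(s))] for s in sents]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit(X, tags)

        test = ["Rule", "out", "acute", "myocardial", "infarction"]
        print(crf.predict_single([feats(test, i) for i in range(len(test))]))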

  12. Evaluating a variety of text-mined features for automatic protein function prediction with GOstruct.

    PubMed

    Funk, Christopher S; Kahanda, Indika; Ben-Hur, Asa; Verspoor, Karin M

    2015-01-01

    Most computational methods that predict protein function do not take advantage of the large amount of information contained in the biomedical literature. In this work we evaluate both ontology term co-mention and bag-of-words features mined from the biomedical literature and analyze their impact in the context of a structured output support vector machine model, GOstruct. We find that even simple literature based features are useful for predicting human protein function (F-max: Molecular Function =0.408, Biological Process =0.461, Cellular Component =0.608). One advantage of using literature features is their ability to offer easy verification of automated predictions. We find through manual inspection of misclassifications that some false positive predictions could be biologically valid predictions based upon support extracted from the literature. Additionally, we present a "medium-throughput" pipeline that was used to annotate a large subset of co-mentions; we suggest that this strategy could help to speed up the rate at which proteins are curated. PMID:26005564

  13. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  14. Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability

    ERIC Educational Resources Information Center

    Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi

    2012-01-01

    The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…

  15. Text analysis devices, articles of manufacture, and text analysis methods

    SciTech Connect

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2013-05-28

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes processing circuitry configured to analyze initial text to generate a measurement basis usable in analysis of subsequent text, wherein the measurement basis comprises a plurality of measurement features from the initial text, a plurality of dimension anchors from the initial text and a plurality of associations of the measurement features with the dimension anchors, and wherein the processing circuitry is configured to access a viewpoint indicative of a perspective of interest of a user with respect to the analysis of the subsequent text, and wherein the processing circuitry is configured to use the viewpoint to generate the measurement basis.

  16. Compare and Contrast Electronic Text with Traditionally Printed Text.

    ERIC Educational Resources Information Center

    Karchmer, Rachel

    The electronic text program described in this lesson plan guides students to compare and contrast the characteristics of electronic text with the characteristics of traditionally printed text, gaining a deeper understanding of how to navigate and comprehend information found on the Internet. During a 30-minute and a 45-minute lesson, students…

  17. Towards Automatic Classification of Wikipedia Content

    NASA Astrophysics Data System (ADS)

    Szymański, Julian

    Wikipedia - the Free Encyclopedia - encounters the problem of properly classifying new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge about Wikipedia structure that is beyond typical editor competence, which leads to human-caused mistakes - omitted or incorrect assignments of articles to categories. The article presents the application of an SVM classifier for automatic classification of documents from the Free Encyclopedia. The classifier was tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system that suggests to editors the best categories for new content entered into Wikipedia.
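
    The word-content variant of such a classifier can be sketched as a linear SVM over TF-IDF vectors. The toy articles and categories below are assumptions for illustration; the hyperlink-based representation and the real Wikipedia data are not reproduced.

        # Linear SVM over TF-IDF text features (word-content representation).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        articles = ["The piano is a keyboard instrument played in concerts.",
                    "A violin has four strings and is played with a bow.",
                    "Saturn is a gas giant planet with prominent rings.",
                    "Mars is the fourth planet orbiting the Sun."]
        categories = ["Music", "Music", "Astronomy", "Astronomy"]

        clf = make_pipeline(TfidfVectorizer(), LinearSVC())
        clf.fit(articles, categories)
        print(clf.predict(["Jupiter is the largest planet in the Solar System."]))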

  18. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  19. Automatic information extraction for computerized clinical guideline.

    PubMed

    Zhu, Huijia; Ni, Yuan; Cai, Peng; Cao, Feng

    2013-01-01

    Clinical Guidelines (CGs) are recommendations on the appropriate treatment and care of people with specific diseases and conditions. CGs should be used by both physicians and patients to make informed decisions. However, CGs are not well used because of their complexity and because they are frequently updated. Computerized CGs have been proposed so that computers can support the decision making, but it takes a great deal of human effort to transform a narrative CG into a computerized CG. In this paper, we propose a method that uses NLP techniques to automatically extract fine-grained information from text-based CGs. Such information can easily be converted into computer-interpretable models. PMID:23920797

  20. Text editor on a chip

    SciTech Connect

    Jung Wan Cho; Heung Kyu Lee

    1983-01-01

    The authors propose a processor which provides useful facilities for implementing text editing commands. The processor now being developed is a component of a general front-end editing system which parses and processes program text. Attached to a conventional microcomputer system bus, the processor executes screen-editing functions. Conventional text editing is a typical application of microprocessors, but in this paper emphasis is given to firmware and hardware processing of text so that the processor can be fabricated as a single VLSI chip. To increase overall regularity and decrease design cost, the basic instructions are text-editing oriented with short basic cycles. 6 references.

  1. Automatic design of magazine covers

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.

    2012-03-01

    In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.

  2. Automatic transmission for electric wheelchairs.

    PubMed

    Reswick, J B

    1985-07-01

    A new infinitely variable automatic transmission called the RESATRAN that automatically changes its speed ratio in response to load torque being transmitted is presented. A prototype has been built and tested on a conventional three-wheeled electric motor propelled wheelchair. It is shown theoretically that more than 50 percent reduction in power during hill climbing may be expected when a transmission-equipped wheelchair is compared to a direct-drive vehicle operating at the same voltage. It is suggested that with such a transmission, wheelchairs can use much smaller motors and associated electronic controls, while at the same time gaining in efficiency that results in longer operating distances for the same battery charge. Design details of the transmission and test results are presented. These results show a substantial reduction in operating current and increased distance of operation over a test course. PMID:3835264

  3. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2001-01-01

    In this contract, which is a component of a larger contract that we plan to submit in the coming months, we plan to study the preprocessing issues which arise in applying natural language processing techniques to NASA-KSC problem reports. The goals of this work will be to deal with the issues of: a) automatically obtaining the problem reports from NASA-KSC data bases, b) the format of these reports and c) the conversion of these reports to a format that will be adequate for our natural language software. At the end of this contract, we expect that these problems will be solved and that we will be ready to apply our natural language software to a text database of over 1000 KSC problem reports.

  4. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library, weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning, and many methods for estimating the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. Validity of the model is tested in simulation using synthetic data.
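
    The observation model described above treats the recording as a weighted superposition of known library sounds. The sketch below estimates such weights with plain non-negative least squares on synthetic spectra; it is a stand-in for the Bayesian priors the paper develops, not the paper's estimator.

        # Estimate superposition weights of library sounds with NNLS.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_bins, n_sounds = 64, 5

        library = rng.random((n_bins, n_sounds))       # spectra of known sounds
        true_w = np.array([0.0, 1.2, 0.0, 0.4, 0.0])   # only two sounds are active
        observed = library @ true_w + 0.01 * rng.standard_normal(n_bins)

        est_w, _ = nnls(library, observed)
        print(np.round(est_w, 2))   # close to true_w; inactive sounds stay near 0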

  5. Automatic Home Nursing Activity Recommendation

    PubMed Central

    Luo, Gang; Tang, Chunqiang

    2009-01-01

    The rapid deployment of Web-based, consumer-centric electronic medical records (CEMRs) is an important trend in healthcare. In this paper, we incorporate nursing knowledge into CEMR so that it can automatically recommend home nursing activities (HNAs). Those more complex HNAs are made clickable for users to find detailed implementation procedures. We demonstrate the effectiveness of our techniques using USMLE medical exam cases. PMID:20351888

  6. Automatic computation of transfer functions

    DOEpatents

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
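
    As a toy analogue of computing a transfer function from circuit element values, the sketch below derives the response of a single RC low-pass stage symbolically, treating it as a voltage divider. It illustrates the idea only and does not reproduce the patent's netlist-matrix formulation.

        # Symbolic transfer function of an RC low-pass stage (illustrative only).
        import sympy as sp

        s, R, C = sp.symbols("s R C", positive=True)
        Z_R = R               # series resistor
        Z_C = 1 / (s * C)     # shunt capacitor
        H = sp.simplify(Z_C / (Z_R + Z_C))
        print(H)                              # 1/(C*R*s + 1)
        print(H.subs({R: 1e3, C: 1e-6}))      # numeric values plugged in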

  7. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  8. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  9. Automatic Contrail Detection and Segmentation

    NASA Technical Reports Server (NTRS)

    Weiss, John M.; Christopher, Sundar A.; Welch, Ronald M.

    1998-01-01

    Automatic contrail detection is of major importance in the study of the atmospheric effects of aviation. Due to the large volume of satellite imagery, selecting contrail images for study by hand is impractical and highly subject to human error. It is far better to have a system in place that will automatically evaluate an image to determine 1) whether it contains contrails and 2) where the contrails are located. Preliminary studies indicate that it is possible to automatically detect and locate contrails in Advanced Very High Resolution Radiometer (AVHRR) imagery with a high degree of confidence. Once contrails have been identified and localized in a satellite image, it is useful to segment the image into contrail versus noncontrail pixels. The ability to partition image pixels makes it possible to determine the optical properties of contrails, including optical thickness and particle size. In this paper, we describe a new technique for segmenting satellite images containing contrails. This method has good potential for creating a contrail climatology in an automated fashion. The majority of contrails are detected, rejecting clutter in the image, even cirrus streaks. Long, thin contrails are most easily detected. However, some contrails may be missed because they are curved, diffused over a large area, or present in short segments. Contrails average 2-3 km in width for the cases studied.

  10. Prioritized text spotting using SLAM

    E-print Network

    Landa, Yafim

    2013-01-01

    We show how to exploit temporal and spatial coherence of image observations to achieve efficient and effective text detection and decoding for a sensor suite moving through an environment rich in text at a variety of scales ...

  11. Text structure-aware classification

    E-print Network

    Dzunic, Zoran, S.M. Massachusetts Institute of Technology

    2009-01-01

    Bag-of-words representations are used in many NLP applications, such as text classification and sentiment analysis. These representations ignore relations across different sentences in a text and disregard the underlying ...

  12. Informational Text and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What constitutes an informational text covers a broad swath of different types of texts. Biographies & memoirs, speeches, opinion pieces & argumentative essays, and historical, scientific or technical accounts of a non-narrative nature are all included in what the Common Core State Standards (CCSS) envisions as informational text. Also included…

  13. Text Signals Influence Team Artifacts

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Rysavy, Monica D.; Taricani, Ellen

    2015-01-01

    This exploratory quasi-experimental investigation describes the influence of text signals on team visual map artifacts. In two course sections, four-member teams were given one of two print-based text passage versions on the course-related topic "Social influence in groups" downloaded from Wikipedia; this text had two paragraphs, each…

  14. Choosing Software for Text Processing.

    ERIC Educational Resources Information Center

    Mason, Robert M.

    1983-01-01

    Review of text processing software for microcomputers covers data entry, text editing, document formatting, and spelling and proofreading programs including "Wordstar," "PeachText," "PerfectWriter," "Select," and "The Word Plus." "The Whole Earth Software Catalog" and a new terminal to be manufactured for OCLC by IBM are mentioned. (EJS)

  15. Selecting Texts and Course Materials.

    ERIC Educational Resources Information Center

    Smith, Robert E.

    One of the most important decisions speech communication basic course directors make is the selection of the textbook. The first consideration in their choice of text should be whether or not the proposed text covers the units integral to the course. A second consideration should be whether or not the text covers the special topics integral to the…

  16. Slippery Texts and Evolving Literacies

    ERIC Educational Resources Information Center

    Mackey, Margaret

    2007-01-01

    The idea of "slippery texts" provides a useful descriptor for materials that mutate and evolve across different media. Eight adult gamers, encountering the slippery text "American McGee's Alice," demonstrate a variety of ways in which players attempt to manage their attention as they encounter a new text with many resonances. The range of their…

  17. Translation and Text-Analysis.

    ERIC Educational Resources Information Center

    Barbe, Katharina

    The primary goal of translation is to enable an audience in a Target Language to understand a text/discourse which was ultimately not intended for them. The primary goal of text-analysis is to further the understanding of phenomena inside one language. There are several similarities between translation and text-analysis: both translation and…

  18. Semantic Annotation of Complex Text Structures in Problem Reports

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Throop, David R.; Fleming, Land D.

    2011-01-01

    Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.

  19. Text Association Analysis and Ambiguity in Text Mining

    NASA Astrophysics Data System (ADS)

    Bhonde, S. B.; Paikrao, R. L.; Rahane, K. U.

    2010-11-01

    Text Mining is the process of analyzing a semantically rich document or set of documents to understand the content and meaning of the information they contain. Research in text mining will enhance humans' ability to process massive quantities of information, and it has high commercial value. The paper first introduces text mining and its definition, and then gives an overview of the text mining process and its applications. Up to now, not much research in text mining, especially in concept/entity extraction, has focused on the ambiguity problem. This paper addresses ambiguity issues in natural language texts and presents a new technique for resolving the ambiguity problem in extracting concepts/entities from texts. Finally, it shows the importance of text mining in knowledge discovery and highlights the upcoming challenges of document mining and the opportunities it offers.

  20. ParaText : scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling from raw data ingestion to application analysis.

  1. Development of a Summarized Health Index (SHI) for use in predicting survival in sea turtles.

    PubMed

    Li, Tsung-Hsien; Chang, Chao-Chin; Cheng, I-Jiunn; Lin, Suen-Chuain

    2015-01-01

    Veterinary care plays an influential role in sea turtle rehabilitation, especially for endangered species. Physiological characteristics, including hematological and plasma biochemistry profiles, are useful references for clinical management of animals, especially during the convalescence period. In this study, factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significantly negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis with a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index for evaluating the health status of sea turtles and improving veterinary care at rehabilitation facilities. PMID:25803431

  2. Development of a Summarized Health Index (SHI) for Use in Predicting Survival in Sea Turtles

    PubMed Central

    Li, Tsung-Hsien; Chang, Chao-Chin; Cheng, I-Jiunn; Lin, Suen-Chuain

    2015-01-01

    Veterinary care plays an influential role in sea turtle rehabilitation, especially for endangered species. Physiological characteristics, including hematological and plasma biochemistry profiles, are useful references for clinical management of animals, especially during the convalescence period. In this study, factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significantly negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis with a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index for evaluating the health status of sea turtles and improving veterinary care at rehabilitation facilities. PMID:25803431

  3. Summarizing polygenic risks for complex diseases in a clinical whole genome report

    PubMed Central

    Kong, Sek Won; Lee, In-Hee; Leschiner, Ignaty; Krier, Joel; Kraft, Peter; Rehm, Heidi L.; Green, Robert C.; Kohane, Isaac S.; MacRae, Calum A.

    2015-01-01

    Purpose Disease-causing mutations and pharmacogenomic variants are of primary interest for clinical whole-genome sequencing. However, estimating genetic liability for common complex diseases using established risk alleles might one day prove clinically useful. Methods We compared polygenic scoring methods using a case-control data set with independently discovered risk alleles in the MedSeq Project. For eight traits of clinical relevance in both the primary-care and cardiomyopathy study cohorts, we estimated multiplicative polygenic risk scores using 161 published risk alleles and then normalized using the population median estimated from the 1000 Genomes Project. Results Our polygenic score approach identified the overrepresentation of independently discovered risk alleles in cases as compared with controls using a large-scale genome-wide association study data set. In addition to normalized multiplicative polygenic risk scores and rank in a population, the disease prevalence and proportion of heritability explained by known common risk variants provide important context in the interpretation of modern multilocus disease risk models. Conclusion Our approach in the MedSeq Project demonstrates how complex trait risk variants from an individual genome can be summarized and reported for the general clinician and also highlights the need for definitive clinical studies to obtain reference data for such estimates and to establish clinical utility. PMID:25341114
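
    The multiplicative score described above can be sketched in a few lines: multiply per-allele odds ratios over an individual's risk-allele counts and normalize by the population median. The odds ratios, genotypes, and reference population below are invented for illustration; they are not the MedSeq Project's data.

        # Multiplicative polygenic risk score, normalized by a population median.
        import numpy as np

        odds_ratios = np.array([1.12, 1.30, 1.08, 1.21])   # one per risk allele

        def raw_score(allele_counts):
            """Score for one genome: 0, 1, or 2 copies of each risk allele."""
            return float(np.prod(odds_ratios ** np.asarray(allele_counts)))

        rng = np.random.default_rng(7)
        reference = rng.integers(0, 3, size=(1000, len(odds_ratios)))
        population_median = np.median([raw_score(g) for g in reference])

        patient = [1, 2, 0, 1]
        print(round(raw_score(patient) / population_median, 2))  # >1 = above median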

  4. Automatic Mapping Clinical Notes to Medical Terminologies (Proceedings of the 2006 Australasian Language Technology Workshop (ALTW2006), pages 75-82)

    E-print Network

    Jon Patrick, Yefeng Wang and Peter Budd. The present paper describes a system that automatically maps free text into a medical reference terminology. Mapping free text to terminology is a fundamental problem in many advanced medical information systems. SNOMED CT is the most

  5. Text Detection and Recognition in Imagery: A Survey.

    PubMed

    Ye, Qixiang; Doermann, David

    2015-07-01

    This paper analyzes, compares, and contrasts technical challenges, methods, and the performance of text detection and recognition research in color imagery. It summarizes the fundamental problems and enumerates factors that should be considered when addressing these problems. Existing techniques are categorized as either stepwise or integrated and sub-problems are highlighted including text localization, verification, segmentation and recognition. Special issues associated with the enhancement of degraded text and the processing of video text, multi-oriented, perspectively distorted and multilingual text are also addressed. The categories and sub-categories of text are illustrated, benchmark datasets are enumerated, and the performance of the most representative approaches is compared. This review provides a fundamental comparison and analysis of the remaining problems in the field. PMID:26352454

  6. Machine aided indexing from natural language text

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    The NASA Lexical Dictionary (NLD) Machine Aided Indexing (MAI) system was designed to (1) reuse the indexing of the Defense Technical Information Center (DTIC); (2) reuse the indexing of the Department of Energy (DOE); and (3) reduce the time required for original indexing. This was done by automatically generating appropriate NASA thesaurus terms from either the other agency's index terms, or, for original indexing, from document titles and abstracts. The NASA STI Program staff devised two different ways to generate thesaurus terms from text. The first group of programs identified noun phrases by a parsing method that allowed for conjunctions and certain prepositions, on the assumption that indexable concepts are found in such phrases. Results were not always satisfactory, and it was noted that indexable concepts often occurred outside of noun phrases. The first method also proved to be too slow for the ultimate goal of interactive (online) MAI. The second group of programs used the knowledge base (KB), word proximity, and frequency of word and phrase occurrence to identify indexable concepts. Both methods are described and illustrated. Online MAI has been achieved, as well as several spinoff benefits, which are also described.
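
    The first method described above looks for indexable concepts inside noun phrases. The sketch below applies a simple adjective-plus-noun pattern to a hand-tagged sentence so that it stays self-contained; the report's parser, conjunction handling, and mapping onto NASA thesaurus terms are not reproduced.

        # Extract candidate index concepts as adjective/noun runs containing a noun.
        tagged = [("The", "DT"), ("cryogenic", "JJ"), ("fuel", "NN"), ("tank", "NN"),
                  ("showed", "VBD"), ("structural", "JJ"), ("fatigue", "NN"),
                  ("during", "IN"), ("vibration", "NN"), ("tests", "NNS"), (".", ".")]

        def noun_phrases(tokens):
            phrases, run = [], []
            for word, tag in tokens:
                if tag.startswith(("JJ", "NN")):
                    run.append((word, tag))
                else:
                    if any(t.startswith("NN") for _, t in run):
                        phrases.append(" ".join(w for w, _ in run))
                    run = []
            if any(t.startswith("NN") for _, t in run):
                phrases.append(" ".join(w for w, _ in run))
            return phrases

        print(noun_phrases(tagged))
        # ['cryogenic fuel tank', 'structural fatigue', 'vibration tests']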

  7. ParaText : scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained-together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.
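
    The LSA step at the heart of this pipeline can be sketched at small scale with TF-IDF vectors reduced by a truncated SVD. ParaText itself is a distributed C++ framework; the toy documents below only illustrate the underlying modeling idea.

        # Latent semantic analysis on toy documents: TF-IDF + truncated SVD.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        docs = ["stocks fell as markets reacted to interest rates",
                "the central bank raised interest rates again",
                "the team won the championship after extra time",
                "players celebrated the championship victory"]

        tfidf = TfidfVectorizer().fit_transform(docs)
        lsa = TruncatedSVD(n_components=2, random_state=0)
        doc_topics = lsa.fit_transform(tfidf)   # each document in 2 latent dimensions
        print(doc_topics.round(2))              # finance vs. sports separate clearly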

  8. Guidelines for Effective Usage of Text Highlighting Techniques.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Kwon, Bum Chul; Schreck, Tobias; Pfister, Hanspeter

    2016-01-01

    Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowdsourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers. PMID:26529715

  9. An NLP Framework for Non-Topical Text Analysis in Urdu--A Resource Poor Language

    ERIC Educational Resources Information Center

    Mukund, Smruthi

    2012-01-01

    Language plays a very important role in understanding the culture and mindset of people. Given the abundance of electronic multilingual data, it is interesting to see what insight can be gained by automatic analysis of text. This in turn calls for text analysis which is focused on non-topical information such as emotions being expressed that is in…

  10. Identification of Chinese Personal Names in Unrestricted Texts. Lawrence CHEUNG, Benjamin K. TSOU

    E-print Network

    Automatic identification of Chinese personal names in unrestricted texts is a key task in Chinese word segmentation and can cause problems if it is not properly addressed. This paper (1) demonstrates the problems of Chinese personal name identification…

  11. Differences in Text Structure and Its Implications for Assessment of Struggling Readers

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen M.; Sabatini, John; Futagi, Yoko; Kostin, Irene

    2006-01-01

    One source of potential difficulty for struggling readers is the variability of texts across grade levels. This article explores the use of automatic natural language processing techniques to identify dimensions of variation within a corpus of school-appropriate texts. Specifically, we asked: Are there identifiable dimensions of lexical and…

  12. MUSIC GENRES CLASSIFICATION USING TEXT CATEGORIZATION METHOD Kai Chen, Sheng Gao, Yongwei Zhu, Qibin Sun

    E-print Network

    Sun, Qibin

    Automatic music genre classification is one of the most challenging problems in music information retrieval and management of digital music databases. In this paper, we propose a new framework using text categorization methods…

  13. Automated Text Classification in a Big-Data Context: Some Issues and Proposal Solutions

    E-print Network

    Friedl, Herwig

    This work deals with the analysis and management of big data sets of texts. In the current year we have worked on two projects concerning big-data analysis on the following topics. The first one was about an automatic categorization…

  14. Automatic identification of algal community from microscopic images.

    PubMed

    Santhi, Natchimuthu; Pradeepa, Chinnaraj; Subashini, Parthasarathy; Kalaiselvi, Senthil

    2013-01-01

    A good understanding of the population dynamics of algal communities is crucial in several ecological and pollution studies of freshwater and oceanic systems. This paper reviews the automatic identification of algal communities from microscope images using image processing techniques. The diverse techniques of image preprocessing, segmentation, feature extraction, and recognition are considered one by one and their parameters are summarized. Automatic identification and classification of algal communities are very difficult due to various factors such as changes in size and shape with climatic changes, various growth periods, and the presence of other microbes. Therefore, the significance, uniqueness, and various approaches are discussed and the image processing methods are evaluated. Algal identification and associated problems in water organisms have been projected as challenges in image processing applications. Various image processing approaches based on textures, shapes, and object boundaries, as well as segmentation methods such as edge detection and color segmentation, are highlighted. Finally, artificial neural networks and some machine learning algorithms used to classify and identify the algae are discussed. Further, some of the benefits and drawbacks of these schemes are examined. PMID:24151424

  15. Alexithymic features and automatic amygdala reactivity to facial emotion.

    PubMed

    Kugel, Harald; Eichmann, Mischa; Dannlowski, Udo; Ohrmann, Patricia; Bauer, Jochen; Arolt, Volker; Heindel, Walter; Suslow, Thomas

    2008-04-11

    Alexithymic individuals have difficulties in identifying and verbalizing their emotions. The amygdala is known to play a central role in processing emotion stimuli and in generating emotional experience. In the present study automatic amygdala reactivity to facial emotion was investigated as a function of alexithymia (as assessed by the 20-Item Toronto Alexithymia Scale). The Beck-Depression Inventory (BDI) and the State-Trait-Anxiety Inventory (STAI) were administered to measure participants' depressivity and trait anxiety. During 3T fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 21 healthy volunteers. The amygdala was selected as the region of interest (ROI) and voxel values of the ROI were extracted, summarized by mean and tested among the different conditions. A detection task was applied to assess participants' awareness of the masked emotional faces shown in the fMRI experiment. Masked sad and happy facial emotions led to greater right amygdala activation than masked neutral faces. The alexithymia feature difficulties identifying feelings was negatively correlated with the neural response of the right amygdala to masked sad faces, even when controlling for depressivity and anxiety. Reduced automatic amygdala responsivity may contribute to problems in identifying one's emotions in everyday life. Low spontaneous reactivity of the amygdala to sad faces could implicate less engagement in the encoding of negative emotional stimuli. PMID:18314269

  16. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.
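
    Stripped of the patent language, the claimed combination step amounts to weighting two feature vectors of the same text item and adding them. The toy vectors and weights below are purely illustrative and are not taken from the patent.

      import numpy as np

      # Illustrative only: two feature vectors for the same text item (for example,
      # topic weights and named-entity counts), each with its own per-component weights.
      first_vector = np.array([0.2, 0.7, 0.1])
      second_vector = np.array([3.0, 0.0, 1.0])
      w1, w2 = np.array([1.0, 2.0, 1.0]), np.array([0.5, 0.5, 0.5])

      third_vector = w1 * first_vector + w2 * second_vector  # combined representation
      print(third_vector)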

  17. Texting while driving: is speech-based text entry less risky than handheld text entry?

    PubMed

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. PMID:25089769

  18. Hierarchical Concept Indexing of Full-Text Documents in the Unified Medical Language System Information Sources Map.

    ERIC Educational Resources Information Center

    Wright, Lawrence W.; Nardini, Holly K. Grossetta; Aronson, Alan R.; Rindflesch, Thomas C.

    1999-01-01

    Describes methods for applying natural-language processing for automatic concept-based indexing of full text and methods for exploiting the structure and hierarchy of full-text documents to a large collection of full-text documents drawn from the Health Services/Technology Assessment Text database at the National Library of Medicine. Examines how…

  19. Hermeneutic reading of classic texts.

    PubMed

    Koskinen, Camilla A-L; Lindström, Unni Å

    2013-09-01

    The purpose of this article is to broaden the understanding of the hermeneutic reading of classic texts. The aim is to show how the choice of a specific scientific tradition in conjunction with a methodological approach creates the foundation that clarifies the actual realization of the reading. This hermeneutic reading of classic texts is inspired by Gadamer's notion that it is the researcher's own research tradition and a clearly formulated theoretical fundamental order that shape the researcher's attitude towards texts and create the starting point that guides all reading, uncovering and interpretation. The researcher's ethical position originates in a will to openness towards what is different in the text and which constantly sets the researcher's preunderstanding and research tradition in movement. It is the researcher's attitude towards the text that allows the text to address, touch and arouse wonder. Through a flexible, lingering and repeated reading of classic texts, what is different emerges with a timeless value. The reading of classic texts is an act that may rediscover and create understanding for essential dimensions and of human beings' reality on a deeper level. The hermeneutic reading of classic texts thus brings to light constantly new possibilities of uncovering for a new envisioning and interpretation for a new understanding of the essential concepts and phenomena within caring science. PMID:23004237

  20. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of combined ATT and ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  1. Supporting the education evidence portal via text mining

    PubMed Central

    Ananiadou, Sophia; Thompson, Paul; Thomas, James; Mu, Tingting; Oliver, Sandy; Rickinson, Mark; Sasaki, Yutaka; Weissenbacher, Davy; McNaught, John

    2010-01-01

    The UK Education Evidence Portal (eep) provides a single, searchable point of access to the contents of the websites of 33 organizations relating to education, with the aim of revolutionizing work practices for the education community. Use of the portal alleviates the need to spend time searching multiple resources to find relevant information. However, the combined content of the websites of interest is still very large (over 500,000 documents and growing). This means that searches using the portal can produce very large numbers of hits. As users often have limited time, they would benefit from enhanced methods of performing searches and viewing results, allowing them to drill down to information of interest more efficiently, without having to sift through potentially long lists of irrelevant documents. The Joint Information Systems Committee (JISC)-funded ASSIST project has produced a prototype web interface to demonstrate the applicability of integrating a number of text-mining tools and methods into the eep, to facilitate an enhanced searching, browsing and document-viewing experience. New features include automatic classification of documents according to a taxonomy, automatic clustering of search results according to similar document content, and automatic identification and highlighting of key terms within documents. PMID:20643679

  2. Improve Reading with Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    The Common Core State Standards have cast a renewed light on reading instruction, presenting teachers with the new requirements to teach close reading of complex texts. Teachers and administrators should consider a number of essential features of close reading: They are short, complex texts; rich discussions based on worthy questions; revisiting…

  3. Text Mining Using Linear Models

    E-print Network

    Stine, Robert A.

    Statistical models for text include Markov chains, with transition probabilities for observed words, P(w_t | w_{t-1}), and hidden Markov models (HMMs), which have been successfully used in text mining, particularly part-of-speech tagging. An example sentence for tagging: "Jim bought 300 shares of Acme Corp in 2006."
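
    As a concrete illustration of HMM-based tagging, the toy tagger below decodes the most likely tag sequence with the Viterbi algorithm. The two-tag model and every probability are invented for this sketch; they are not estimates from any corpus or from the notes above.

      import math

      TAGS = ["NOUN", "VERB"]
      START = {"NOUN": 0.6, "VERB": 0.4}
      TRANS = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
               "VERB": {"NOUN": 0.8, "VERB": 0.2}}
      EMIT = {"NOUN": {"jim": 0.4, "shares": 0.4, "bought": 0.2},
              "VERB": {"jim": 0.1, "shares": 0.1, "bought": 0.8}}

      def viterbi(words):
          # Standard Viterbi decoding in log space: keep the best score and path per tag.
          best = {t: (math.log(START[t]) + math.log(EMIT[t][words[0]]), [t]) for t in TAGS}
          for word in words[1:]:
              new_best = {}
              for t in TAGS:
                  score, path = max(
                      (best[p][0] + math.log(TRANS[p][t]) + math.log(EMIT[t][word]),
                       best[p][1] + [t])
                      for p in TAGS)
                  new_best[t] = (score, path)
              best = new_best
          return max(best.values())[1]

      print(viterbi(["jim", "bought", "shares"]))  # -> ['NOUN', 'VERB', 'NOUN']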

  4. Understanding and Teaching Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2014-01-01

    Teachers in today's classrooms struggle every day to design instructional interventions that would build students' reading skills and strategies in order to ensure their comprehension of complex texts. Text complexity can be determined in both qualitative and quantitative ways. In this article, the authors describe various innovative…

  5. Towards Sustainable Text Concept Mapping

    ERIC Educational Resources Information Center

    Conlon, Tom

    2009-01-01

    Previous experimental studies have indicated that young people's text comprehension and summarisation skills can be improved by techniques based on text concept mapping (TCM). However, these studies have done little to elucidate a practical pedagogy that can make the techniques adoptable within the context of typical secondary school classrooms.…

  6. Learnability vs. Readability of Texts.

    ERIC Educational Resources Information Center

    Guthrie, John T.

    A distinction is made between the learnability and readability of text materials. Learnability refers to the extent to which new learning results from reading a passage; readability refers to the extent to which a passage is comprehended. Clearly, comprehension can occur without new learning. Classic readability formulas use text characteristics…

  7. Toward integrated scene text reading.

    PubMed

    Weinman, Jerod J; Butler, Zachary; Knoll, Dugan; Feild, Jacqueline

    2014-02-01

    The growth in digital camera usage combined with a worldly abundance of text has translated to a rich new era for a classic problem of pattern recognition, reading. While traditional document processing often faces challenges such as unusual fonts, noise, and unconstrained lexicons, scene text reading amplifies these challenges and introduces new ones such as motion blur, curved layouts, perspective projection, and occlusion among others. Reading scene text is a complex problem involving many details that must be handled effectively for robust, accurate results. In this work, we describe and evaluate a reading system that combines several pieces, using probabilistic methods for coarsely binarizing a given text region, identifying baselines, and jointly performing word and character segmentation during the recognition process. By using scene context to recognize several words together in a line of text, our system gives state-of-the-art performance on three difficult benchmark data sets. PMID:24356356

  8. Temporal reasoning over clinical text: the state of the art

    PubMed Central

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2013-01-01

    Objectives To provide an overview of the problem of temporal reasoning over clinical text and to summarize the state of the art in clinical natural language processing for this task. Target audience This overview targets medical informatics researchers who are unfamiliar with the problems and applications of temporal reasoning over clinical text. Scope We review the major applications of text-based temporal reasoning, describe the challenges for software systems handling temporal information in clinical text, and give an overview of the state of the art. Finally, we present some perspectives on future research directions that emerged during the recent community-wide challenge on text-based temporal reasoning in the clinical domain. PMID:23676245

  9. Text structures in medical text processing: empirical evidence and a text understanding prototype.

    PubMed Central

    Hahn, U.; Romacker, M.

    1997-01-01

    We consider the role of textual structures in medical texts. In particular, we examine the impact that the lacking recognition of text phenomena has on the validity of medical knowledge bases fed by a natural language understanding front-end. First, we review the results from an empirical study on a sample of medical texts, considering various forms of local coherence phenomena (anaphora and textual ellipses). We then discuss the representation bias emerging in the text knowledge base that is likely to occur when these phenomena are not dealt with, mainly the emergence of referentially incoherent and invalid representations. We then turn to a medical text understanding system designed to account for local text coherence. PMID:9357739

  10. Humans in Space: Summarizing the Medico-Biological Results of the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Risin, Diana; Stepaniak, P. C.; Grounds, D. J.

    2011-01-01

    As we celebrate the 50th anniversary of Gagarin's flight that opened the era of Humans in Space, we also commemorate the 30th anniversary of the Space Shuttle Program (SSP), which was triumphantly completed by the flight of STS-135 on July 21, 2011. These were great milestones in the history of Human Space Exploration. Many important questions regarding the ability of humans to adapt and function in space have been answered over the past 50 years and many lessons have been learned. A significant contribution to answering these questions was made by the SSP. To ensure the availability of the Shuttle Program experiences to the international space community, NASA has made a decision to summarize the medico-biological results of the SSP in a fundamental edition that is scheduled to be completed by the end of 2011 or the beginning of 2012. The goal of this edition is to define the normal responses of the major physiological systems to short-duration space flights and provide a comprehensive source of information for planning, ensuring successful operational activities, and for management of potential medical problems that might arise during future long-term space missions. The book includes the following sections: 1. History of Shuttle Biomedical Research and Operations; 2. Medical Operations Overview - Systems, Monitoring, and Care; 3. Biomedical Research Overview; 4. System-specific Adaptations/Responses, Issues, and Countermeasures; 5. Multisystem Issues and Countermeasures. In addition, selected operational documents will be presented in the appendices. The chapters are written by well-recognized experts in the appropriate fields, peer reviewed, and edited by physicians and scientists with extensive expertise in space medical operations and space-related biomedical research. As Space Exploration continues, the major question of whether humans are capable of adapting to long-term presence and adequate functioning in space habitats remains to be answered. We expect that the comprehensive review of the medico-biological results of the SSP, along with the data collected during the missions on the space stations (Mir and ISS), provides a good starting point in seeking the answer to this question.

  11. Text mining for systems modeling.

    PubMed

    Kowald, Axel; Schmeier, Sebastian

    2011-01-01

    The yearly output of scientific papers is constantly rising and often makes it impossible for the individual researcher to keep up. Text mining of scientific publications is, therefore, an interesting method to automate knowledge and data retrieval from the literature. In this chapter, we discuss specific tasks required for text mining, including their problems and limitations. The second half of the chapter demonstrates the various aspects of text mining using a practical example. Publications are transformed into a vector space representation and then support vector machines are used to classify papers depending on their content of kinetic parameters, which are required for model building in systems biology. PMID:21063956
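
    A minimal sketch of this workflow follows, under the assumption that a scikit-learn TF-IDF representation and a linear SVM are acceptable stand-ins for the chapter's exact features and classifier; the four labeled snippets are invented for illustration.

      # Vector space representation + SVM: separate papers that report kinetic
      # parameters from those that do not.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      texts = [
          "The Michaelis constant Km was 0.4 mM and kcat was 12 per second.",
          "We measured a rate constant of 3.2e-3 per second for the reaction.",
          "The protein localizes to the nucleus under stress conditions.",
          "Gene expression was profiled across twelve tissue types.",
      ]
      labels = [1, 1, 0, 0]  # 1 = contains kinetic parameters

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(texts, labels)
      print(clf.predict(["The enzyme showed a Km of 2 mM at pH 7."]))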

  12. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost, and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations, using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but were greatly reduced by using preprocessing techniques.

  13. Expert system for automatically correcting OCR output

    NASA Astrophysics Data System (ADS)

    Taghva, Kazem; Borsack, Julie; Condit, Allen

    1994-03-01

    This paper describes a new expert system for automatically correcting errors made by optical character recognition (OCR) devices. The system, which we call the post-processing system, is designed to improve the quality of text produced by an OCR device in preparation for subsequent retrieval from an information system. The system is composed of numerous parts: an information retrieval system, an English dictionary, a domain-specific dictionary, and a collection of algorithms and heuristics designed to correct as many OCR errors as possible. For the remaining errors that cannot be corrected, the system passes them on to a user-level editing program. This post-processing system can be viewed as part of a larger system that would streamline the steps of taking a document from its hard copy form to its usable electronic form, or it can be considered a stand-alone system for OCR error correction. An earlier version of this system has been used to process approximately 10,000 pages of OCR-generated text. Among the OCR errors discovered by this version, about 87% were corrected. We implement numerous new parts of the system, test this new version, and present the results.
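
    The dictionary component of such a post-processor can be approximated in a few lines: out-of-dictionary tokens are replaced with the closest dictionary word when a sufficiently similar candidate exists. This is a simplified stand-in, not the described expert system, which also uses an IR system, a domain-specific dictionary, and many heuristics; the word list and cutoff below are invented.

      import difflib
      import re

      DICTIONARY = {"report", "nuclear", "reactor", "pressure", "vessel", "the", "of"}

      def correct(token, dictionary=DICTIONARY, cutoff=0.8):
          # Leave known words and non-alphabetic tokens alone; otherwise substitute
          # the most similar dictionary word if it clears the similarity cutoff.
          word = token.lower()
          if word in dictionary or not word.isalpha():
              return token
          candidates = difflib.get_close_matches(word, dictionary, n=1, cutoff=cutoff)
          return candidates[0] if candidates else token

      ocr_line = "The pressurc vessel of the nuclcar reactor"
      print(" ".join(correct(t) for t in re.split(r"\s+", ocr_line)))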

  14. An Experimental Text-Commentary

    ERIC Educational Resources Information Center

    O'Brien, Joan

    1976-01-01

    An experimental text-commentary of selected passages from Sophocles'"Antigone" is described. The commentary is intended for students seeking more than a conventional translation who do not know enough Greek to use a standard commentary. (RM)

  15. Why is Light Text Harder to Read Than Dark Text?

    NASA Technical Reports Server (NTRS)

    Scharff, Lauren V.; Ahumada, Albert J.

    2005-01-01

    Scharff and Ahumada (2002, 2003) measured text legibility for light text and dark text. For paragraph readability and letter identification, responses to light text were slower and less accurate for a given contrast. Was this polarity effect (1) an artifact of our apparatus, (2) a physiological difference in the separate pathways for positive and negative contrast, or (3) the result of increased experience with dark text on light backgrounds? To rule out the apparatus-artifact hypothesis, all data were collected on one monitor. Its luminance was measured at all levels used, and the spatial effects of the monitor were reduced by pixel doubling and quadrupling (increasing the viewing distance to maintain constant angular size). Luminances of vertical and horizontal square-wave gratings were compared to assess display speed effects. They existed, even for 4-pixel-wide bars. Tests for polarity asymmetries in display speed were negative. Increased experience might develop full letter templates for dark text, while recognition of light letters is based on component features. Earlier, an observer ran all conditions at one polarity and then switched. If dark and light letters were intermixed, the observer might use component features on all trials and do worse on the dark letters, reducing the polarity effect. We varied polarity blocking (completely blocked, alternating smaller blocks, and intermixed blocks). Letter identification response times showed polarity effects at all contrasts and display resolution levels. Observers were also more accurate with higher contrasts and more pixels per degree. Intermixed blocks increased the polarity effect by reducing performance on the light letters, but only if the randomized block occurred prior to the nonrandomized block. Perhaps observers tried to use poorly developed templates, or they did not work as hard on the more difficult items. The experience hypothesis and the physiological gain hypothesis remain viable explanations.

  16. Text Format, Text Comprehension, and Related Reader Variables

    ERIC Educational Resources Information Center

    Nichols, Jodi L.

    2009-01-01

    This investigation explored relationships between format of text (electronic or print-based) and reading comprehension of adolescent readers. Also in question were potential influences on comprehension from related measures including academic placement of participants, gender, prior knowledge of the content, and overall reading ability. Influences…

  17. Text Structures, Readings, and Retellings: An Exploration of Two Texts

    ERIC Educational Resources Information Center

    Martens, Prisca; Arya, Poonam; Wilson, Pat; Jin, Lijun

    2007-01-01

    The purpose of this study is to explore the relationship between children's use of reading strategies and language cues while reading and their comprehension after reading two texts: "Cherries and Cherry Pits" (Williams, 1986) and "There's Something in My Attic" (Mayer, 1988). The data were drawn from a larger study of the reading strategies of…

  18. Automatic extraction of relationships between concepts based on ontology

    NASA Astrophysics Data System (ADS)

    Yuan, Yifan; Du, Junping; Yang, Yuehua; Zhou, Jun; He, Pengcheng; Cao, Shouxin

    This paper applies Chinese word segmentation technology to the automatic extraction and description of relationships between concepts. It takes text as a corpus, matches concept pairs by rules, and then describes the relationships between concepts using statistical methods. The paper implements an experiment based on text in the field of emergency response and optimizes part-of-speech tagging in light of the experimental results, so that the relations extracted are more meaningful for emergency response. It analyzes the display order of inquiries and formulates rules of response, making the results more meaningful. Consequently, the method turns out to be effective and can be flexibly extended to other areas.
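
    A language-agnostic sketch of the statistical step follows: after segmentation, concept pairs that co-occur in the same sentence are counted and the most frequent pairs become candidate relations. The concept list and the English sample sentences are illustrative stand-ins; the paper works on Chinese text in the emergency-response domain.

      from collections import Counter
      from itertools import combinations
      import re

      CONCEPTS = {"earthquake", "rescue", "evacuation", "shelter"}

      def candidate_relations(text):
          # Count sentence-level co-occurrences of known concepts.
          pair_counts = Counter()
          for sentence in re.split(r"[.!?]", text.lower()):
              found = sorted({c for c in CONCEPTS if c in sentence})
              for a, b in combinations(found, 2):
                  pair_counts[(a, b)] += 1
          return pair_counts.most_common()

      corpus = ("After the earthquake, rescue teams organised the evacuation. "
                "The evacuation moved residents to a shelter. "
                "Rescue workers supplied the shelter.")
      print(candidate_relations(corpus))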

  19. Multi-Orientation Scene Text Detection with Adaptive Clustering.

    PubMed

    Yin, Xu-Cheng; Pei, Wei-Yi; Zhang, Jun; Hao, Hong-Wei

    2015-09-01

    Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks, while most current research efforts only focus on horizontal or near horizontal scene text. In this paper, first we present a unified distance metric learning framework for adaptive hierarchical clustering, which can simultaneously learn similarity weights (to adaptively combine different feature similarities) and the clustering threshold (to automatically determine the number of clusters). Then, we propose an effective multi-orientation scene text detection system, which constructs text candidates by grouping characters based on this adaptive clustering. Our text candidates construction method consists of several sequential coarse-to-fine grouping steps: morphology-based grouping via single-link clustering, orientation-based grouping via divisive hierarchical clustering, and projection-based grouping also via divisive clustering. The effectiveness of our proposed system is evaluated on several public scene text databases, e.g., ICDAR Robust Reading Competition data sets (2011 and 2013), MSRA-TD500 and NEOCR. Specifically, on the multi-orientation text data set MSRA-TD500, the f measure of our system is 71 percent, much better than the state-of-the-art performance. We also construct and release a practical challenging multi-orientation scene text data set (USTB-SV1K), which is available at http://prir.ustb.edu.cn/TexStar/MOMV-text-detection/. PMID:26353137

  20. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    PubMed

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining methods do not perform well on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. A Vector Space Model (VSM) is used to calculate similarity, and the weights are designed to optimize the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
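
    The scoring idea can be sketched as plain TF-IDF ranking over course documents, with the top-scoring terms kept as knowledge points. Segmentation, POS tagging, and the paper's weight optimization are omitted, and the toy documents are invented for this sketch.

      import math
      import re
      from collections import Counter

      docs = [
          "pointers and arrays in the c programming language",
          "for loops and while loops control repetition",
          "function pointers pass functions as arguments",
      ]

      def tokenize(doc):
          return re.findall(r"[a-z]+", doc.lower())

      tokenized = [tokenize(d) for d in docs]
      df = Counter(term for doc in tokenized for term in set(doc))  # document frequency
      n_docs = len(docs)

      def knowledge_points(doc_tokens, top_k=3):
          # Rank terms in one document by TF-IDF and keep the top_k as knowledge points.
          tf = Counter(doc_tokens)
          scores = {t: (tf[t] / len(doc_tokens)) * math.log(n_docs / df[t])
                    for t in tf}
          return sorted(scores, key=scores.get, reverse=True)[:top_k]

      for doc, toks in zip(docs, tokenized):
          print(doc, "->", knowledge_points(toks))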

  1. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining methods do not perform well on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. A Vector Space Model (VSM) is used to calculate similarity, and the weights are designed to optimize the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738

  2. Automatic Contour Tracking in Ultrasound Images

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a new automatic contour tracking system, EdgeTrak, for the ultrasound image sequences of human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In…

  3. Integrating Automatic Genre Analysis into Digital Libraries.

    ERIC Educational Resources Information Center

    Rauber, Andreas; Muller-Kogler, Alexander

    With the number and types of documents in digital library systems increasing, tools for automatically organizing and presenting the content have to be found. While many approaches focus on topic-based organization and structuring, hardly any system incorporates automatic structural analysis and representation. Yet, genre information…

  4. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Automatic operations. 87.219 Section 87.219 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES AVIATION SERVICES Aeronautical Advisory Stations (Unicoms) § 87.219 Automatic operations. (a) A station operator need not...

  5. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Automatic operations. 87.219 Section 87.219 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES AVIATION SERVICES Aeronautical Advisory Stations (Unicoms) § 87.219 Automatic operations. (a) A station operator need not...

  6. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Automatic operations. 87.219 Section 87.219 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES AVIATION SERVICES Aeronautical Advisory Stations (Unicoms) § 87.219 Automatic operations. (a) A station operator need not...

  7. AUTOMATIC DETECTION OF CORRUPT SPECTROGRAPHIC FEATURES FOR

    E-print Network

    Stern, Richard

    In noisy conditions, the performance of automatic speech recognition systems degrades significantly. There have been many algorithms proposed. In order to derive the features in the voiced speech regions, a new pitch tracking algorithm is proposed…

  8. Automatic data editing: a brief introduction

    SciTech Connect

    Liepins, G.E.

    1982-01-01

    This paper briefly discusses the automatic data editing process: (1) checking the data records for consistency, and (2) analyzing the inconsistent records to determine the inconsistent variables. It is stated that the application of automatic data editing is broad, and two specific examples are cited. One example, that of a vehicle maintenance data base, is used to illustrate the process.

  9. Automatic star-horizon angle measurement system

    NASA Technical Reports Server (NTRS)

    Koerber, K.; Koso, D. A.; Nardella, P. C.

    1969-01-01

    An automatic star-horizon angle measuring aid for general navigational use incorporates an Apollo-type sextant. The eyepiece of the sextant is replaced with two light detectors and appropriate circuitry. The device automatically determines the angle between a navigational star and a unique point on the earth's horizon as seen from a spacecraft.

  10. Comparing Human and Automatic Face Recognition Performance

    E-print Network

    Schuckers, Michael E.

    Automatic face recognition (AFR) technologies have seen dramatic improvements in performance. This work describes methods to compare the performance of different biometric matchers; face recognition performance was tested…

  11. Interactive Graphic Design Using Automatic Presentation Knowledge

    E-print Network

    Derthick, Mark

    This work describes a knowledge-based presentation system that automatically designs graphics and also interprets a user's specifications. Two views of design are conveyed: design as a constructive process of selecting and arranging graphical elements, and design as a process…

  12. Automatic Grading of Spreadsheet and Database Skills

    ERIC Educational Resources Information Center

    Kovacic, Zlatko J.; Green, John Steven

    2012-01-01

    Growing enrollment in distance education has increased student-to-lecturer ratios and, therefore, increased the workload of the lecturer. This growing enrollment has resulted in mounting efforts to develop automatic grading systems in an effort to reduce this workload. While research in the design and development of automatic grading systems has a…

  13. 28 CFR 17.28 - Automatic declassification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Automatic declassification. 17.28 Section 17.28 Judicial Administration DEPARTMENT OF JUSTICE CLASSIFIED NATIONAL SECURITY INFORMATION AND ACCESS TO CLASSIFIED INFORMATION Classified Information § 17.28 Automatic declassification. (a) Subject to paragraph (b) of this section, all...

  14. 6 CFR 7.28 - Automatic declassification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Automatic declassification. 7.28 Section 7.28 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CLASSIFIED NATIONAL SECURITY INFORMATION Classified Information § 7.28 Automatic declassification. (a) Subject to paragraph (b) of this section, all classified...

  15. 28 CFR 17.28 - Automatic declassification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Automatic declassification. 17.28 Section 17.28 Judicial Administration DEPARTMENT OF JUSTICE CLASSIFIED NATIONAL SECURITY INFORMATION AND ACCESS TO CLASSIFIED INFORMATION Classified Information § 17.28 Automatic declassification. (a) Subject to paragraph (b) of this section, all...

  16. 6 CFR 7.28 - Automatic declassification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Automatic declassification. 7.28 Section 7.28 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CLASSIFIED NATIONAL SECURITY INFORMATION Classified Information § 7.28 Automatic declassification. (a) Subject to paragraph (b) of this section, all classified...

  17. Automatic Data Layout for High Performance Fortran

    E-print Network

    Kremer, Ulrich

    After the algorithm selection, the data layout choice is the key intellectual step in writing an efficient HPF program…

  18. Automatic Data Layout Distributed Memory Machines

    E-print Network

    Kremer, Ulrich

    The goal of languages like Fortran D…

  19. Automatic Item Generation of Probability Word Problems

    ERIC Educational Resources Information Center

    Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina

    2009-01-01

    Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems…

  20. GRAPHICAL MODELS AND AUTOMATIC SPEECH RECOGNITION

    E-print Network

    Noble, William Stafford

    Graphical models provide a promising paradigm to study both existing and novel techniques for automatic speech recognition; much of what is used as part of a speech recognition system can be described by a graph, including Gaussian distributions…

  1. Automatic Audio and Lyrics Alignment DIPLOMARBEIT

    E-print Network

    Widmer, Gerhard

    One kind of information that can be shown for the song currently played is the lyrics; that is what my diploma thesis deals with. The goal is to provide a program that is able to automatically align the lyrics to the audio signal…

  2. Text mining in livestock animal science: introducing the potential of text mining to animal sciences.

    PubMed

    Sahadevan, S; Hofmann-Apitius, M; Schellander, K; Tesfaye, D; Fluck, J; Friedrich, C M

    2012-10-01

    In biological research, establishing the prior art by searching and collecting information already present in the domain is as important as the experiments themselves. To obtain a complete overview of the relevant knowledge, researchers mainly rely on 2 major information sources: i) various biological databases and ii) scientific publications in the field. The major difference between the 2 information sources is that information from databases is available, typically well structured and condensed. The information content in scientific literature is vastly unstructured; that is, dispersed among the many different sections of scientific text. The traditional method of information extraction from scientific literature occurs by generating a list of relevant publications in the field of interest and manually scanning these texts for relevant information, which is very time-consuming. It is more than likely that in using this "classical" approach the researcher misses some relevant information mentioned in the literature or has to go through biological databases to extract further information. Text mining and named entity recognition methods have already been used in human genomics and related fields as a solution to this problem. These methods can process and extract information from large volumes of scientific text. Text mining is defined as the automatic extraction of previously unknown and potentially useful information from text. Named entity recognition (NER) is defined as the method of identifying named entities (names of real-world objects; for example, gene/protein names, drugs, enzymes) in text. In animal sciences, text mining and related methods have been briefly used in murine genomics and associated fields, leaving behind other fields of animal sciences, such as livestock genomics. The aim of this work was to develop an information retrieval platform in the livestock domain focusing on livestock publications and the recognition of relevant data from cattle and pigs. For this purpose, the rather noncomprehensive resources of pig and cattle gene and protein terminologies were enriched with orthologue synonyms and integrated into the NER platform ProMiner, which is used successfully in the human genomics domain. Based on the performance tests done, the present system achieved a fair performance with precision 0.64, recall 0.74, and an F1 measure of 0.69 in a test scenario based on cattle literature. PMID:22665627
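
    Dictionary-based NER of the kind ProMiner performs can be illustrated with a two-entry gene dictionary: synonyms are mapped to a canonical identifier and matched in text. The dictionary and sentence below are invented; the real cattle and pig terminologies are orders of magnitude larger.

      import re

      GENE_DICT = {
          "MSTN": ["myostatin", "gdf8", "mstn"],
          "DGAT1": ["dgat1", "diacylglycerol o-acyltransferase 1"],
      }

      def recognize_genes(text, gene_dict=GENE_DICT):
          # Return (gene_id, start, end) for every synonym match, sorted by position.
          hits = []
          lowered = text.lower()
          for gene_id, synonyms in gene_dict.items():
              for syn in synonyms:
                  for match in re.finditer(r"\b" + re.escape(syn) + r"\b", lowered):
                      hits.append((gene_id, match.start(), match.end()))
          return sorted(hits, key=lambda h: h[1])

      sentence = ("A mutation in the myostatin (GDF8) gene causes double muscling "
                  "in cattle, and DGAT1 variants affect milk fat.")
      print(recognize_genes(sentence))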

  3. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part of speech tagger. Large image repositories are queried with these terms. Per term representative colors are extracted using the collected images. Hereto, we either use a histogram-based or a mean shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL Divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images and the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
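
    A bare-bones version of the histogram-based option follows: pixels from the images retrieved for a term are quantized into coarse RGB bins and the center of the most populated bin is returned as the representative color. Pillow and NumPy are assumed, and the bin count is an illustrative choice; the full system additionally corrects for the non-uniform color distribution of large repositories.

      import numpy as np
      from PIL import Image

      def representative_color(image_paths, bins_per_channel=8):
          # Accumulate a coarse 3-D RGB histogram over all images for a term.
          edges = np.linspace(0, 256, bins_per_channel + 1)
          hist = np.zeros((bins_per_channel,) * 3)
          for path in image_paths:
              pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
              idx = np.clip(np.digitize(pixels, edges) - 1, 0, bins_per_channel - 1)
              np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
          # Return the center of the most populated bin as (R, G, B).
          r, g, b = np.unravel_index(hist.argmax(), hist.shape)
          centers = (edges[:-1] + edges[1:]) / 2
          return tuple(int(centers[c]) for c in (r, g, b))

      # Example (the file names are placeholders for images returned by an image search):
      # print(representative_color(["grass1.jpg", "grass2.jpg"]))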

  4. GPU-Accelerated Text Mining

    SciTech Connect

    Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E

    2009-01-01

    Accelerating hardware devices hold novel promise for improving performance in many problem domains, but it is not clear which accelerators are suitable for which domains. While there is no room in general-purpose processor design to significantly increase the processor frequency, developers are instead resorting to multi-core chips duplicating conventional computing capabilities on a single die. Yet, accelerators offer more radical designs with a much higher level of parallelism and novel programming environments. The present work assesses the viability of text mining on CUDA. Text mining is one of the key concepts that has become prominent as an effective means to index the Internet, but its applications range beyond this scope and extend to providing document similarity metrics, the subject of this work. We have developed and optimized text search algorithms for GPUs to exploit their potential for massive data processing. We discuss the algorithmic challenges of parallelization for text search problems on GPUs and demonstrate the potential of these devices in experiments by reporting significant speedups. Our study may be one of the first to assess more complex text search problems for suitability for GPU devices, and it may also be one of the first to exploit and report on atomic instruction usage that has recently become available in NVIDIA devices.

  5. 46 CFR 15.816 - Automatic radar plotting aids (ARPAs).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...2014-10-01 2014-10-01 false Automatic radar plotting aids (ARPAs). 15.816 ...Computations § 15.816 Automatic radar plotting aids (ARPAs). Every person...seagoing vessels equipped with automatic radar plotting aids (ARPAs), except...

  6. 5 CFR 831.502 - Automatic separation; exemption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 2012-01-01 false Automatic separation; exemption. 831.502 Section...Retirement § 831.502 Automatic separation; exemption. (a) When an employee...a month, he is subject to automatic separation at the end of that month. The...

  7. 5 CFR 831.502 - Automatic separation; exemption.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 2014-01-01 false Automatic separation; exemption. 831.502 Section...Retirement § 831.502 Automatic separation; exemption. (a) When an employee...a month, he is subject to automatic separation at the end of that month. The...

  8. 5 CFR 831.502 - Automatic separation; exemption.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 2013-01-01 false Automatic separation; exemption. 831.502 Section...Retirement § 831.502 Automatic separation; exemption. (a) When an employee...a month, he is subject to automatic separation at the end of that month. The...

  9. 30 CFR 77.314 - Automatic temperature control instruments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 2010-07-01 false Automatic temperature control instruments. 77.314 Section...Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal...

  10. 30 CFR 77.314 - Automatic temperature control instruments.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 2011-07-01 false Automatic temperature control instruments. 77.314 Section...Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal...

  11. 30 CFR 77.314 - Automatic temperature control instruments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 2012-07-01 false Automatic temperature control instruments. 77.314 Section...Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal...

  12. 30 CFR 77.314 - Automatic temperature control instruments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 2013-07-01 false Automatic temperature control instruments. 77.314 Section...Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal...

  13. 30 CFR 77.314 - Automatic temperature control instruments.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 2014-07-01 false Automatic temperature control instruments. 77.314 Section...Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal...

  14. SPEECH PARAMETERIZATION FOR AUTOMATIC SPEECH RECOGNITION IN NOISY CONDITIONS

    E-print Network

    The goal is to improve the robustness of automatic speech recognition (ASR) systems against additive background noise by finding speech parameters that are robust to noise. State-of-the-art ASR systems are capable…

  15. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the NASA Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog is a trademark of Borland International. IBM PC and PC DOS are registered trademarks of International Business Machines Corporation.

  16. Mobile Text Messaging for Health: A Systematic Review of Reviews

    PubMed Central

    Hall, Amanda K.; Cole-Lewis, Heather; Bernhardt, Jay M.

    2015-01-01

    The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. PMID:25785892

  17. Mobile text messaging for health: a systematic review of reviews.

    PubMed

    Hall, Amanda K; Cole-Lewis, Heather; Bernhardt, Jay M

    2015-03-18

    The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. PMID:25785892

  18. Finding text in color images

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
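
    A minimal sketch of clustering on joint color and spatial features, in the spirit of the algorithm described above: each pixel is represented by its RGB values plus scaled (x, y) coordinates before k-means. The cluster count and spatial weight are illustrative assumptions, not the authors' parameters.

      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_pixels(image, n_clusters=4, spatial_weight=0.5):
          """Cluster pixels on joint (R, G, B, y, x) features so that spatially
          close pixels of similar color tend to fall into the same cluster."""
          h, w, _ = image.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # Scale coordinates into the color range so neither term dominates.
          coords = np.stack([ys / h * 255.0, xs / w * 255.0], axis=-1) * spatial_weight
          features = np.concatenate([image.astype(float), coords], axis=-1).reshape(-1, 5)
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
          return labels.reshape(h, w)

      # Tiny synthetic example: a bright "text" strip on a dark background.
      img = np.zeros((32, 64, 3), dtype=np.uint8)
      img[12:20, 8:56] = 255
      print(np.unique(cluster_pixels(img)))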

  19. Mapping text with phrase nets.

    PubMed

    van Ham, Frank; Wattenberg, Martin; Viégas, Fernanda B

    2009-01-01

    We present a new technique, the phrase net, for generating visual overviews of unstructured text. A phrase net displays a graph whose nodes are words and whose edges indicate that two words are linked by a user-specified relation. These relations may be defined either at the syntactic or lexical level; different relations often produce very different perspectives on the same text. Taken together, these perspectives often provide an illuminating visual overview of the key concepts and relations in a document or set of documents. PMID:19834186
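
    As an illustration of a lexical-level relation of the kind mentioned above, the sketch below counts word pairs matched by a user-specified pattern such as "X of Y"; the pattern and the counting representation are assumptions for the example, not the authors' implementation.

      import re
      from collections import Counter

      def phrase_net(text, pattern=r"\b(\w+) of (\w+)\b"):
          """Return a Counter of (x, y) edges for every word pair linked by the pattern."""
          edges = Counter()
          for x, y in re.findall(pattern, text.lower()):
              edges[(x, y)] += 1
          return edges

      sample = "The birth of tragedy, the death of kings, and the death of heroes."
      for (x, y), n in phrase_net(sample).items():
          print(f"{x} -> {y}  (count {n})")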

  20. Text-mining and information-retrieval services for molecular biology

    PubMed Central

    Krallinger, Martin; Valencia, Alfonso

    2005-01-01

    Text-mining in molecular biology - defined as the automatic extraction of information about genes, proteins and their functional relationships from text documents - has emerged as a hybrid discipline on the edges of the fields of information science, bioinformatics and computational linguistics. A range of text-mining applications have been developed recently that will improve access to knowledge for biologists and database annotators. PMID:15998455

  1. The Shifting Sands in the Effects of Source Text Summarizability on Summary Writing

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2009-01-01

    This paper reports the effects of the properties of source texts on summarization. One hundred and fifty-seven undergraduates were asked to write summaries of one of three extended English texts of similar length and readability, but differing in other discoursal features such as lexical diversity and macro-organization. The effects of…

  2. Automatic transmission for a vehicle

    SciTech Connect

    Moroto, S.; Sakakibara, S.

    1986-12-09

    An automatic transmission is described for a vehicle, comprising: a coupling means having an input shaft and an output shaft; a belt type continuously-variable speed transmission system having an input pulley mounted coaxially on a first shaft, an output pulley mounted coaxially on a second shaft and a belt extending between the first and second pulleys to transfer power, each of the first and second pulleys having a fixed sheave and a movable sheave. The first shaft is disposed coaxially with and rotatably coupled with the output shaft of the coupling means, the second shaft being disposed side by side and in parallel with the first shaft; a planetary gear mechanism; a forward-reverse changeover mechanism and a low-high speed changeover mechanism.

  3. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization is considered of nested loops through the use of known program transformations in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
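
    To make the strip-mining idea above concrete, the sketch below shows a blocked (tiled) matrix multiply in which every loop is strip-mined so the active tiles stay small; the block size is an illustrative assumption, and a production version would live in a compiled language rather than Python.

      import numpy as np

      def matmul_blocked(A, B, block=32):
          """Blocked matrix multiply: the i, j, k loops are strip-mined so the
          working set of A, B, and C tiles is small enough to stay cache-resident."""
          n, k = A.shape
          k2, m = B.shape
          assert k == k2
          C = np.zeros((n, m))
          for i0 in range(0, n, block):
              for j0 in range(0, m, block):
                  for k0 in range(0, k, block):
                      C[i0:i0 + block, j0:j0 + block] += (
                          A[i0:i0 + block, k0:k0 + block] @ B[k0:k0 + block, j0:j0 + block]
                      )
          return C

      A, B = np.random.rand(100, 80), np.random.rand(80, 60)
      assert np.allclose(matmul_blocked(A, B), A @ B)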

  4. Automatic Sequencing for Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  5. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  6. Automatic Mechetronic Wheel Light Device

    DOEpatents

    Khan, Mohammed John Fitzgerald (Silver Spring, MD)

    2004-09-14

    A wheel lighting device for illuminating a wheel of a vehicle to increase safety and enhance aesthetics. The device produces the appearance of a "ring of light" on a vehicle's wheels as the vehicle moves. The "ring of light" can automatically change in color and/or brightness according to a vehicle's speed, acceleration, jerk, selection of transmission gears, and/or engine speed. The device provides auxiliary indicator lights by producing light in conjunction with a vehicle's turn signals, hazard lights, alarm systems, etc. The device comprises a combination of mechanical and electronic components and can be placed on the outer or inner surface of a wheel or made integral to a wheel or wheel cover. The device can be configured for all vehicle types, and is electrically powered by a vehicle's electrical system and/or battery.

  7. Automatic Detection of Terminology Evolution

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Nina

    As archives contain documents that span over a long period of time, the language used to create these documents and the language used for querying the archive can differ. This difference is due to evolution in both terminology and semantics and will cause a significant number of relevant documents to be omitted. A static solution is to use query expansion based on explicit knowledge banks such as thesauri or ontologies. However, as we are able to archive resources with more varied terminology, it will be infeasible to use only explicit knowledge for this purpose. There exist few or no thesauri covering very domain-specific terminologies or slang as used in blogs etc. In this Ph.D. thesis we focus on automatically detecting terminology evolution in a completely unsupervised manner as described in this technical paper.

  8. Automatic Nanodesign Using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many problems associated with the development of nanotechnology require custom designed molecules. We use genetic graph software, a new development, to automatically evolve molecules of interest when only the requirements are known. Genetic graph software designs molecules, and potentially nanoelectronic circuits, given a fitness function that determines which of two molecules is better. A set of molecules, the first generation, is generated at random and then tested with the fitness function. Subsequent generations are created by randomly choosing two parent molecules with a bias towards high-scoring molecules, tearing each molecule in two at random, and mating parts from the mother and father to create two children. This procedure is repeated until a satisfactory molecule is found. An atom pair similarity test is currently used as the fitness function to evolve molecules similar to existing pharmaceuticals.
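
    The generational loop described above (random first generation, fitness-biased parent choice, tearing and recombining) is the standard genetic-algorithm pattern. The sketch below evolves bit strings rather than molecular graphs, so the representation, fitness function, and mutation rate are purely illustrative.

      import random

      def evolve(fitness, length=20, pop_size=30, generations=50):
          """Plain genetic algorithm: fitness-biased parent selection, one-point
          crossover ("tearing" each parent in two), and occasional mutation."""
          pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
          for _ in range(generations):
              ranked = sorted(pop, key=fitness, reverse=True)
              parents = ranked[:pop_size // 2]            # bias toward high scores
              children = []
              while len(children) < pop_size:
                  mom, dad = random.sample(parents, 2)
                  cut = random.randrange(1, length)       # tear each parent in two
                  child = mom[:cut] + dad[cut:]
                  if random.random() < 0.1:               # small mutation rate
                      i = random.randrange(length)
                      child[i] ^= 1
                  children.append(child)
              pop = children
          return max(pop, key=fitness)

      best = evolve(fitness=sum)   # toy fitness: maximize the number of 1 bits
      print(best, sum(best))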

  9. Automatic toilet seat lowering apparatus

    DOEpatents

    Guerty, Harold G. (Palm Beach Gardens, FL)

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  10. Reviving "Walden": Mining the Text.

    ERIC Educational Resources Information Center

    Hewitt, Julia

    2000-01-01

    Describes how the author and her high school English students begin their study of Thoreau's "Walden" by mining the text for quotations to inspire their own writing and discussion on the topic, "How does Thoreau speak to you or how could he speak to someone you know?" (SR)

  11. Applied Text Generation* Owen Rambow

    E-print Network

    The Joyce text generation system was developed as part of the user interface of the software design environment Ulysses (Korelsky and Ulysses Staff 1988; Rosenthal et al. 1988). The following design goals were set for it in the Ulysses user interface: ... Ulysses includes a graphical ...

  12. Teaching Drama: Text and Performance.

    ERIC Educational Resources Information Center

    Brown, Joanne

    Because playwrights are limited to textual elements that an audience can hear and see--dialogue and movement--much of a drama's tension and interest lie in the subtext, the characters' emotions and motives implied but not directly expressed by the text itself. The teacher must help students construct what in a novel the author may have made more…

  13. Solar Concepts: A Background Text.

    ERIC Educational Resources Information Center

    Gorham, Jonathan W.

    This text is designed to provide teachers, students, and the general public with an overview of key solar energy concepts. Various energy terms are defined and explained. Basic thermodynamic laws are discussed. Alternative energy production is described in the context of the present energy situation. Described are the principal contemporary solar…

  14. AUTOMATIC ANATOMY RECOGNITION VIA FUZZY OBJECT MODELS

    E-print Network

    Ciesielski, Krzysztof Chris

    Jayaram K. Udupa, Dewey Odhner, Alexandre ... In radiological practice, computerized automatic anatomy recognition (AAR) during radiological image reading ...

  15. AN INFORMATION-THEORETIC APPROACH TO SONAR AUTOMATIC TARGET RECOGNITION

    E-print Network

    Slatton, Clint

    An Information-Theoretic Approach to Sonar Automatic Target Recognition, by Rodney Alberto Morejon. Contents include: Sonar Automatic Target Recognition (ATR); Underwater Sonar Image Characterization.

  16. Automatic Fringe Detection for Oil Film Interferometry Measurement of Skin Friction

    NASA Technical Reports Server (NTRS)

    Naughton, Jonathan W.; Decker, Robert K.; Jafari, Farhad

    2001-01-01

    This report summarizes two years of work on investigating algorithms for automatically detecting fringe patterns in images acquired using oil-drop interferometry for the determination of skin friction. Several different analysis methods were tested, and a combination of a windowed Fourier transform followed by a correlation was found to be most effective. The implementation of this method is discussed and details of the process are described. The results indicate that this method shows promise for automating the fringe detection process, but further testing is required.
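
    A rough sketch of the windowed-Fourier step mentioned above: the dominant spatial frequency of an intensity profile taken across the fringes can be read off the peak of its windowed spectrum. The synthetic fringe spacing and the Hann window are assumptions; the report's actual processing chain, including the follow-on correlation, is more involved.

      import numpy as np

      def fringe_frequency(profile):
          """Estimate the dominant spatial frequency (cycles/pixel) of a fringe
          profile from the peak of its windowed Fourier spectrum."""
          x = profile - profile.mean()
          x = x * np.hanning(len(x))                  # window to reduce spectral leakage
          spectrum = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(len(x), d=1.0)
          return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

      # Synthetic fringes with 20-pixel spacing (0.05 cycles/pixel) plus noise.
      xs = np.arange(512)
      profile = 1 + np.cos(2 * np.pi * 0.05 * xs) + 0.1 * np.random.randn(512)
      print(round(1 / fringe_frequency(profile)))     # approximately 20 pixels per fringe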

  17. Direct Measurement And On-Line Automatic Interpretation Of Breast Thermographs

    NASA Astrophysics Data System (ADS)

    Milbrath, John R.; Schlager, Ken J.

    1980-08-01

    A new medical thermographic instrumentation system provides for direct measurement of human body surface temperatures and automatic on-line diagnostic interpretation by a microcomputer. This system directly measures 128 surface temperature areas with an infrared scanner array of 64 thermopile sensors. This thermal data is fed to a microcomputer which executes a diagnostic pattern recognition algorithm and prints out a report which summarizes the measurement data. Initial results indicate both a sensitivity and specificity of 80%. However, 80% of the women with cancer had Stage II or greater disease. An improved design of the initial prototype has been constructed.

  18. Diversity in Geoscience Degrees and Academic Careers, U.S.A. 2004 Summarized by T. Jordan, Earth & Atmospheric Sciences

    E-print Network

    Mahowald, Natalie

    ... in the US earn undergraduate degrees in geosciences (inclusive of earth, atmospheric, and ocean sciences). In 2001, the percentage of BS/BA degrees among 3,968 total graduates was broken down by group in geosciences as a percent of total.

  19. The Effect of a Summarization-Based Cumulative Retelling Strategy on Listening Comprehension of College Students with Visual Impairments

    ERIC Educational Resources Information Center

    Tuncer, A. Tuba; Altunay, Banu

    2006-01-01

    Because students with visual impairments need auditory materials in order to access information, listening comprehension skills are important to their academic success. The present study investigated the effectiveness of summarization-based cumulative retelling strategy on the listening comprehension of four visually impaired college students. An…

  20. Abstract--This paper summarizes recent work towards an advanced cellular packet data system using a multicarrier-based

    E-print Network

    Abstract-- This paper summarizes recent work towards an advanced cellular packet data system using of user requirements and a high spectral efficiency in different deployment and usage scenarios; two goals that are often contradictory and difficult to combine. The medium access control (MAC) system layer plays

  1. Plasmid Transformation into DH5alpha E.coli cells using Heat Shock (by Manish, summarized from LIFE technologies protocol)

    E-print Network

    Raizada, Manish N.

    (By Manish, summarized from the LIFE Technologies protocol.) ... uL of antibiotic on top of the media with a spreader in sterile conditions ... minutes. 6. Heat shock for 20 seconds at 37C. 7. Place back on ice for 2 minutes. 8. Add 950 uL of SOC.

  2. Statement Summarizing Research Findings on the Issue of the Relationship Between Food-Additive-Free Diets and Hyperkinesis in Children.

    ERIC Educational Resources Information Center

    Lipton, Morris; Wender, Esther

    The National Advisory Committee on Hyperkinesis and Food Additives paper summarized some research findings on the issue of the relationship between food-additive-free diets and hyperkinesis in children. Based on several challenge studies, it is concluded that the evidence generally refutes Dr. B. F. Feingold's claim that artificial colorings in…

  3. Pavement Smoothness for Illinois DOT -Doug Dirks 1. Briefly summarize your current pavement smoothness requirements. See below.

    E-print Network

    1. Briefly summarize your current pavement smoothness requirements. N/A; Illinois has both standard specifications and a special provision for pavement smoothness. ...-Depth HMA pavements and PCC pavements are all included in this special provision. http

  4. TU Wien : Vision 2025+ Summarizing the results of the Week of Workshops, 2nd to 6th of March 2015

    E-print Network

    Arnold, Anton

    A summary of the workshop week held from the 2nd to the 6th of March 2015. How do we see the university, nationally and internationally?

  5. NewsInEssence: Summarizing Online News Topics

    E-print Network

    Radev, Dragomir R.

    Dragomir Radev, Jahna Otterbacher, Adam Winkel, Sasha Blair-Goldensohn. ... to continue, according to a recent Forrester report [1]. NewsInEssence (NIE) gathers and summarizes related online news articles: given a user's topic specification (indicated via an example article or keywords), NIE searches across dozens of news sites to collect a group, or cluster, of related articles.

  6. A unified framework for multioriented text detection and recognition.

    PubMed

    Yao, Cong; Bai, Xiang; Liu, Wenyu

    2014-11-01

    High level semantics embodied in scene texts are both rich and clear and thus can serve as important cues for a wide range of vision applications, for instance, image understanding, image indexing, video search, geolocation, and automatic navigation. In this paper, we present a unified framework for text detection and recognition in natural images. The contributions of this paper are threefold: 1) text detection and recognition are accomplished concurrently using exactly the same features and classification scheme; 2) in contrast to methods in the literature, which mainly focus on horizontal or near-horizontal texts, the proposed system is capable of localizing and reading texts of varying orientations; and 3) a new dictionary search method is proposed, to correct the recognition errors usually caused by confusions among similar yet different characters. As an additional contribution, a novel image database with texts of different scales, colors, fonts, and orientations in diverse real-world scenarios, is generated and released. Extensive experiments on standard benchmarks as well as the proposed database demonstrate that the proposed system achieves highly competitive performance, especially on multioriented texts. PMID:25203989

  7. Comprehending Technical Texts: Predicting and Defining Unfamiliar Terms Noemie Elhadad, Ph.D.

    E-print Network

    ... to medical literature for health consumers. Our focus is on medical terminology. We present a method to predict automatically in a given text which medical terms are unlikely to be understood by a lay reader ... comprehension of sentences containing technical medical terms. The field of health literacy has ...

  8. A Spoken Access Approach for Chinese Text and Speech Information Retrieval.

    ERIC Educational Resources Information Center

    Chien, Lee-Feng; Wang, Hsin-Min; Bai, Bo-Ren; Lin, Sun-Chein

    2000-01-01

    Presents an efficient spoken-access approach for both Chinese text and Mandarin speech information retrieval. Highlights include human-computer interaction via voice input, speech query recognition at the syllable level, automatic term suggestion, relevance feedback techniques, and experiments that show an improvement in the effectiveness of…

  9. User-Driven Development of Text Mining Resources for Cancer Risk Assessment

    E-print Network

    Korhonen, Anna

    Lin Sun, Anna Korhonen. ... abstracts. We report promising results with inter-annotator agreement tests and automatic classification ... defined, accurate, and applicable to a real-world CRA scenario. We discuss extending and refining the taxonomy

  10. Integration of Text and Audio Features for Genre Classification in Music Information

    E-print Network

    Rauber,Andreas

    ... be a song's audio features as well as its lyrics. Both of these modalities have their advantages, as text may ..., and browsing access by perceived sound similarity. Song lyrics cover semantic information about a song. ... i.e., automatically assigning musical genres to tracks based on audio features as well as content words in song lyrics.

  11. Orthographic Knowledge Important in Comprehending Elementary Chinese Text by Users of Alphasyllabaries

    ERIC Educational Resources Information Center

    Leong, Che Kan; Tse, Shek Kam; Loh, Ka Yee; Ki, Wing Wah

    2011-01-01

    Orthographic knowledge in Chinese was hypothesized to affect elementary Chinese text comprehension (four essays) by 80 twelve-year-old ethnic alphasyllabary language users compared with 74 native Chinese speakers at similar reading level. This was tested with two rapid automatized naming tasks; two working memory tasks; three orthographic…

  12. On the implementation of automatic differentiation tools.

    SciTech Connect

    Bischof, C. H.; Hovland, P. D.; Norris, B.; Mathematics and Computer Science; Aachen Univ. of Technology

    2008-01-01

    Automatic differentiation is a semantic transformation that applies the rules of differential calculus to source code. It thus transforms a computer program that computes a mathematical function into a program that computes the function and its derivatives. Derivatives play an important role in a wide variety of scientific computing applications, including numerical optimization, solution of nonlinear equations, sensitivity analysis, and nonlinear inverse problems. We describe the forward and reverse modes of automatic differentiation and provide a survey of implementation strategies. We describe some of the challenges in the implementation of automatic differentiation tools, with a focus on tools based on source transformation. We conclude with an overview of current research and future opportunities.
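
    The forward mode mentioned above can be sketched with dual numbers: each value carries its derivative, and the rules of calculus are applied operation by operation. This is a minimal operator-overloading illustration, not one of the source-transformation tools the abstract surveys.

      class Dual:
          """Value/derivative pair for forward-mode automatic differentiation."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot

          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.dot + other.dot)

          __radd__ = __add__

          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.dot * other.val + self.val * other.dot)  # product rule

          __rmul__ = __mul__

      def derivative(f, x):
          """Evaluate f and df/dx at x by seeding the derivative with 1."""
          return f(Dual(x, 1.0)).dot

      # d/dx (x*x*x + 2*x) = 3*x^2 + 2, which is 14 at x = 2
      print(derivative(lambda x: x * x * x + 2 * x, 2.0))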

  13. Automatic Operation For A Robot Lawn Mower

    NASA Astrophysics Data System (ADS)

    Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.

    1987-02-01

    A domestic mobile robot, a lawn mower, which performs in automatic operation mode, has been built in the Center of Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work with the region filling operation, a new kind of path planning for mobile robots. Some strategies for region filling of path planning have been developed for a partly-known or an unknown environment. Also, an advanced omnidirectional navigation system and a multisensor-based control system are used in the automatic operation. Research on the robot lawn mower, especially on the region filling of path planning, is significant in industrial and agricultural applications.

  14. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
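
    A much-reduced sketch of the seed-list idea described above: score the affective relationship between two entities by counting seed affect terms in the sentences where both are mentioned. The seed lists, the single positive/negative axis, and the sentence-level co-occurrence rule are simplifying assumptions; the 22-category propagation algorithm in TEAMSTER is considerably richer.

      import re

      POSITIVE = {"praise", "support", "trust", "ally"}
      NEGATIVE = {"condemn", "attack", "blame", "oppose"}

      def affect_score(text, entity_a, entity_b):
          """Sum +1/-1 for positive/negative seed terms in sentences mentioning both entities."""
          score = 0
          for sentence in re.split(r"[.!?]", text.lower()):
              if entity_a in sentence and entity_b in sentence:
                  words = set(re.findall(r"\w+", sentence))
                  score += len(words & POSITIVE) - len(words & NEGATIVE)
          return score

      doc = ("The senator chose to praise the governor. "
             "Later, critics said the senator would blame the governor for the deficit.")
      print(affect_score(doc, "senator", "governor"))   # +1 - 1 = 0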

  15. Keyword Extraction from Arabic Legal Texts

    ERIC Educational Resources Information Center

    Rammal, Mahmoud; Bahsoun, Zeinab; Al Achkar Jabbour, Mona

    2015-01-01

    Purpose: The purpose of this paper is to apply local grammar (LG) to develop an indexing system which automatically extracts keywords from titles of Lebanese official journals. Design/methodology/approach: To build LG for our system, the first word that plays the determinant role in understanding the meaning of a title is analyzed and grouped as…

  16. Automatic transmission for motor vehicles

    SciTech Connect

    Miura, M.; Sakakibara, S.

    1989-06-27

    An automatic transmission for a motor vehicle is described, comprising: a transmission housing; a hydraulic torque converter having rotational axes, an input shaft, an output shaft and a direct coupling clutch for directly coupling the input shaft to the output shaft; an auxiliary transmission mechanism provided coaxially with the hydraulic torque converter and having an input shaft, an output shaft with an input end and an output end and an overdrive mechanism of planetary gear type having a reduction ratio smaller than 1, the input shaft and the output shaft of the auxiliary transmission being located close to and on the side of the hydraulic torque converter with respect to the auxiliary transmission, respectively, and being coupled with a planetary gear carrier and a ring gear of the overdrive mechanism, respectively, a one-way clutch being provided between the planetary gear carrier and a sun gear of the overdrive mechanism, a clutch being provided between the planetary gear carrier and a position radially and outwardly of the one-way clutch for engaging the disengaging the planetary carrier and the sun gear, a brake being provided between the transmission housing and the sun gear and positioned radially and outwardly of the clutch for controlling engagement of the sun gear with a stationary portion of the transmission housing, and the output end of the output shaft being disposed between the auxiliary transmission mechanism and the hydraulic torque converter.

  17. Actuator for automatic cruising system

    SciTech Connect

    Suzuki, K.

    1989-03-07

    An actuator for an automatic cruising system is described, comprising: a casing; a control shaft provided in the casing for rotational movement; a control motor for driving the control shaft; an input shaft; an electromagnetic clutch and a reduction gear which are provided between the control motor and the control shaft; and an external linkage mechanism operatively connected to the control shaft; wherein the reduction gear is a type of Ferguson's mechanical paradox gear having a pinion mounted on the input shaft always connected to the control motor; a planetary gear meshing with the pinion so as to revolve around the pinion; a static internal gear meshing with the planetary gear and connected with the electromagnetic clutch for movement to a position restricting rotation of the static internal gear; and a rotary internal gear fixed on the control shaft and meshed with the planetary gear, the rotary internal gear having a number of teeth slightly different from a number of teeth of the static internal gear; and the electromagnetic clutch has a tubular electromagnetic coil coaxially provided around the input shaft and an engaging means for engaging and disengaging with the static internal gear in accordance with on-off operation of the electromagnetic coil.

  18. Ekofisk automatic GPS subsidence measurements

    SciTech Connect

    Mes, M.J.; Landau, H.; Luttenberger, C.

    1996-10-01

    A fully automatic GPS satellite-based procedure for the reliable measurement of subsidence of several platforms in almost real time is described. Measurements are made continuously on platforms in the North Sea Ekofisk Field area. The procedure also yields rate measurements, which are also essential for confirming platform safety, planning of remedial work, and verification of subsidence models. GPS measurements are more attractive than seabed pressure-gauge-based platform subsidence measurements: they are much cheaper to install and maintain and not subject to gauge drift. GPS measurements were coupled to oceanographic quantities such as the platform deck clearance, which leads to less complex offshore survey procedures. Ekofisk is an oil and gas field in the southern portion of the Norwegian North Sea. Late in 1984, it was noticed that the Ekofisk platform decks were closer to the sea surface than when the platforms were installed; subsidence was the only logical explanation. After the subsidence phenomenon was recognized, an accurate measurement method was needed to measure progression of subsidence and the associated subsidence rate. One available system for which no further development was needed was NAVSTAR GPS; measurements started in March 1985.

  19. Automatic segmentation of psoriasis lesions

    NASA Astrophysics Data System (ADS)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the estimation of lesions. Current algorithms can handle only single erythema or deal only with scaling segmentation, while in practice scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied in the imaging, exploiting the skin's Tyndall effect, to eliminate reflection, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to extract textural and color features; in this step, an image roughness feature is defined so that scaling can easily be separated from normal skin. In the final step, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images differ in lighting conditions and skin types. On the data set offered by Union Hospital, more than 90% of images can be segmented accurately.
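
    The classification step above can be sketched as a per-pixel random forest over color and local-texture features. The features below (raw color channels plus a windowed standard deviation standing in for the paper's roughness measure), the window size, and the synthetic sanity check are illustrative assumptions, not the published pipeline.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.ensemble import RandomForestClassifier

      def pixel_features(image, win=7):
          """Per-pixel color values plus a local-roughness feature (windowed std dev)."""
          gray = image.mean(axis=2)
          mean = uniform_filter(gray, win)
          sq_mean = uniform_filter(gray ** 2, win)
          roughness = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
          return np.dstack([image, roughness[..., None]]).reshape(-1, 4)

      def train_segmenter(image, labels):
          """labels: per-pixel integer mask (e.g. 0 = normal skin, 1 = lesion)."""
          clf = RandomForestClassifier(n_estimators=100)
          clf.fit(pixel_features(image), labels.ravel())
          return clf

      def segment(clf, image):
          return clf.predict(pixel_features(image)).reshape(image.shape[:2])

      # Synthetic sanity check: a redder, rougher patch labeled 1 on a smooth background.
      rng = np.random.default_rng(0)
      img = np.full((40, 40, 3), 0.6) + 0.01 * rng.standard_normal((40, 40, 3))
      img[10:30, 10:30, 0] += 0.3
      img[10:30, 10:30] += 0.1 * rng.standard_normal((20, 20, 3))
      mask = np.zeros((40, 40), dtype=int)
      mask[10:30, 10:30] = 1
      clf = train_segmenter(img, mask)
      print((segment(clf, img) == mask).mean())   # close to 1.0 on the training image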

  20. Automatic Weather Station (AWS) Lidar

    NASA Technical Reports Server (NTRS)

    Rall, Jonathan A.R.; Abshire, James B.; Spinhirne, James D.; Smith, David E. (Technical Monitor)

    2000-01-01

    An autonomous, low-power atmospheric lidar instrument is being developed at NASA Goddard Space Flight Center. This compact, portable lidar will operate continuously in a temperature controlled enclosure, charge its own batteries through a combination of a small rugged wind generator and solar panels, and transmit its data from remote locations to ground stations via satellite. A network of these instruments will be established by co-locating them at remote Automatic Weather Station (AWS) sites in Antarctica under the auspices of the National Science Foundation (NSF). The NSF Office of Polar Programs provides support to place the weather stations in remote areas of Antarctica in support of meteorological research and operations. The AWS meteorological data will directly benefit the analysis of the lidar data while a network of ground based atmospheric lidar will provide knowledge regarding the temporal evolution and spatial extent of Type Ia polar stratospheric clouds (PSC). These clouds play a crucial role in the annual austral springtime destruction of stratospheric ozone over Antarctica, i.e. the ozone hole. In addition, the lidar will monitor and record the general atmospheric conditions (transmission and backscatter) of the overlying atmosphere which will benefit the Geoscience Laser Altimeter System (GLAS). Prototype lidar instruments have been deployed to the Amundsen-Scott South Pole Station (1995-96, 2000) and to an Automated Geophysical Observatory site (AGO 1) in January 1999. We report on data acquired with these instruments, instrument performance, and anticipated performance of the AWS Lidar.

  1. Automatic locking orthotic knee device

    NASA Technical Reports Server (NTRS)

    Weddendorf, Bruce C. (inventor)

    1993-01-01

    An articulated tang in clevis joint for incorporation in newly manufactured conventional strap-on orthotic knee devices or for replacing such joints in conventional strap-on orthotic knee devices is discussed. The instant tang in clevis joint allows the user the freedom to extend and bend the knee normally when no load (weight) is applied to the knee and to automatically lock the knee when the user transfers weight to the knee, thus preventing a damaged knee from bending uncontrollably when weight is applied to the knee. The tang in clevis joint of the present invention includes first and second clevis plates, a tang assembly and a spacer plate secured between the clevis plates. Each clevis plate includes a bevelled serrated upper section. A bevelled shoe is secured to the tang in close proximity to the bevelled serrated upper section of the clevis plates. A coiled spring mounted within an oblong bore of the tang normally urges the shoes secured to the tang out of engagement with the serrated upper section of each clevis plate to allow rotation of the tang relative to the clevis plates. When weight is applied to the joint, the load compresses the coiled spring, and the serrations on each clevis plate dig into the bevelled shoes secured to the tang to prevent relative movement between the tang and clevis plates. A shoulder is provided on the tang and the spacer plate to prevent overextension of the joint.

  2. Automatic image cropping for republishing

    NASA Astrophysics Data System (ADS)

    Cheatle, Phil

    2010-02-01

    Image cropping is an important aspect of creating aesthetically pleasing web pages and repurposing content for different web or printed output layouts. Cropping provides both the possibility of improving the composition of the image, and also the ability to change the aspect ratio of the image to suit the layout design needs of different document or web page formats. This paper presents a method for aesthetically cropping images on the basis of their content. Underlying the approach is a novel segmentation-based saliency method which identifies some regions as "distractions", as an alternative to the conventional "foreground" and "background" classifications. Distractions are a particular problem with typical consumer photos found on social networking websites such as FaceBook, Flickr etc. Automatic cropping is achieved by identifying the main subject area of the image and then using an optimization search to expand this to form an aesthetically pleasing crop. Evaluation of aesthetic functions like auto-crop is difficult as there is no single correct solution. A further contribution of this paper is an automated evaluation method which goes some way towards handling the complexity of aesthetic assessment. This allows crop algorithms to be easily evaluated against a large test set.

  3. Text Mining for Protein Docking

    PubMed Central

    Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.

    2015-01-01

    The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to the existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, which still can be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied the text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~ 25% complexes in the dataset. The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466
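
    A compressed sketch of the filtering step described above: abstracts represented as bag-of-words features and classified as relevant or not with a linear SVM. The toy documents, labels, and pipeline parameters are assumptions for illustration; the study's actual feature selection and validation are more elaborate.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy training data: 1 = relevant to the binding interface, 0 = not relevant.
      abstracts = [
          "Mutation of interface residue Arg45 abolished binding to the partner protein.",
          "Alanine scanning identified hot-spot residues at the dimer interface.",
          "The gene is broadly expressed in liver and kidney tissue.",
          "We report the phylogenetic distribution of the family across vertebrates.",
      ]
      labels = [1, 1, 0, 0]

      model = make_pipeline(CountVectorizer(stop_words="english"), LinearSVC())
      model.fit(abstracts, labels)

      new = ["Residues in the binding interface were mapped by mutagenesis."]
      print(model.predict(new))   # expected to classify as relevant: [1]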

  4. Exploratory Dijkstra forest based automatic vessel segmentation

    E-print Network

    Tomasi, Carlo

    Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy. The segmentation grows a forest of vessel paths using Dijkstra's shortest-path algorithm. Our method preserves vessel thickness, requires no manual intervention, and follows vessel branching naturally and efficiently. To test our method, we constructed a retinal video ...

  5. Linear dynamic models for automatic speech recognition 

    E-print Network

    Frankel, Joe

    The majority of automatic speech recognition (ASR) systems rely on hidden Markov models (HMM), in which the output distribution associated with each state is modelled by a mixture of diagonal covariance Gaussians. Dynamic ...

  6. Automatic Evolution of Molecular Nanotechnology Designs

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.

  7. LIMESTONE SCRUBBER SLURRY AUTOMATIC CONTROL SYSTEMS

    EPA Science Inventory

    The report utilizes current understanding of limestone scrubbers for flue gas desulfurization (FGD) to optimize automatic control of the recirculating slurry processes. The acknowledged methods of mathematical modeling, computer simulation, and ...

  8. Automatic intonation analysis using acoustic data. 

    E-print Network

    Dusterhoff, Kurt E

    1999-01-01

    In a research world where many human-hours are spent labelling, segmenting, checking, and rechecking various levels of linguistic information, it is obvious that automatic analysis can lower the costs (in time as well ...

  9. Learning to Automatically Solve Algebra Word Problems

    E-print Network

    Kushman, Nate

    We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering ...
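
    The last step of the approach above, solving the constructed system of linear equations, is routine once the equations are in place. A sketch of that step, with a hand-built system standing in for one derived automatically from text:

      import numpy as np

      # "The sum of two numbers is 30 and their difference is 4."
      #   x + y = 30
      #   x - y = 4
      A = np.array([[1.0, 1.0],
                    [1.0, -1.0]])
      b = np.array([30.0, 4.0])
      x, y = np.linalg.solve(A, b)
      print(x, y)   # 17.0 13.0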

  10. Variable load automatically tests dc power supplies

    NASA Technical Reports Server (NTRS)

    Burke, H. C., Jr.; Sullivan, R. M.

    1965-01-01

    Continuously variable load automatically tests dc power supplies over an extended current range. External meters monitor current and voltage, and multipliers at the outputs facilitate plotting the power curve of the unit.

  11. Automatic Nuchal Translucency Measurement from Ultrasonography

    E-print Network

    Nuchal Translucency (NT) refers to the fluid-filled region under the skin of the posterior neck of a fetus. Increased NT behind the fetal neck ... The NT detection is constrained by an automatically found anchoring structure, the fetal head.

  12. Automatic Recognition of Class Blueprint Patterns (diploma thesis)

    E-print Network

    Lanza, Michele

    We present an approach to recognize class blueprint patterns automatically in a software system. Our approach is based on the theory of graph pattern recognition.

  13. Data Mining Problems in Automatic Computer Diagnosis

    E-print Network

    Murphy, Robert F.

    ... and association rule mining, dynamic information is used to facilitate hard problem diagnosis, but cannot achieve ... Therefore, scientists from both the system domain and the data mining domain have started to explore solving ...

  14. Automatic model construction with Gaussian processes

    E-print Network

    Duvenaud, David

    2014-11-11

    to a prior on functions which depend on both dimensions. ARD stands for automatic relevance determination, so named because estimating the lengthscale parameters ℓ1, ℓ2, ..., ℓD implicitly determines the “relevance” of each dimension. Input...
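
    For reference, the standard ARD squared-exponential kernel that these lengthscales parameterize, written here in generic notation (the thesis may use different symbols):

      k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp\!\left( -\tfrac{1}{2} \sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{\ell_d^2} \right)

    A very large lengthscale ℓ_d makes the kernel nearly constant along dimension d, so that dimension contributes little to the covariance and is effectively judged irrelevant.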

  15. Automatic 5-axis NC toolpath generation

    E-print Network

    Balasubramaniam, Mahadevan, 1976-

    2001-01-01

    Despite over a decade of research, automatic toolpath generation has remained an elusive goal for 5-axis NC machining. This thesis describes the theoretical and practical issues associated with generating collision free ...

  16. Automatic Layout Design for Power Module

    SciTech Connect

    Ning, Puqi; Wang, Fei; Ngo, Khai

    2010-01-01

    The layout of power modules is one of the most important elements in power module design, especially at high power densities, where couplings are increased. This paper presents an automatic layout design process for high-power-density modules based on a genetic algorithm (GA), with practical considerations introduced into the optimization of the layout design. Detailed GA implementations are described for both the outer loop and the inner loop. As verified by a design example, the results of the automatic design process presented here are better than those from manual design and also better than the results from popular design software. This automatic design procedure could be a major step toward improving the overall performance of future layout designs.

  17. Pronunciation learning for automatic speech recognition

    E-print Network

    Badr, Ibrahim

    2011-01-01

    In many ways, the lexicon remains the Achilles heel of modern automatic speech recognizers (ASRs). Unlike stochastic acoustic and language models that learn the values of their parameters from training data, the baseform ...

  18. A Versatile, Automatic Chromatographic Column Packing Device

    ERIC Educational Resources Information Center

    Barry, Eugene F.; And Others

    1977-01-01

    Describes an inexpensive apparatus for packing liquid and gas chromatographic columns of high efficiency. Consists of stainless steel support struts, an Automat Getriebmotor, and an associated three-pulley system capable of 10, 30, and 300 rpm. (MLH)

  19. The Importance of Automaticity for Developing Expertise in Reading.

    ERIC Educational Resources Information Center

    Samuels, S. Jay; Flor, Richard F.

    1997-01-01

    Discusses how students become automatic at reading sub-skills, the indicators that can be used to determine whether a student is automatic, and the psychological mechanisms that allow students to perform complex skills automatically. Discusses implications of automaticity research for teaching reading. (RS)

  20. 46 CFR 63.25-1 - Small automatic auxiliary boilers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false Small automatic auxiliary boilers. 63.25-1 Section 63.25... AUXILIARY BOILERS Requirements for Specific Types of Automatic Auxiliary Boilers § 63.25-1 Small automatic auxiliary boilers. Small automatic auxiliary boilers defined as having heat-input ratings of 400,000...