Science.gov

Sample records for automatic text summarization

  1. Automatic Text Structuring and Summarization.

    ERIC Educational Resources Information Center

    Salton, Gerard; And Others

    1997-01-01

    Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)

  2. Keyphrase based Evaluation of Automatic Text Summarization

    NASA Astrophysics Data System (ADS)

    Elghannam, Fatma; El-Shishtawy, Tarek

    2015-05-01

    The development of methods to deal with the informative content of text units in the matching process is a major challenge for automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new keyphrase-based summary evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases, since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of an Arabic multi-document data set presented at TAC 2011. The results showed that the new evaluation technique correlates well with the known evaluation systems ROUGE-1, ROUGE-2, ROUGE-SU4, and AutoSummENG (MeMoG). KpEval correlates most strongly with AutoSummENG (MeMoG); the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
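
    A minimal sketch of the keyphrase-overlap idea behind such an evaluator, assuming keyphrases have already been extracted from the peer and reference summaries: both sides are reduced to lemma form and matched as sets. The toy lemmatizer below is a placeholder (the paper targets Arabic and would use a proper morphological analyzer).

      def lemmatize(phrase):
          # Placeholder: lowercase and strip a plural "s"; a real system
          # would use a morphological analyzer for the target language.
          return " ".join(w[:-1] if w.endswith("s") else w
                          for w in phrase.lower().split())

      def kp_eval(peer_keyphrases, reference_keyphrases):
          peer = {lemmatize(p) for p in peer_keyphrases}
          ref = {lemmatize(p) for p in reference_keyphrases}
          matched = peer & ref
          precision = len(matched) / len(peer) if peer else 0.0
          recall = len(matched) / len(ref) if ref else 0.0
          f1 = (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)
          return precision, recall, f1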

  3. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    PubMed Central

    Ramanujam, Nedunchelian; Kaliappan, Manivannan

    2016-01-01

    Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they still suffer from limitations such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a timestamp approach combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary a chronological ordering, which yields a more coherent summary, and helps extract the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words based on their frequency. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper compares it with the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method yields better precision, recall, and F-score than the existing clustering with lexical chaining approach.
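
    A hedged sketch of the timestamp idea, with the paper's Naïve Bayesian sentence classifier replaced here by a plain word-frequency score: the top-scoring sentences are selected, then re-ordered by document timestamp and sentence position so the summary reads chronologically.

      from collections import Counter

      def timestamp_summarize(docs, n_sentences=3):
          # docs: list of (timestamp, [sentences]) pairs, one per document.
          freq = Counter(w.lower() for _, sents in docs
                         for s in sents for w in s.split())
          candidates = []
          for ts, sents in docs:
              for pos, s in enumerate(sents):
                  words = s.split()
                  score = sum(freq[w.lower()] for w in words) / max(len(words), 1)
                  candidates.append((score, ts, pos, s))
          top = sorted(candidates, key=lambda c: -c[0])[:n_sentences]
          # Order the selected sentences chronologically (the "timestamp" step).
          return [s for _, _, _, s in sorted(top, key=lambda c: (c[1], c[2]))]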

  4. Summarization as the base for text assessment

    NASA Astrophysics Data System (ADS)

    Karanikolas, Nikitas N.

    2015-02-01

    We present a model that applies shallow text summarization as a cheap (in terms of required resources) process for automatic, machine-based assessment (AA) of free-text answers. The evaluation of the proposed method supports the inference that conventional assessment (CA, human assessment of free-text answers) has no obvious mechanical replacement; finding one, however, remains a research challenge.

  5. Using Text Messaging to Summarize Text

    ERIC Educational Resources Information Center

    Williams, Angela Ruffin

    2012-01-01

    Summarizing is an academic task that students are expected to have mastered by the time they enter college. However, experience has revealed quite the contrary. Summarization is often difficult to master as well as teach, but instructors in higher education can benefit greatly from the rapid advancement in mobile wireless technology devices, by…

  6. Figure-Associated Text Summarization and Evaluation

    PubMed Central

    Polepalli Ramesh, Balaji; Sethi, Ricky J.; Yu, Hong

    2015-01-01

    Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines, and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, which automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+, measuring performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving an F1 score of 0.66 and a ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903). PMID:25643357

  7. A Statistical Approach to Automatic Speech Summarization

    NASA Astrophysics Data System (ADS)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
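
    A simplified sketch of the dynamic-programming extraction step: choose exactly m words from the transcript, preserving order, to maximize the sum of per-word significance scores and word-concatenation log-probabilities. The bigram_logp function is a stand-in for the paper's SDCFG-based concatenation model, and the scores are assumed inputs.

      def dp_summarize(words, sig, bigram_logp, m):
          # words: transcript tokens; sig[i]: significance score of words[i];
          # bigram_logp(a, b): log-probability of concatenating a and b
          # (a stand-in for the paper's SDCFG-based model).
          n = len(words)
          NEG = float("-inf")
          best = [[NEG] * n for _ in range(m + 1)]
          back = [[None] * n for _ in range(m + 1)]
          for i in range(n):
              best[1][i] = sig[i]
          for k in range(2, m + 1):
              for i in range(n):
                  for j in range(i):
                      if best[k - 1][j] == NEG:
                          continue
                      cand = (best[k - 1][j] + sig[i]
                              + bigram_logp(words[j], words[i]))
                      if cand > best[k][i]:
                          best[k][i], back[k][i] = cand, j
          # Trace back from the best m-word summary.
          i = max(range(n), key=lambda t: best[m][t])
          summary, k = [], m
          while i is not None:
              summary.append(words[i])
              i, k = back[k][i], k - 1
          return list(reversed(summary))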

  8. Task-Driven Dynamic Text Summarization

    ERIC Educational Resources Information Center

    Workman, Terri Elizabeth

    2011-01-01

    The objective of this work is to examine the efficacy of natural language processing (NLP) in summarizing bibliographic text for multiple purposes. Researchers have noted the accelerating growth of bibliographic databases. Information seekers using traditional information retrieval techniques when searching large bibliographic databases are often…

  9. Automatic Soccer Video Analysis and Summarization

    NASA Astrophysics Data System (ADS)

    Ekin, Ahmet; Tekalp, A. Murat

    2003-01-01

    We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level soccer video processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game, ii) all goals in a game, and iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only, for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust for soccer video processing. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video captured in different countries and under different conditions.
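
    A small sketch of the dominant-color idea for shot classification, assuming the dominant field color can be approximated by a fixed green range in HSV space; the paper learns the dominant color statistics from the video itself, so the range and thresholds below are illustrative only.

      import cv2
      import numpy as np

      def grass_ratio(frame_bgr, lo=(35, 40, 40), hi=(85, 255, 255)):
          # Fraction of pixels in a green HSV range (illustrative bounds).
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
          return float(mask.mean()) / 255.0

      def classify_shot(frame_bgr):
          # Hypothetical thresholds: long shots show mostly field,
          # close-ups show little of it.
          r = grass_ratio(frame_bgr)
          if r > 0.5:
              return "long shot"
          if r > 0.1:
              return "medium shot"
          return "close-up or out-of-field"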

  10. Information Extraction and Text Summarization Using Linguistic Knowledge Acquisition.

    ERIC Educational Resources Information Center

    Rau, Lisa F.; And Others

    1989-01-01

    Describes SCISOR (System for Conceptual Information Summarization, Organization and Retrieval), a prototype intelligent information retrieval system that extracts useful information from large bodies of text. It overcomes limitations of linguistic coverage by applying a text processing strategy that is tolerant of unknown words and gaps in…

  11. Automatic Summarization of Mouse Gene Information by Clustering and Sentence Extraction from MEDLINE Abstracts

    PubMed Central

    Yang, Jianji; Cohen, Aaron M.; Hersh, William

    2007-01-01

    Tools that automatically summarize gene information from the literature have the potential to help genomics researchers better interpret gene expression data and investigate biological pathways. The task of finding information on sets of genes is common for genomics researchers, and PubMed is still the first choice because the most recent and original information can only be found in the unstructured, free-text biomedical literature. However, finding information on a set of genes by manually searching and scanning the literature is a time-consuming and daunting task for scientists. We built and evaluated a query-based automatic summarizer of information on mouse genes studied in microarray experiments. The system clusters a set of genes by MeSH, GO, and free-text features and presents summaries for each gene as ranked sentences extracted from MEDLINE abstracts. Evaluation showed that the system provides meaningful clusters and that informative sentences are ranked higher by the algorithm. PMID:18693953

  12. An Automatic Multimedia Content Summarization System for Video Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh

    2009-01-01

    In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…

  13. Automatic Summarization of MEDLINE Citations for Evidence-Based Medical Treatment: A Topic-Oriented Evaluation

    PubMed Central

    Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.

    2009-01-01

    As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
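
    For reference, mean average precision over a set of topics (here, one ranked list of interventions per disease) can be computed as below; the ranked lists and relevance sets are assumed inputs, not the paper's data.

      def average_precision(ranked, relevant):
          # ranked: interventions in system order; relevant: set judged useful.
          hits, total = 0, 0.0
          for i, item in enumerate(ranked, start=1):
              if item in relevant:
                  hits += 1
                  total += hits / i  # precision at this recall point
          return total / len(relevant) if relevant else 0.0

      def mean_average_precision(runs):
          # runs: list of (ranked_list, relevant_set) pairs, one per disease.
          return sum(average_precision(r, rel) for r, rel in runs) / len(runs)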

  14. Studying the correlation between different word sense disambiguation methods and summarization effectiveness in biomedical texts

    PubMed Central

    2011-01-01

    Background Word sense disambiguation (WSD) attempts to solve lexical ambiguities by identifying the correct meaning of a word based on its context. WSD has been demonstrated to be an important step in knowledge-based approaches to automatic summarization. However, the correlation between the accuracy of the WSD methods and the summarization performance has never been studied. Results We present three existing knowledge-based WSD approaches and a graph-based summarizer. Both the WSD approaches and the summarizer employ the Unified Medical Language System (UMLS) Metathesaurus as the knowledge source. We first evaluate WSD directly, by comparing the prediction of the WSD methods to two reference sets: the NLM WSD dataset and the MSH WSD collection. We next apply the different WSD methods as part of the summarizer, to map documents onto concepts in the UMLS Metathesaurus, and evaluate the summaries that are generated. The results obtained by the different methods in both evaluations are studied and compared. Conclusions It has been found that the use of WSD techniques has a positive impact on the results of our graph-based summarizer, and that, when both the WSD and summarization tasks are assessed over large and homogeneous evaluation collections, there exists a correlation between the overall results of the WSD and summarization tasks. Furthermore, the best WSD algorithm in the first task tends to be also the best one in the second. However, we also found that the improvement achieved by the summarizer is not directly correlated with the WSD performance. The most likely reason is that the errors in disambiguation are not equally important but depend on the relative salience of the different concepts in the document to be summarized. PMID:21871110

  15. Text Summarization in the Biomedical Domain: A Systematic Review of Recent Research

    PubMed Central

    Mishra, Rashmi; Bian, Jiantao; Fiszman, Marcelo; Weir, Charlene R.; Jonnalagadda, Siddhartha; Mostafa, Javed; Del Fiol, Guilherme

    2014-01-01

    Objective The amount of information available to clinicians and clinical researchers is growing exponentially. Text summarization condenses this information so that users can find and understand relevant source texts more quickly and with less effort. In recent years, substantial research has been conducted to develop and evaluate various summarization techniques in the biomedical domain. The goal of this study was to systematically review recently published research on summarization of textual documents in the biomedical domain. Materials and methods MEDLINE (2000 to October 2013), the IEEE Digital Library, and the ACM Digital Library were searched. Investigators independently screened and abstracted studies that examined text summarization techniques in the biomedical domain. Information was extracted from the selected articles along five dimensions: input, purpose, output, method, and evaluation. Results Of 10,786 studies retrieved, 34 (0.3%) met the inclusion criteria. Natural language processing (17; 50%) and hybrid techniques comprising statistical, natural language processing, and machine learning methods (15; 44%) were the most common summarization approaches. Most studies (28; 82%) conducted an intrinsic evaluation. Discussion This is the first systematic review of text summarization in the biomedical domain. The study identified research gaps and provides recommendations for guiding future research on biomedical text summarization. Conclusion Recent research has focused on hybrid techniques comprising statistical, language processing, and machine learning techniques. Further research is needed on the application and evaluation of text summarization in real research or patient care settings. PMID:25016293

  16. A Study of Cognitive Mapping as a Means to Improve Summarization and Comprehension of Expository Text.

    ERIC Educational Resources Information Center

    Ruddell, Robert B.; Boyle, Owen F.

    1989-01-01

    Investigates the effects of cognitive mapping on written summarization and comprehension of expository text. Concludes that mapping appears to assist students in: (1) developing procedural knowledge resulting in more effective written summarization and (2) identifying and using supporting details in their essays. (MG)

  17. Science Text Comprehension: Drawing, Main Idea Selection, and Summarizing as Learning Strategies

    ERIC Educational Resources Information Center

    Leopold, Claudia; Leutner, Detlev

    2012-01-01

    The purpose of two experiments was to contrast instructions to generate drawings with two text-focused strategies--main idea selection (Exp. 1) and summarization (Exp. 2)--and to examine whether these strategies could help students learn from a chemistry science text. Both experiments followed a 2 x 2 design, with drawing strategy instructions…

  18. A Comparison of Two Strategies for Teaching Third Graders to Summarize Information Text

    ERIC Educational Resources Information Center

    Dromsky, Ann Marie

    2011-01-01

    Summarizing text is one of the most effective comprehension strategies (National Institute of Child Health and Human Development, 2000) and an effective way to learn from information text (Dole, Duffy, Roehler, & Pearson, 1991; Pressley & Woloshyn, 1995). In addition, much research supports the explicit instruction of such strategies as…

  19. DiffNet: automatic differential functional summarization of dE-MAP networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2014-10-01

    The study of genetic interaction networks that respond to changing conditions is an emerging research problem. Recently, Bandyopadhyay et al. (2010) proposed a technique to construct a differential network (dE-MAP network) from two static gene interaction networks in order to map the interaction differences between them under an environment or condition change (e.g., a DNA-damaging agent). This differential network is then manually analyzed to conclude that DNA repair is differentially affected by the condition change. Unfortunately, manual construction of a differential functional summary from a dE-MAP network that summarizes all pertinent functional responses is time-consuming, laborious and error-prone, impeding large-scale analysis. To this end, we propose DiffNet, a novel data-driven algorithm that leverages Gene Ontology (GO) annotations to automatically summarize a dE-MAP network and obtain a high-level map of functional responses due to the condition change. We tested DiffNet on the dynamic interaction networks following MMS treatment and demonstrated the superiority of our approach in generating differential functional summaries compared to state-of-the-art graph clustering methods. We studied the effects of the parameters in DiffNet in controlling the quality of the summary. We also performed a case study that illustrates its utility. PMID:25009128

  20. MeSH: a window into full text for document summarization

    PubMed Central

    Bhattacharya, Sanmitra; Ha-Thuc, Viet; Srinivasan, Padmini

    2011-01-01

    Motivation: Previous research in the biomedical text-mining domain has historically been limited to titles, abstracts and metadata available in MEDLINE records. Recent research initiatives such as TREC Genomics and BioCreAtIvE strongly point to the merits of moving beyond abstracts and into the realm of full texts. Full texts are, however, more expensive to process not only in terms of resources needed but also in terms of accuracy. Since full texts contain embellishments that elaborate, contextualize, contrast, supplement, etc., there is greater risk for false positives. Motivated by this, we explore an approach that offers a compromise between the extremes of abstracts and full texts. Specifically, we create reduced versions of full text documents that contain only important portions. In the long-term, our goal is to explore the use of such summaries for functions such as document retrieval and information extraction. Here, we focus on designing summarization strategies. In particular, we explore the use of MeSH terms, manually assigned to documents by trained annotators, as clues to select important text segments from the full text documents. Results: Our experiments confirm the ability of our approach to pick the important text portions. Using the ROUGE measures for evaluation, we were able to achieve maximum ROUGE-1, ROUGE-2 and ROUGE-SU4 F-scores of 0.4150, 0.1435 and 0.1782, respectively, for our MeSH term-based method versus the maximum baseline scores of 0.3815, 0.1353 and 0.1428, respectively. Using a MeSH profile-based strategy, we were able to achieve maximum ROUGE F-scores of 0.4320, 0.1497 and 0.1887, respectively. Human evaluation of the baselines and our proposed strategies further corroborates the ability of our method to select important sentences from the full texts. Contact: sanmitra-bhattacharya@uiowa.edu; padmini-srinivasan@uiowa.edu PMID:21685060
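
    The ROUGE-N scores reported above are clipped n-gram overlaps between a candidate summary and a reference; a minimal sketch of the computation (our own, not the paper's evaluation code):

      from collections import Counter

      def ngram_counts(tokens, n):
          return Counter(tuple(tokens[i:i + n])
                         for i in range(len(tokens) - n + 1))

      def rouge_n(candidate_tokens, reference_tokens, n=1):
          # Returns (recall, precision, F1); ROUGE is often reported as
          # recall or F against one or more reference summaries.
          c = ngram_counts(candidate_tokens, n)
          r = ngram_counts(reference_tokens, n)
          overlap = sum(min(c[g], r[g]) for g in c)
          recall = overlap / max(sum(r.values()), 1)
          precision = overlap / max(sum(c.values()), 1)
          f1 = (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)
          return recall, precision, f1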

  21. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  22. Stemming Malay Text and Its Application in Automatic Text Categorization

    NASA Astrophysics Data System (ADS)

    Yasukawa, Michiko; Lim, Hui Tian; Yokoo, Hidetoshi

    The Malay language has no conjugations or declensions, and affixes carry important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb, or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversation, it is essential to use precise words in formal speech or written texts. In Malay, derivative words are used to make sentences clear, and derivation is achieved mainly by the use of affixes. There are approximately a hundred possible derivative forms of a root word in the written language of educated Malay speakers, so the composition of Malay words can be complicated. Although several types of stemming algorithms are available for text processing in English and some other languages, they cannot overcome the difficulties of Malay word stemming. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems, and it is essential to avoid both over-stemming and under-stemming errors. We have developed a new Malay stemmer (stemming algorithm) for removing inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The set of rules is aimed at reducing under-stemming errors, while the dictionaries are intended to reduce over-stemming errors. We performed an experiment to evaluate the application of our stemmer in text mining software, using actual web pages collected from the World Wide Web as the text data. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
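
    A toy illustration of dictionary-validated affix stripping in this spirit; the affix lists below are common Malay affixes given for illustration, not the paper's rule set.

      PREFIXES = ["memper", "me", "ber", "ter", "di", "ke", "pe", "se"]
      SUFFIXES = ["kan", "an", "i", "lah", "nya"]

      def stem(word, root_dict, derivative_dict):
          # Accept a stripped candidate only if a dictionary confirms it,
          # limiting over-stemming; fall back to the word itself rather
          # than guessing, limiting the damage of under-stemming.
          if word in root_dict:
              return word
          if word in derivative_dict:      # known derivative -> stored root
              return derivative_dict[word]
          for p in [""] + PREFIXES:
              for s in [""] + SUFFIXES:
                  if word.startswith(p) and word.endswith(s):
                      core = word[len(p):len(word) - len(s) or None]
                      if core in root_dict:
                          return core
          return word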

  23. Automatically generating extraction patterns from untagged text

    SciTech Connect

    Riloff, E.

    1996-12-31

    Many corpus-based natural language processing systems rely on text corpora that have been manually annotated with syntactic or semantic tags. In particular, all previous dictionary construction systems for information extraction have used an annotated training corpus or some form of annotated input. We have developed a system called AutoSlog-TS that creates dictionaries of extraction patterns using only untagged text. AutoSlog-TS is based on the AutoSlog system, which generated extraction patterns using annotated text and a set of heuristic rules. By adapting AutoSlog and combining it with statistical techniques, we eliminated its dependency on tagged text. In experiments with the MUC-4 terrorism domain, AutoSlog-TS created a dictionary of extraction patterns that performed comparably to a dictionary created by AutoSlog, using only preclassified texts as input.

  24. Effects of Presentation Mode and Computer Familiarity on Summarization of Extended Texts

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2010-01-01

    Comparability studies on computer- and paper-based reading tests have focused on short texts and selected-response items via almost exclusively statistical modeling of test performance. The psychological effects of presentation mode and computer familiarity on individual students are under-researched. In this study, 157 students read extended…

  25. Information fusion for automatic text classification

    SciTech Connect

    Dasigi, V.; Mann, R.C.; Protopopescu, V.A.

    1996-08-01

    Analysis and classification of free text documents encompass decision-making processes that rely on several clues derived from text and other contextual information. When using multiple clues, it is generally not known a priori how these should be integrated into a decision. An algorithmic sensor based on Latent Semantic Indexing (LSI) (a recent successful method for text retrieval rather than classification) is the primary sensor used in our work, but its utility is limited by the reference library of documents. Thus, there is an important need to complement or at least supplement this sensor. We have developed a system that uses a neural network to integrate the LSI-based sensor with other clues derived from the text. This approach allows for systematic fusion of several information sources in order to determine a combined best decision about the category to which a document belongs.
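
    A minimal sketch of an LSI-style "sensor" using a modern library stand-in (scikit-learn's truncated SVD over a tf-idf term-document matrix); the resulting latent features could then be fused with other clues in a neural network, as the report describes. The toy documents are invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD

      docs = ["fuel rods were inspected at the reactor site",
              "the reactor coolant pump failed during testing",
              "quarterly budget figures were approved by the board"]

      # Term-document matrix -> low-rank latent space (the LSI "sensor").
      X = TfidfVectorizer().fit_transform(docs)
      lsi = TruncatedSVD(n_components=2, random_state=0)
      Z = lsi.fit_transform(X)   # one latent feature vector per document
      print(Z.round(3))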

  26. Text Structuration Leading to an Automatic Summary System: RAFI.

    ERIC Educational Resources Information Center

    Lehman, Abderrafih

    1999-01-01

    Describes the design and construction of Résumé Automatique à Fragments Indicateurs (RAFI), a system for automatic text summarization that sums up scientific and technical texts. The RAFI system transforms a long source text into several versions of more condensed texts, using discourse analysis, to make searching easier; it could be adapted to the…

  27. Profiling School Shooters: Automatic Text-Based Analysis

    PubMed Central

    Neuman, Yair; Assaf, Dan; Cohen, Yochai; Knoll, James L.

    2015-01-01

    School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by 6 school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters’ texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology. PMID:26089804

  28. Usability evaluation of an experimental text summarization system and three search engines: implications for the reengineering of health care interfaces.

    PubMed Central

    Kushniruk, Andre W.; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L.

    2002-01-01

    This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser and three conventional search engines - Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio- and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems. PMID:12463858

  29. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

    Digital cameras are gradually replacing traditional flat-bed scanners as the main means of obtaining text information, owing to their usability, low cost, and high resolution, and a large amount of research has been done on camera-based text understanding. Unfortunately, an arbitrary position of the camera lens relative to the text area can frequently cause perspective distortion, which most current OCR systems cannot manage, creating demand for automatic text rectification. Current rectification research has mainly focused on document images; distortion of natural-scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as from their surroundings. In the first step, linear segments are extracted from the region of interest, and a J-Linkage based clustering is performed, followed by customized refinement, to estimate the primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage inspects the internal structure of characters, which involves analysis of pixels and connected components of text lines. Finally, the VPs are verified and used to perform perspective rectification. Experiments demonstrate an increased recognition rate and improvements over related algorithms.

  30. Text replacement on cylindrical surfaces: a semi-automatic approach

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image-based customization that incorporates personalized text strings into photorealistic images in a natural and appealing way has been of great interest lately. We describe a semi-automatic approach for replacing text on cylindrical surfaces in images of natural scenes or objects. The user is requested to select a boundary for the existing text and align a pair of edges for the sides of the cylinder. The algorithm erases the existing text, and instantiates a 3-D cylinder forward projection model to render the new text. The parameters of the forward projection model are estimated by optimizing a carefully designed cost function. Experimental results show that the text-replaced images look natural and appealing.

  31. Automatic text extraction in news images using morphology

    NASA Astrophysics Data System (ADS)

    Jang, InYoung; Ko, ByoungChul; Byun, HyeRan; Choi, Yeongwoo

    2002-01-01

    In this paper we present a new method to extract both superimposed and embedded graphical text in a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, we convert a color image into a gray-level image and apply contrast stretching to enhance the contrast of the input image; a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose + CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose + CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two resulting images to further eliminate false-positive components. In the third, filtering step, the characteristics of each component, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels and the ratio of the minor to the major axis of each bounding box, are used. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. Our method also performs well on various kinds of images when the size of the structuring element is adjusted.
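
    A small OpenCV sketch of the (OpenClose + CloseOpen)/2 operation the abstract refers to, assuming a gray-level input image; the kernel size is illustrative.

      import cv2
      import numpy as np

      def open_close_avg(gray, ksize=3):
          # Average of close(open(I)) and open(close(I)).
          se = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
          oc = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_OPEN, se),
                                cv2.MORPH_CLOSE, se)
          co = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se),
                                cv2.MORPH_OPEN, se)
          # Average in a wider dtype to avoid uint8 overflow.
          return ((oc.astype(np.uint16) + co.astype(np.uint16)) // 2).astype(np.uint8)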

  32. Toward a multi-sensor-based approach to automatic text classification

    SciTech Connect

    Dasigi, V.R.; Mann, R.C.

    1995-10-01

    Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing (LSI) is a method, recently proposed in the Information Retrieval (IR) literature, for approximating a large and sparse term-document matrix with a relatively small number of factors, and is based on a solid mathematical foundation. LSI appears to be quite useful for text information retrieval, rather than text classification. In this report, we outline a method that attempts to combine the strength of the LSI method with that of neural networks in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. The various programs that can be used in testing the system with the TIPSTER data set are described. Preliminary results are summarized, but much work remains to be done.

  33. Supervised and traditional term weighting methods for automatic text categorization.

    PubMed

    Lan, Man; Tan, Chew Lim; Su, Jian; Lu, Yue

    2009-04-01

    In the vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document can be recognized and classified by a computer or a classifier. Different terms (i.e., words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. Term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new, simple supervised term weighting method, tf.rf, to improve the terms' discriminating power for the text categorization task. In the controlled experiments, the supervised term weighting methods show mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, performs consistently better than other term weighting methods, while the other supervised term weighting methods, based on information theory or statistical metrics, perform the worst in all experiments. On the other hand, the popular tf.idf method does not show uniformly good performance across different data sets. PMID:19229086
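
    As we recall its definition, the tf.rf weight multiplies term frequency by a relevance frequency of the form log2(2 + a / max(1, c)), where a and c are the numbers of positive- and negative-category training documents containing the term; consult the paper for the exact formulation. A sketch under that assumption:

      import math
      from collections import Counter

      def rf(term, pos_docs, neg_docs):
          # pos_docs/neg_docs: iterables of token sets, one per document.
          a = sum(term in d for d in pos_docs)
          c = sum(term in d for d in neg_docs)
          return math.log2(2 + a / max(1, c))

      def tf_rf_vector(doc_tokens, vocab, pos_docs, neg_docs):
          tf = Counter(doc_tokens)
          return [tf[t] * rf(t, pos_docs, neg_docs) for t in vocab]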

  34. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  35. Automatic theory generation from analyst text files using coherence networks

    NASA Astrophysics Data System (ADS)

    Shaffer, Steven C.

    2014-05-01

    This paper describes a three-phase process of extracting knowledge from analyst textual reports. Phase 1 involves performing natural language processing on the source text to extract subject-predicate-object triples. In phase 2, these triples are then fed into a coherence network analysis process, using a genetic algorithm optimization. Finally, the highest-value subnetworks are processed into a semantic network graph for display. Initial work on a well-known data set (a Wikipedia article on Abraham Lincoln) has shown excellent results without any specific tuning. Next, we ran the process on the SYNthetic Counter-INsurgency (SYNCOIN) data set, developed at Penn State, yielding interesting and potentially useful results.

  36. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
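
    A hedged sketch of the corpus-construction step: pair the English and non-English titles of the same MEDLINE record and write them to the aligned, one-sentence-per-line files that SMT toolkits such as Moses expect. The record layout below (pmid/title_en/title_fr) is invented for illustration; real MEDLINE XML stores vernacular titles in their own fields.

      records = [
          {"pmid": "1", "title_en": "Treatment of chronic pain.",
           "title_fr": "Traitement de la douleur chronique."},
      ]

      with open("corpus.en", "w") as en, open("corpus.fr", "w") as fr:
          for rec in records:
              if rec.get("title_en") and rec.get("title_fr"):
                  en.write(rec["title_en"].strip() + "\n")
                  fr.write(rec["title_fr"].strip() + "\n")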

  37. The Effects of Teaching a Text-Structure Based Reading Comprehension Strategy on Struggling Fifth Grade Students' Ability to Summarize and Analyze Written Arguments

    ERIC Educational Resources Information Center

    Haria, Priti; MacArthur, Charles; Santoro, Lana Edwards

    2010-01-01

    The purpose of this research was to examine the effectiveness of teaching fifth grade students with reading difficulties a genre-specific strategy for summarizing and critically analyzing written arguments. In addition, this research explored whether learning this particular reading strategy informed the students' ability to write effective and…

  38. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    ERIC Educational Resources Information Center

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  39. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    PubMed

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words, responsible for conveying the meaning of a text, have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions, and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text and that in a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection: words with a degree of fractality higher than a threshold value are taken to be the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing it with two other well-known methods of automatic keyword extraction. PMID:26091207
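
    A rough sketch of the idea, assuming a box-counting estimate of the fractal dimension of a word's positions and a shuffle baseline; the scales, estimator, and index definition are simplified relative to the paper.

      import math
      import random

      def box_dimension(positions, text_len):
          # Count occupied boxes at halving scales and fit
          # log(box count) against log(1 / box size).
          if len(positions) < 2 or text_len < 4:
              return 1.0
          xs, ys = [], []
          size = text_len // 2
          while size >= 2:
              boxes = {p // size for p in positions}
              xs.append(math.log(text_len / size))
              ys.append(math.log(len(boxes)))
              size //= 2
          mx = sum(xs) / len(xs)
          my = sum(ys) / len(ys)
          denom = sum((x - mx) ** 2 for x in xs)
          return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
                  if denom else 1.0)

      def degree_of_fractality(word, tokens, trials=10):
          # Gap between the word's dimension in the original text and its
          # average dimension in shuffled texts; larger gaps suggest keywords.
          pos = [i for i, t in enumerate(tokens) if t == word]
          d_orig = box_dimension(pos, len(tokens))
          d_shuf = 0.0
          for _ in range(trials):
              shuffled = tokens[:]
              random.shuffle(shuffled)
              d_shuf += box_dimension(
                  [i for i, t in enumerate(shuffled) if t == word], len(tokens))
          return abs(d_orig - d_shuf / trials)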

  40. Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses

    SciTech Connect

    Sukkarieh, Jana Z.

    2011-03-14

    Criticisms of multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community, one of which is that they are expensive to score by humans. At the same time, there has been widespread movement towards computer-based assessment, and hence assessment organizations are competing to develop automatic content scoring engines for such item types, which we view as a textual entailment task. This paper describes how MaxEnt modeling is used to help solve the task. MaxEnt has been used in many natural language tasks, but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
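
    A MaxEnt classifier is equivalent to multinomial logistic regression, so a minimal stand-in for content scoring can be sketched with scikit-learn; the toy responses, scores, and tf-idf features below are invented and far simpler than the entailment features such a system would actually use.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      responses = ["evaporation turns the water into vapor",
                   "the water gets hot",
                   "heat causes the liquid to evaporate",
                   "i like water"]
      scores = [1, 0, 1, 0]   # hypothetical human scores (credit / no credit)

      # Multinomial logistic regression is the standard MaxEnt classifier.
      model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      model.fit(responses, scores)
      print(model.predict(["the liquid evaporates when heated"]))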

  41. Interpretable Probabilistic Latent Variable Models for Automatic Annotation of Clinical Text

    PubMed Central

    Kotov, Alexander; Hasan, Mehedi; Carcone, April; Dong, Ming; Naar-King, Sylvie; Brogan Hartlieb, Kathryn

    2015-01-01

    We propose Latent Class Allocation (LCA) and Discriminative Labeled Latent Dirichlet Allocation (DL-LDA), two novel interpretable probabilistic latent variable models for automatic annotation of clinical text. Both models separate the terms that are highly characteristic of textual fragments annotated with a given set of labels from other non-discriminative terms, but rely on generative processes with different structure of latent variables. LCA directly learns class-specific multinomials, while DL-LDA breaks them down into topics (clusters of semantically related words). Extensive experimental evaluation indicates that the proposed models outperform Naïve Bayes, a standard probabilistic classifier, and Labeled LDA, a state-of-the-art topic model for labeled corpora, on the task of automatic annotation of transcripts of motivational interviews, while the output of the proposed models can be easily interpreted by clinical practitioners. PMID:26958214

  42. Exploring the Effects of Multimedia Learning on Pre-Service Teachers' Perceived and Actual Learning Performance: The Use of Embedded Summarized Texts in Educational Media

    ERIC Educational Resources Information Center

    Wu, Leon Yufeng; Yamanaka, Akio

    2013-01-01

    In light of the increased usage of instructional media for teaching and learning, the design of these media as aids to convey the content for learning can be crucial for effective learning outcomes. In this vein, the literature has given attention to how concurrent on-screen text can be designed using these media to enhance learning performance.

  43. Extractive summarization using complex networks and syntactic dependency

    NASA Astrophysics Data System (ADS)

    Amancio, Diego R.; Nunes, Maria G. V.; Oliveira, Osvaldo N.; Costa, Luciano da F.

    2012-02-01

    The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, so far only a few metrics of complex networks have been used and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method the Rouge score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable to produce good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the Rouge score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and treating text in general.
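
    A toy illustration of the general approach (our own construction, not the paper's exact network model, which works on word adjacency networks in Brazilian Portuguese): build a sentence graph with edges for word overlap and rank sentences by a centrality metric such as betweenness, using the networkx library.

      import itertools
      import networkx as nx

      def summarize(sentences, k=2):
          tokens = [set(s.lower().split()) for s in sentences]
          g = nx.Graph()
          g.add_nodes_from(range(len(sentences)))
          for i, j in itertools.combinations(range(len(sentences)), 2):
              if tokens[i] & tokens[j]:      # shared words -> edge
                  g.add_edge(i, j)
          rank = nx.betweenness_centrality(g)
          chosen = sorted(sorted(rank, key=rank.get, reverse=True)[:k])
          return [sentences[i] for i in chosen]   # keep original order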

  44. Automatic Entity Recognition and Typing from Massive Text Corpora: A Phrase and Network Mining Approach

    PubMed Central

    Ren, Xiang; El-Kishky, Ahmed; Wang, Chi; Han, Jiawei

    2015-01-01

    In today's computerized and information-based society, we are inundated with vast amounts of text data, ranging from news articles, scientific publications, and product reviews to a wide range of textual information from social media. To unlock the value of these unstructured text data from various domains, it is of great importance to gain an understanding of entities and their relationships. In this tutorial, we introduce data-driven methods to recognize typed entities of interest in massive, domain-specific text corpora. These methods can automatically identify token spans as entity mentions in documents and label their types (e.g., people, product, food) in a scalable way. We demonstrate on real datasets, including news articles and tweets, how these typed entities aid in knowledge discovery and management. PMID:26705508

  45. Portable Automatic Text Classification for Adverse Drug Reaction Detection via Multi-corpus Training

    PubMed Central

    Gonzalez, Graciela

    2014-01-01

    Objective Automatic detection of Adverse Drug Reaction (ADR) mentions from text has recently received significant interest in pharmacovigilance research. Current research focuses on various sources of text-based information, including social media, where enormous amounts of user-posted data are available and have the potential for use in pharmacovigilance if collected and filtered accurately. The aims of this study are: (i) to explore natural language processing approaches for generating useful features from text and utilizing them in optimized machine learning algorithms for automatic classification of ADR-assertive text segments; (ii) to present two data sets that we prepared for the task of ADR detection from user-posted internet data; and (iii) to investigate whether combining training data from distinct corpora can improve automatic classification accuracy. Methods One of our three data sets contains annotated sentences from clinical reports; the two other data sets, built in-house, consist of annotated posts from social media. Our text classification approach relies on generating a large set of features, representing semantic properties (e.g., sentiment, polarity, and topic), from short text nuggets. Importantly, using our expanded feature sets, we combine training data from different corpora in attempts to boost classification accuracy. Results Our feature-rich classification approach performs significantly better than previously published approaches, with ADR class F-scores of 0.812 (previously reported best: 0.770), 0.538, and 0.678 for the three data sets. Combining training data from multiple compatible corpora further improves the ADR F-scores for the in-house data sets to 0.597 (an improvement of 5.9 units) and 0.704 (an improvement of 2.6 units), respectively. Conclusions Our research results indicate that using advanced NLP techniques for generating information-rich features from text can significantly improve classification accuracy over existing benchmarks. Our experiments illustrate the benefits of incorporating various semantic features such as topics, concepts, sentiments, and polarities. Finally, we show that integration of information from compatible corpora can significantly improve classification performance. This form of multi-corpus training may be particularly useful in cases where data sets are heavily imbalanced (e.g., social media data), and may reduce the time and costs associated with the annotation of data in the future. PMID:25451103

  46. Automatically Detecting Failures in Natural Language Processing Tools for Online Community Text

    PubMed Central

    Hartzler, Andrea L; Huh, Jina; McDonald, David W; Pratt, Wanda

    2015-01-01

    Background The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical natural language processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. Objective The primary objective of this study is to explore an alternative approach: using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Methods Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap’s commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. Results From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed term failures, and (3) word ambiguity failures. Within these three failure types, we discovered 12 causes of inaccurate mappings of concepts. We used automated methods to detect almost half of MetaMap’s 383,572 mappings as problematic. Word sense ambiguity failure was the most widely occurring, comprising 82.22% of failures. Boundary failure was the second most frequent, amounting to 15.90% of failures, while missed term failures were the least common, making up 1.88% of failures. The automated failure detection achieved precision, recall, accuracy, and F1 score of 83.00%, 92.57%, 88.17%, and 87.52%, respectively. Conclusions We illustrate the challenges of processing patient-generated online health community text and characterize failures of NLP tools on this patient-generated health text, demonstrating the feasibility of our low-cost approach to automatically detect those failures. Our approach shows the potential for scalable and effective solutions to automatically assess the constantly evolving NLP tools and source vocabularies to process patient-generated text. PMID:26323337

  11. Automatic identification of ROI in figure images toward improving hybrid (text and image) biomedical document retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Rahman, Md Mahmudur; Govindaraju, Venu; Thoma, George R.

    2011-01-01

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for determining relevance to a clinical situation is quite challenging. One approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward this goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROI) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with biomedical concepts identified in figure captions for improved hybrid (text and image) retrieval of biomedical articles. In this work we present methods that make our previous Markov random field (MRF)-based approach for pointer recognition and ROI extraction more robust. These include the use of Active Shape Models (ASM) to overcome problems in recognizing distorted pointer shapes and a region segmentation method for ROI extraction. We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through the use of extracted ROIs. Evaluation on three test sets shows 87% accuracy on the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.

  12. Automatic Evaluation of Voice Quality Using Text-Based Laryngograph Measurements and Prosodic Analysis

    PubMed Central

    Haderlein, Tino; Schwemmle, Cornelia; Döllinger, Michael; Matoušek, Václav; Ptok, Martin; Nöth, Elmar

    2015-01-01

    Due to low intra- and interrater reliability, perceptual voice evaluation should be supported by objective, automatic methods. In this study, text-based, computer-aided prosodic analysis and measurements of connected speech were combined in order to model perceptual evaluation of the German Roughness-Breathiness-Hoarseness (RBH) scheme. 58 connected speech samples (43 women and 15 men; 48.7 ± 17.8 years) containing the German version of the text “The North Wind and the Sun” were evaluated perceptually by 19 speech and voice therapy students according to the RBH scale. For the human-machine correlation, Support Vector Regression with measurements of the vocal fold cycle irregularities (CFx) and the closed phases of vocal fold vibration (CQx) from the Laryngograph, together with 33 features from a prosodic analysis module, was used to model the listeners' ratings. The best human-machine results for roughness were obtained from a combination of six prosodic features and CFx (r = 0.71, ρ = 0.57). These correlations were approximately the same as the interrater agreement among human raters (r = 0.65, ρ = 0.61). CQx was one of the substantial features of the hoarseness model. For hoarseness and breathiness, the human-machine agreement was substantially lower. Nevertheless, the automatic analysis method can serve as the basis for meaningful objective support of perceptual analysis. PMID:26136813
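A brief sketch of the human-machine modelling step with scikit-learn's SVR, using synthetic placeholder features and ratings rather than the study's Laryngograph and prosodic measurements (and, for brevity, correlating in-sample rather than via the study's evaluation protocol):

```python
# Model perceptual ratings from acoustic features with Support Vector
# Regression, then report Pearson r and Spearman rho as in the paper.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(58, 7))          # e.g. six prosodic features + CFx
y = X @ rng.normal(size=7) + rng.normal(scale=0.3, size=58)  # ratings

model = SVR(kernel="rbf").fit(X, y)
pred = model.predict(X)
print(pearsonr(y, pred)[0], spearmanr(y, pred)[0])
```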

  13. Improving retrieval effectiveness by automatically creating multiscaled links between text and pictures

    NASA Astrophysics Data System (ADS)

    Malandain, Nicolas; Gaio, Mauro; Madelaine, Jacques

    2000-12-01

    This paper describes a method to improve retrieval of composite documents (text and graphics) by creating a set of internal links. We introduce the concept of granularity to add structure to this set of typed semantic links, obtained by multi-scale processing. We propose three major classes based on the capacity of a link to include, or be included by, other links. Global links are obtained with classical IR methods. A computational model allows automatic extraction of the textual information units contained in the text source of the global links; in our geographic corpus, these units denote georeferenced entities. A semantic representation of these entities is proposed that allows further cooperation with the processing of the graphical part of the document.

  14. Semi-automatic image personalization tool for variable text insertion and replacement

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

    Image personalization is a widely used technique in personalized marketing,1 in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie,2 DirectSmile,3 and AlphaPicture,4 image templates need to be created manually by graphic designers in order to produce this tailored marketing collateral, involving complex grid manipulation and detailed geometric adjustments. The image template design is thus highly manual, skill-demanding, and costly, and is essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which enables flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  15. FigSum: Automatically Generating Structured Text Summaries for Figures in Biomedical Literature

    PubMed Central

    Agarwal, Shashank; Yu, Hong

    2009-01-01

    Figures are frequently used in biomedical articles to support research findings; however, they are often difficult to comprehend based on their legends alone, and information from the full-text articles is required to fully understand them. Previously, we found that the information associated with a single figure is distributed throughout the full-text article in which the figure appears. Here, we develop and evaluate a figure summarization system, FigSum, which aggregates this scattered information to improve figure comprehension. For each figure in an article, FigSum generates a structured text summary comprising one sentence from each of the four rhetorical categories Introduction, Methods, Results and Discussion (IMRaD). The IMRaD category of each sentence is predicted by an automated machine learning classifier. Our evaluation shows that FigSum captures 53% of the sentences in the gold standard summaries annotated by biomedical scientists and achieves an average ROUGE-1 score of 0.70, which is higher than a baseline system. PMID:20351812
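FigSum's final assembly step can be sketched as picking the best-scoring sentence per IMRaD category; the categories and scores here are assumed inputs (the paper derives them with a learned classifier):

```python
# Select one sentence per IMRaD category to form a structured summary.
def figure_summary(scored_sentences):
    """scored_sentences: list of (sentence, category, score) triples."""
    best = {}
    for sent, cat, score in scored_sentences:
        if cat not in best or score > best[cat][1]:
            best[cat] = (sent, score)
    order = ["Introduction", "Methods", "Results", "Discussion"]
    return [best[c][0] for c in order if c in best]

sents = [("We study X.", "Introduction", 0.7),
         ("Cells were stained.", "Methods", 0.9),
         ("Figure 2 shows a 2-fold increase.", "Results", 0.95)]
print(figure_summary(sents))
```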

  16. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports.

    PubMed

    Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour- and time-intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available. PMID:26958232

  18. Texting

    ERIC Educational Resources Information Center

    Tilley, Carol L.

    2009-01-01

    With the increasing ranks of cell phone ownership is an increase in text messaging, or texting. During 2008, more than 2.5 trillion text messages were sent worldwide--that's an average of more than 400 messages for every person on the planet. Although many of the messages teenagers text each day are perhaps nothing more than "how r u?" or "c u

  19. SimSum: An Empirically Founded Simulation of Summarizing.

    ERIC Educational Resources Information Center

    Endres-Niggemeyer, Brigitte

    2000-01-01

    Describes SimSum (Simulation of Summarizing), which simulates 20 real-world working steps of expert summarizers. Presents an empirically founded cognitive model of summarizing and demonstrates that human summarization strategies can be simulated. Discusses current research in automatic summarization, summarizing in the World Wide Web, and

  20. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize digital echocardiogram videos by temporally segmenting them into their constituent views and representing each view by its most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of echocardiogram videos. Two different criteria are used: the presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to the different echocardiogram modes present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave; the corresponding frame is chosen as the key frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.

  1. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    PubMed

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-01-01

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies. PMID:18999169
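A toy sketch of the iterative pattern learning loop, with an invented three-sentence corpus and a single seed term; real systems rank patterns and candidate terms statistically rather than accepting everything:

```python
# Bootstrap a disease dictionary: seed terms yield contextual patterns,
# which in turn yield new candidate terms.
import re

corpus = ["patients with diabetes were randomized",
          "patients with asthma were randomized",
          "patients with hypertension were enrolled"]

seeds = {"diabetes"}
for _ in range(2):                         # a couple of iterations
    patterns = set()
    for text in corpus:                    # learn contexts around known terms
        for term in list(seeds):
            if term in text:
                left, right = text.split(term, 1)
                patterns.add((left, right))
    for text in corpus:                    # apply patterns to find new terms
        for left, right in patterns:
            m = re.match(re.escape(left) + r"(\w+)" + re.escape(right), text)
            if m:
                seeds.add(m.group(1))
print(seeds)  # {'diabetes', 'asthma'} once "patients with _ were randomized" is learned
```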

  2. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    PubMed

    Brandt, C; Nadkarni, P

    2001-01-01

    The Web is increasingly the medium of choice for multi-user application program delivery. Yet the selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience in converting a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages. PMID:11084231

  3. Experimenting with Automatic Text-to-Diagram Conversion: A Novel Teaching Aid for the Blind People

    ERIC Educational Resources Information Center

    Mukherjee, Anirban; Garain, Utpal; Biswas, Arindam

    2014-01-01

    Diagram-describing texts are an integral part of science and engineering subjects, including geometry, physics, engineering drawing, etc. To understand such a text, one first tries to draw or perceive the underlying diagram. For blind students to perceive them, such diagrams need to be drawn in some non-visual, accessible form like tactile…

  5. Summarizing drug information in Medline citations.

    PubMed

    Fiszman, Marcelo; Rindflesch, Thomas C; Kilicoglu, Halil

    2006-01-01

    Adverse drug events and drug-drug interactions are a major concern in patient care. Although databases exist to provide information about drugs, they are not always up-to-date and complete (particularly regarding pharmacogenetics). We propose a methodology based on automatic summarization to identify drug information in Medline citations and present results to the user in a convenient form. We evaluate the method on a selection of citations discussing ten drugs ranging from the proton pump inhibitor lansoprazole to the vasoconstrictor sumatriptan. We suggest that automatic summarization can provide a valuable adjunct to curated drug databases in supporting quality patient care. PMID:17238342

  6. Improved chemical text mining of patents with infinite dictionaries and automatic spelling correction.

    PubMed

    Sayle, Roger; Xie, Paul Hongxing; Muresan, Sorel

    2012-01-23

    The text mining of patents of pharmaceutical interest poses a number of unique challenges not encountered in other fields of text mining. Unlike fields, such as bioinformatics, where the number of terms of interest is enumerable and essentially static, systematic chemical nomenclature can describe an infinite number of molecules. Hence, the dictionary- and ontology-based techniques that are commonly used for gene names, diseases, species, etc., have limited utility when searching for novel therapeutic compounds in patents. Additionally, the length and the composition of IUPAC-like names make them more susceptible to typographic problems: OCR failures, human spelling errors, and hyphenation and line breaking issues. This work describes a novel technique, called CaffeineFix, designed to efficiently identify chemical names in free text, even in the presence of typographical errors. Corrected chemical names are generated as input for name-to-structure software. This forms a preprocessing pass, independent of the name-to-structure software used, and is shown to greatly improve the results of chemical text mining in our study. PMID:22148717

  7. The Automatic Assessment of Free Text Answers Using a Modified BLEU Algorithm

    ERIC Educational Resources Information Center

    Noorbehbahani, F.; Kardan, A. A.

    2011-01-01

    e-Learning plays an undoubtedly important role in today's education and assessment is one of the most essential parts of any instruction-based learning process. Assessment is a common way to evaluate a student's knowledge regarding the concepts related to learning objectives. In this paper, a new method for assessing the free text answers of

  8. Semi-Automatic Grading of Students' Answers Written in Free Text

    ERIC Educational Resources Information Center

    Escudeiro, Nuno; Escudeiro, Paula; Cruz, Augusto

    2011-01-01

    The correct grading of free text answers to exam questions during an assessment process is time consuming and subject to fluctuations in the application of evaluation criteria, particularly when the number of answers is high (in the hundreds). In consequence of these fluctuations, inherent to human nature, and largely determined by emotional

  9. EnvMine: A text-mining system for the automatic extraction of contextual information

    PubMed Central

    2010-01-01

    Background For ecological studies, it is crucial to have adequate descriptions of the environments and samples being studied. Such a description must be given in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult otherwise. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this has had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, such as published articles or web pages. This tool can help determine the precise location and physicochemical variables of sampling sites, thus facilitating ecological analyses. EnvMine can also help in the development of standards for the annotation of environmental features. PMID:20515448

  10. ABNER: an open source tool for automatically tagging genes, proteins and other entity names in text.

    PubMed

    Settles, Burr

    2005-07-15

    ABNER (A Biomedical Named Entity Recognizer) is an open source software tool for molecular biology text mining. At its core is a machine learning system using conditional random fields with a variety of orthographic and contextual features. The latest version is 1.5, which has an intuitive graphical interface and includes two modules for tagging entities (e.g. protein and cell line) trained on standard corpora, for which performance is roughly state of the art. It also includes a Java application programming interface allowing users to incorporate ABNER into their own systems and train models on new corpora. PMID:15860559
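The kind of orthographic and contextual features such a CRF-based tagger uses can be sketched per token; the feature names below are illustrative, not ABNER's actual feature set:

```python
# Per-token feature extraction of the sort fed to a CRF sequence tagger.
import re

def token_features(tokens, i):
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_capitalized": w[:1].isupper(),
        "has_digit": any(c.isdigit() for c in w),       # e.g. "IL-2"
        "greek_letter": bool(re.search(r"alpha|beta|gamma", w.lower())),
        "suffix3": w[-3:],                               # orthographic cue
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

print(token_features("the IL-2 receptor alpha gene".split(), 1))
```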

  11. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    PubMed Central

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  12. TEXTINFO: a tool for automatic determination of patient clinical profiles using text analysis.

    PubMed

    Borst, F; Lyman, M; Nhn, N T; Tick, L J; Sager, N; Scherrer, J R

    1991-01-01

    The clinical data contained in narrative patient documents are made available via grammatical and semantic processing. Retrievals from the resulting relational database tables are matched against a set of clinical descriptors to obtain clinical profiles of the patients in terms of the descriptors present in the documents. Discharge summaries of 57 Dept. of Digestive Surgery patients were processed in this manner. Factor analysis and discriminant analysis procedures were then applied, showing the profiles to be useful for diagnosis definitions (by establishing relations between diagnoses and clinical findings), for diagnosis assessment (by viewing the match between a definition and observed events recorded in a patient text), and potentially for outcome evaluation based on the classification abilities of clinical signs. PMID:1807679

  13. An automatic system to identify heart disease risk factors in clinical texts over time.

    PubMed

    Chen, Qingcai; Li, Haodi; Tang, Buzhou; Wang, Xiaolong; Liu, Xin; Liu, Zengjian; Liu, Shu; Wang, Weida; Deng, Qiwen; Zhu, Suisong; Chen, Yangxin; Wang, Jingfeng

    2015-12-01

    Despite recent progress in prediction and prevention, heart disease remains a leading cause of death. One preliminary step in heart disease prediction and prevention is risk factor identification. Many studies have been proposed to identify risk factors associated with heart disease; however, none have attempted to identify all risk factors. In 2014, the National Center of Informatics for Integrating Biology and the Bedside (i2b2) issued a clinical natural language processing (NLP) challenge that involved a track (track 2) for identifying heart disease risk factors in clinical texts over time. This track aimed to identify medically relevant information related to heart disease risk and track its progression over sets of longitudinal patient medical records. Identification of tags and attributes associated with disease presence and progression, risk factors, and medications in patient medical history was required. Our participation led to the development of a hybrid pipeline system based on both machine learning-based and rule-based approaches. Evaluation using the challenge corpus revealed that our system achieved an F1-score of 92.68%, making it the top-ranked system (without additional annotations) of the 2014 i2b2 clinical NLP challenge. PMID:26362344

  14. Automatism

    PubMed Central

    McCaldon, R. J.

    1964-01-01

    Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”. PMID:14199824

  15. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    NASA Astrophysics Data System (ADS)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

    In this communication, we illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied in text retrieval. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We assume a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys is also illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis is discussed.
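A toy rendering of the peak-weighting and ranking idea, with invented wavelengths and intensities:

```python
# Weight each database peak by its relative intensity and by the inverse
# count of database peaks in its wavelength neighbourhood, then rank
# elements by a cosine-style similarity to the sample's peak list.
import math

db = {  # element -> list of (wavelength_nm, relative_intensity)
    "Fe": [(371.9, 1.0), (373.5, 0.6)],
    "Cu": [(324.7, 1.0), (327.4, 0.8)],
}
sample_peaks = [(371.9, 0.9), (373.5, 0.5)]

def neighbours(wl, tol=0.5):
    """Number of database peaks within tol nm of wavelength wl."""
    return sum(1 for peaks in db.values() for w, _ in peaks if abs(w - wl) <= tol)

def score(element):
    num = sum(si * ei / neighbours(we)
              for ws, si in sample_peaks
              for we, ei in db[element] if abs(ws - we) <= 0.5)
    norm = math.sqrt(sum(i * i for _, i in db[element]))
    return num / norm

print(sorted(db, key=score, reverse=True))   # ['Fe', 'Cu'] for this sample
```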

  16. Text Mining and Natural Language Processing Approaches for Automatic Categorization of Lay Requests to Web-Based Expert Forums

    PubMed Central

    Reincke, Ulrich; Michelmann, Hans Wilhelm

    2009-01-01

    Background Both healthy and sick people increasingly use electronic media to obtain medical information and advice. For example, Internet users may send requests to Web-based expert forums, or so-called “ask the doctor” services. Objective To automatically classify lay requests to an Internet medical expert forum using a combination of different text-mining strategies. Methods We first manually classified a sample of 988 requests directed to an involuntary childlessness forum on the German website “Rund ums Baby” (“Everything about Babies”) into one or more of 38 categories belonging to two dimensions (“subject matter” and “expectations”). After creating start and synonym lists, we calculated the average Cramer’s V statistic for the association of each word with each category. We also used principal component analysis and singular value decomposition as further text-mining strategies. With these measures we trained regression models and determined, on the basis of the best regression models, for any request the probability of belonging to each of the 38 different categories, with a cutoff of 50%. Recall and precision of a test sample were calculated as a measure of quality for the automatic classification. Results According to the manual classification of 988 documents, 102 (10%) documents fell into the category “in vitro fertilization (IVF),” 81 (8%) into the category “ovulation,” 79 (8%) into “cycle,” and 57 (6%) into “semen analysis.” These were the four most frequent categories in the subject matter dimension (consisting of 32 categories). The expectation dimension comprised six categories; we classified 533 documents (54%) as “general information” and 351 (36%) as a wish for “treatment recommendations.” The generation of indicator variables based on the chi-square analysis and Cramer’s V proved to be the best approach for automatic classification in about half of the categories. In combination with the two other approaches, 100% precision and 100% recall were realized in 18 (47%) out of the 38 categories in the test sample. For 35 (92%) categories, precision and recall were better than 80%. For some categories, the input variables (ie, “words”) also included variables from other categories, most often with a negative sign. For example, absence of words predictive for “menstruation” was a strong indicator for the category “pregnancy test.” Conclusions Our approach suggests a way of automatically classifying and analyzing unstructured information in Internet expert forums. The technique can perform a preliminary categorization of new requests and help Internet medical experts to better handle the mass of information and to give professional feedback. PMID:19632978
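The word-category association measure used above is standard; a minimal computation of Cramer's V from a 2x2 contingency table (counts invented) looks like this:

```python
# Cramer's V for the association between the presence of a word and
# membership in a category, from a 2x2 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# rows: word present / absent; cols: in category / not in category
table = np.array([[30, 5],
                  [20, 45]])
chi2, _, _, _ = chi2_contingency(table, correction=False)
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(round(v, 3))   # ~0.52: moderately strong association
```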

  17. Large-scale automatic extraction of side effects associated with targeted anticancer drugs from full-text oncological articles.

    PubMed

    Xu, Rong; Wang, QuanQiu

    2015-06-01

    Targeted anticancer drugs such as imatinib, trastuzumab and erlotinib dramatically improved treatment outcomes in cancer patients; however, these innovative agents are often associated with unexpected side effects. The pathophysiological mechanisms underlying these side effects are not well understood. The availability of a comprehensive knowledge base of side effects associated with targeted anticancer drugs has the potential to illuminate complex pathways underlying toxicities induced by these innovative drugs. While side effect association knowledge for targeted drugs exists in multiple heterogeneous data sources, published full-text oncological articles represent an important source of pivotal, investigational, and even failed trials in a variety of patient populations. In this study, we present an automatic process to extract targeted anticancer drug-associated side effects (drug-SE pairs) from a large number of high-profile full-text oncological articles. We downloaded 13,855 full-text articles from the Journal of Oncology (JCO) published between 1983 and 2013. We developed text classification, relationship extraction, signal filtering, and signal prioritization algorithms to extract drug-SE pairs from the downloaded articles. We extracted a total of 26,264 drug-SE pairs with an average precision of 0.405, a recall of 0.899, and an F1 score of 0.465. We show that side effect knowledge from JCO articles is largely complementary to that from the US Food and Drug Administration (FDA) drug labels. Through integrative correlation analysis, we show that targeted drug-associated side effects positively correlate with their gene targets and disease indications. In conclusion, this unique database that we built from a large number of high-profile oncological articles could facilitate the development of computational models to understand toxic effects associated with targeted anticancer drugs. PMID:25817969

  18. Automatic recognition of disorders, findings, pharmaceuticals and body structures from clinical text: an annotation and machine learning study.

    PubMed

    Skeppstedt, Maria; Kvist, Maria; Nilsson, Gunnar H; Dalianis, Hercules

    2014-06-01

    Automatic recognition of clinical entities in the narrative text of health records is useful for constructing applications for documentation of patient care, as well as for secondary usage in the form of medical knowledge extraction. There are a number of named entity recognition studies on English clinical text, but less work has been carried out on clinical text in other languages. This study was performed on Swedish health records, and focused on four entities that are highly relevant for constructing a patient overview and for medical hypothesis generation, namely the entities: Disorder, Finding, Pharmaceutical Drug and Body Structure. The study had two aims: to explore how well named entity recognition methods previously applied to English clinical text perform on similar texts written in Swedish; and to evaluate whether it is meaningful to divide the more general category Medical Problem, which has been used in a number of previous studies, into the two more granular entities, Disorder and Finding. Clinical notes from a Swedish internal medicine emergency unit were annotated for the four selected entity categories, and the inter-annotator agreement between two pairs of annotators was measured, resulting in an average F-score of 0.79 for Disorder, 0.66 for Finding, 0.90 for Pharmaceutical Drug and 0.80 for Body Structure. A subset of the developed corpus was thereafter used for finding suitable features for training a conditional random fields model. Finally, a new model was trained on this subset, using the best features and settings, and its ability to generalise to held-out data was evaluated. This final model obtained an F-score of 0.81 for Disorder, 0.69 for Finding, 0.88 for Pharmaceutical Drug, 0.85 for Body Structure and 0.78 for the combined category Disorder+Finding. The obtained results, which are in line with or slightly lower than those for similar studies on English clinical text, many of them conducted using a larger training data set, show that the approaches used for English are also suitable for Swedish clinical text. However, a small proportion of the errors made by the model are less likely to occur in English text, showing that results might be improved by further tailoring the system to clinical Swedish. The entity recognition results for the individual entities Disorder and Finding show that it is meaningful to separate the general category Medical Problem into these two more granular entity types, e.g. for knowledge mining of co-morbidity relations and disorder-finding relations. PMID:24508177

  19. TEXT CLASSIFICATION FOR AUTOMATIC DETECTION OF E-CIGARETTE USE AND USE FOR SMOKING CESSATION FROM TWITTER: A FEASIBILITY PILOT.

    PubMed

    Aphinyanaphongs, Yin; Lulejian, Armine; Brown, Duncan Penfold; Bonneau, Richard; Krebs, Paul

    2016-01-01

    Rapid increases in e-cigarette use and potential exposure to harmful byproducts have shifted public health focus to e-cigarettes as a possible drug of abuse. Effective surveillance of use and prevalence would allow appropriate regulatory responses. An ideal surveillance system would collect usage data in real time, focus on populations of interest, include populations unable to take the survey, allow a breadth of questions to answer, and enable geo-location analysis. Social media streams may provide this ideal system. To realize this use case, a foundational question is whether we can detect e-cigarette use at all. This work reports two pilot tasks using text classification to automatically identify tweets that indicate e-cigarette use and/or e-cigarette use for smoking cessation. We build and define both datasets and compare the performance of four state-of-the-art classifiers and a keyword search for each task. Our results demonstrate excellent classifier performance of up to 0.90 and 0.94 area under the curve in each category. These promising initial results form the foundation for further studies to realize the ideal surveillance solution. PMID:26776211

  1. QCS: a system for querying, clustering and summarizing documents.

    SciTech Connect

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O'Leary, Dianne P.; Conroy, John M.

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users with more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
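A compressed sketch of the query-cluster-summarize flow with stand-ins: TF-IDF plus truncated SVD for the LSI retrieval step, plain k-means instead of generalized spherical k-means, and a first-sentence extract in place of the HMM/pivoted-QR summarizer:

```python
# Query -> Cluster -> Summarize, each stage swappable as in QCS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = ["The court ruled on the appeal. The decision was unanimous.",
        "Heavy rain flooded the valley. Rivers crested overnight.",
        "Storms continue to batter the coast. Flooding is expected.",
        "The judge issued a new ruling. Lawyers plan to appeal."]
query = ["flood storm rain"]

vec = TfidfVectorizer().fit(docs + query)
lsi = TruncatedSVD(n_components=2, random_state=0).fit(vec.transform(docs))
d, q = lsi.transform(vec.transform(docs)), lsi.transform(vec.transform(query))

scores = cosine_similarity(q, d)[0]
hits = [docs[i] for i in scores.argsort()[::-1][:3]]          # query step
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    vec.transform(hits).toarray())                            # cluster step
for c in set(labels):                                         # summarize step
    first_doc = next(h for h, l in zip(hits, labels) if l == c)
    print(c, "->", first_doc.split(". ")[0] + ".")
```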

  3. Degree centrality for semantic abstraction summarization of therapeutic studies

    PubMed Central

    Zhang, Han; Fiszman, Marcelo; Shin, Dongwook; Miller, Christopher M.; Rosemblat, Graciela; Rindflesch, Thomas C.

    2011-01-01

    Automatic summarization has been proposed to help manage the results of biomedical information retrieval systems. Semantic MEDLINE, for example, summarizes semantic predications representing assertions in MEDLINE citations. Results are presented as a graph which maintains links to the original citations. Graphs summarizing more than 500 citations are hard to read and navigate, however. We exploit graph theory for focusing these large graphs. The method is based on degree centrality, which measures connectedness in a graph. Four categories of clinical concepts related to treatment of disease were identified and presented as a summary of input text. A baseline was created using term frequency of occurrence. The system was evaluated on summaries for treatment of five diseases compared to a reference standard produced manually by two physicians. The results showed that recall for system results was 72%, precision was 73%, and F-score was 0.72. The system F-score was considerably higher than that for the baseline (0.47). PMID:21575741
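A minimal sketch of the degree-centrality focusing step on an invented predication graph:

```python
# Keep the most connected concepts of a predication graph as the summary.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("metformin", "TREATS:diabetes"), ("metformin", "CAUSES:nausea"),
    ("insulin", "TREATS:diabetes"), ("metformin", "INTERACTS:alcohol"),
])
central = nx.degree_centrality(g)
summary = sorted(central, key=central.get, reverse=True)[:2]
print(summary)   # the highest-degree nodes; 'metformin' ranks first here
```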

  4. User and Device Adaptation in Summarizing Sports Videos

    NASA Astrophysics Data System (ADS)

    Nitta, Naoko; Babaguchi, Noboru

    Video summarization is defined as creating a video summary which includes only important scenes in the original video streams. In order to realize automatic video summarization, the significance of each scene needs to be determined. When targeted especially on broadcast sports videos, a play scene, which corresponds to a play, can be considered as a scene unit. The significance of every play scene can generally be determined based on the importance of the play in the game. Furthermore, the following two issues should be considered: 1) what is important depends on each user's preferences, and 2) the summaries should be tailored for media devices that each user has. Considering the above issues, this paper proposes a unified framework for user and device adaptation in summarizing broadcast sports videos. The proposed framework summarizes sports videos by selecting play scenes based on not only the importance of each play itself but also the users' preferences by using the metadata, which describes the semantic content of videos with keywords, and user profiles, which describe users' preference degrees for the keywords. The selected scenes are then presented in a proper way using various types of media such as video, image, or text according to device profiles which describe the device type. We experimentally verified the effectiveness of user adaptation by examining how the generated summaries are changed by different preference degrees and by comparing our results with/without using user profiles. The validity of device adaptation is also evaluated by conducting questionnaires using PCs and mobile phones as the media devices.

  5. Adaptive Maximum Marginal Relevance Based Multi-email Summarization

    NASA Astrophysics Data System (ADS)

    Wang, Baoxun; Liu, Bingquan; Sun, Chengjie; Wang, Xiaolong; Li, Bo

    By analyzing the inherent relationship between the maximum marginal relevance (MMR) model and the content cohesion of emails with the same subject, this paper presents an adaptive maximum marginal relevance based multi-email summarization method. Thanks to an approximate computation of email content cohesion, the adaptive MMR is able to automatically adjust its parameters according to changes in the email sets. The experimental results show that an email summarization system based on this technique can increase precision while reducing the redundancy of the automatic summary results, consequently improving the average quality of email summaries.
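Generic MMR selection can be sketched as follows; the paper's adaptive element sets the trade-off parameter from email content cohesion, which is approximated here by a fixed constant:

```python
# Maximum marginal relevance: greedily pick items that are relevant to
# the query but not redundant with what is already selected.
def mmr_select(sim_to_query, sim, k, lam=0.5):
    """sim_to_query: relevance scores; sim[i][j]: pairwise similarity."""
    selected, candidates = [], list(range(len(sim_to_query)))
    while candidates and len(selected) < k:
        def mmr(i):
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * sim_to_query[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = [0.9, 0.85, 0.3]
sim = [[1.0, 0.95, 0.1], [0.95, 1.0, 0.2], [0.1, 0.2, 1.0]]
print(mmr_select(rel, sim, k=2))   # [0, 2]: item 1 is redundant with item 0
```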

  6. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.

  7. Summarizing Social Disparities in Health

    PubMed Central

    Asada, Yukiko; Yoshida, Yoko; Whipp, Alyce M

    2013-01-01

    Context Reporting on health disparities is fundamental for meeting the goal of reducing health disparities. One often overlooked challenge is determining the best way to report those disparities associated with multiple attributes such as income, education, sex, and race/ethnicity. This article proposes an analytical approach to summarizing social disparities in health, and we demonstrate its empirical application by comparing the degrees and patterns of health disparities in all fifty states and the District of Columbia (DC). Methods We used the 2009 American Community Survey, and our measure of health was functional limitation. For each state and DC, we calculated the overall disparity and attribute-specific disparities for income, education, sex, and race/ethnicity in functional limitation. Along with the state rankings of these health disparities, we developed health disparity profiles according to the attribute making the largest contribution to overall disparity in each state. Findings Our results show a general lack of consistency in the rankings of overall and attribute-specific disparities in functional limitation across the states. Wyoming has the smallest overall disparity and West Virginia the largest. In each of the four attribute-specific health disparity rankings, however, most of the best- and worst-performing states in regard to overall health disparity are not consistently good or bad. Our analysis suggests the following three disparity profiles across states: (1) the largest contribution from race/ethnicity (thirty-four states), (2) roughly equal contributions of race/ethnicity and socioeconomic factor(s) (ten states), and (3) the largest contribution from socioeconomic factor(s) (seven states). Conclusions Our proposed approach offers policy-relevant health disparity information in a comparable and interpretable manner, and currently publicly available data support its application. We hope this approach will spark discussion regarding how best to systematically track health disparities across communities or within a community over time in relation to the health disparity goal of Healthy People 2020. PMID:23488710

  8. Combining automatic table classification and relationship extraction in extracting anticancer drug-side effect pairs from full-text articles.

    PubMed

    Xu, Rong; Wang, QuanQiu

    2015-02-01

    Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug-side effect (drug-SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach combining table classification and relationship extraction to extract drug-SE pairs from a large number of high-profile oncological full-text articles. The data consist of 31,255 tables downloaded from the Journal of Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and -unrelated categories. We then extracted drug-SE pairs from SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anticancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables into SE-related and -unrelated (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug-SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug-SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables have not been included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications. PMID:25445920

  9. Summarize to Get the Gist

    ERIC Educational Resources Information Center

    Collins, John

    2012-01-01

    As schools prepare for the common core state standards in literacy, they'll be confronted with two challenges: first, helping students comprehend complex texts, and, second, training students to write arguments supported by factual evidence. A teacher's response to these challenges might be to lead class discussions about complex reading or assign

  10. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
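One plausible "non-informative" frame test of the kind such a summarizer needs is a blur check via the variance of the Laplacian; the threshold and file name below are assumptions, and the paper's actual criteria are richer:

```python
# Discard heavily blurred frames: low Laplacian variance means few edges.
import cv2

def is_informative(frame_bgr, threshold=100.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold

# Usage: count informative frames while reading a (hypothetical) recording.
cap = cv2.VideoCapture("bronchoscopy.avi")
kept = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    kept += is_informative(frame)
print("informative frames:", kept)
```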

  11. Summarizing qualitative behavior from measurements of nonlinear circuits, revision

    NASA Astrophysics Data System (ADS)

    Lee, Michelle K.

    1989-05-01

    Exploring the behavior of nonlinear, dynamical systems can be a time-consuming and tedious process. A program was written which automates much of the work of an experimental dynamicist. In particular, the program automatically characterizes the behavior of any driven, nonlinear electrical circuit exhibiting interesting behavior below the 10 MHz range. In order to accomplish this task, the program can autonomously select interesting input parameters, drive the circuit, measure its response, perform a set of numeric computations on the measured data, interpret the results and decompose the circuit's parameter space into regions of qualitatively distinct behavior. The output is a two-dimensional portrait summarizing the high-level, qualitative behavior of the nonlinear circuit for every point in the graph, along with an accompanying textual explanation describing any interesting patterns observed in the diagram. In addition to the graph and the text, the program generates a symbolic description of the circuit's behavior. This intermediate data structure can then be passed on to other programs for further analysis.

  12. Statistical Methods for Summarizing Independent Correlational Results.

    ERIC Educational Resources Information Center

    Viana, Marlos A. G.

    1980-01-01

    Statistical techniques for summarizing results from independent correlational studies are presented. The case in which only the sample correlation coefficients are available and the case in which the original paired data are available are both considered. (Author/JKS)

  13. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is based on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection methods. Sounds such as a swing followed by applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. Thanks to highly precise detection of applause, highlights are extracted effectively. Our experimental results show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough to be applied in embedded consumer electronic devices.
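
    A minimal sketch of the audio-only idea, assuming MFCC features and an SVM stand in for the authors' sound classifiers; the synthetic noise and tone clips are toy stand-ins for applause and speech.

        import numpy as np
        import librosa
        from sklearn.svm import SVC

        def window_features(y, sr):
            # mean MFCC vector over a short audio window
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

        sr = 16000
        applause = np.random.randn(sr).astype(np.float32)    # noise-like clip
        speech = np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)

        X = np.stack([window_features(applause, sr), window_features(speech, sr)])
        y = np.array([1, 0])                                 # 1 = applause
        clf = SVC().fit(X, y)

        # windows classified as applause become highlight anchors
        test = np.random.randn(sr).astype(np.float32)
        print(clf.predict([window_features(test, sr)]))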

  14. Macroprocesses and Microprocesses in the Development of Summarization Skill.

    ERIC Educational Resources Information Center

    Kintsch, Eileen

    A study investigated how students' mental representation of an expository text and the inferences they used in summarizing varied as a function of text difficulty and of differences in the task. Subjects, 96 college students and students from grades 6 and 10, wrote summaries of expository texts and answered orally several probe questions about the

  15. Tracking Visible Targets Automatically

    NASA Technical Reports Server (NTRS)

    Armstrong, R. W.

    1984-01-01

    Report summarizes techniques for automatic pointing of scientific instruments by reference to visible targets. Applications foreseen in industrial robotics. Measurement done by image analysis based on gradient edge location, image-centroid location and/or outline matching.

  16. MPEG content summarization based on compressed domain feature analysis

    NASA Astrophysics Data System (ADS)

    Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa

    2003-11-01

    This paper addresses automatic summarization of MPEG audiovisual content in the compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes MPEG-1/-2 content in the form of a digest or a highlight: the former is a shortened version of the original, while the latter is an aggregation of important or interesting events. In our proposal, the incoming MPEG stream is first segmented into shots and the above features are derived for each shot. The features are then adaptively evaluated in an integrated manner, and finally the qualified shots are aggregated into a summary. Since all processing is performed entirely in the compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation shows that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description, such as summary and audiovisual segments, in the course of summarization.
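
    A minimal sketch of the final aggregation step: shots with per-shot importance scores are selected into a highlight under a duration budget. The scores and durations are toy values; the paper derives them from compressed-domain audiovisual features.

        shots = [  # (shot_id, duration_sec, importance)
            (0, 4.0, 0.2), (1, 6.5, 0.9), (2, 3.0, 0.7), (3, 5.0, 0.4),
        ]
        BUDGET = 10.0  # assumed summary length in seconds

        selected, used = [], 0.0
        for sid, dur, score in sorted(shots, key=lambda s: -s[2]):
            if used + dur <= BUDGET:
                selected.append(sid)
                used += dur
        selected.sort()  # restore temporal order for playback
        print(selected)  # -> [1, 2]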

  17. On the Application of Generic Summarization Algorithms to Music

    NASA Astrophysics Data System (ADS)

    Raposo, Francisco; Ribeiro, Ricardo; de Matos, David Martins

    2015-01-01

    Several generic summarization algorithms were developed in the past and successfully applied in fields such as text and speech summarization. In this paper, we review and apply these algorithms to music. To evaluate this summarization's performance, we adopt an extrinsic approach: we compare a Fado Genre Classifier's performance using truncated contiguous clips against the summaries extracted with those algorithms on 2 different datasets. We show that Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA) all improve classification performance in both datasets used for testing.

  18. Summarization of Multiple Documents with Rhetorical Annotation

    NASA Astrophysics Data System (ADS)

    Aya, Sohei; Matsuo, Yutaka; Okazaki, Naoaki; Hasida, Kôiti; Ishizuka, Mitsuru

    In this paper, we propose a new summarization algorithm which targets a new kind of structured content. The structured content, which is to be created by semantic authoring, consists of sentences and rhetorical relations among sentences: it is represented by a graph, where a node is a sentence and an edge is a rhetorical relation. We simulate the creation of this content graph by using newspaper articles annotated with rhetorical relations using the GDA tagset. Our summarization method basically uses spreading activation over the content graph, followed by particular postprocessing to increase the readability of the resultant summary. Experimental evaluation shows our method is at least equal to or better than the Lead method for summarizing newspaper articles.
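
    A minimal sketch of spreading activation over a content graph, assuming a toy four-sentence graph, seed node, and decay factor; the authors' GDA-annotated data and readability postprocessing are not reproduced.

        import numpy as np

        A = np.array([[0, 1, 0, 0],   # adjacency: edges are rhetorical relations
                      [1, 0, 1, 1],
                      [0, 1, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        activation = np.array([1.0, 0.0, 0.0, 0.0])  # seed: sentence 0
        decay = 0.5                                   # assumed decay factor

        for _ in range(10):
            activation = activation + decay * A.dot(activation)
            activation /= activation.max()            # keep values bounded

        ranked = np.argsort(-activation)              # candidate summary order
        print(ranked)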

  19. A coherent graph-based semantic clustering and summarization approach for biomedical literature and a new summarization evaluation method

    PubMed Central

    Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol

    2007-01-01

    Background A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. In order to easily utilize the biomedical information in free text, document clustering and text summarization together are used as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Results Our extensive experimental results show that the approach achieves a 45% cluster quality improvement and a 72% clustering reliability improvement, in terms of misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides a concise but rich text summary in key concepts and sentences. Conclusion Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries. PMID:18047705

  20. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
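
    A minimal sketch of the display step under stated assumptions: documents are vectorized, pairwise similarity is computed, and a tree is built so that nearby nodes are similar documents. Agglomerative clustering stands in for the patent's tree construction, which is not specified here.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from scipy.cluster.hierarchy import linkage, dendrogram

        docs = ["reactor safety report", "reactor maintenance log",
                "quarterly budget memo"]                  # toy corpus
        X = TfidfVectorizer().fit_transform(docs).toarray()

        # average-linkage tree over cosine distances between documents
        tree = linkage(X, method="average", metric="cosine")
        print(dendrogram(tree, no_plot=True)["ivl"])      # leaf (node) order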

  1. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  2. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  3. Disease Related Knowledge Summarization Based on Deep Graph Search

    PubMed Central

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease related knowledge summarization from biomedical literature. In this approach, firstly Kullback-Leibler Divergence combined with mutual information metric is used to extract disease salient information. Then deep search based on depth first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally random walk algorithm is exploited to filter out the weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for Carcinoma of bladder and outperforms the method of Combo. PMID:26413521
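
    A minimal sketch of the salience step, scoring each term by its contribution to the KL divergence between a disease-specific corpus and a background corpus. The corpora are toy word lists; the paper additionally combines mutual information, DFS search, and random-walk filtering.

        import math
        from collections import Counter

        disease_docs = "bladder carcinoma tumor bladder therapy".split()
        background = "therapy report patient tumor baseline".split()

        p = Counter(disease_docs); q = Counter(background)
        Np, Nq = sum(p.values()), sum(q.values())

        def kl_term(w, eps=1e-9):
            # per-term contribution to KL(P || Q), smoothed for unseen terms
            pw = p[w] / Np
            qw = q.get(w, 0) / Nq + eps
            return pw * math.log(pw / qw)

        salient = sorted(p, key=kl_term, reverse=True)
        print(salient[:3])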

  5. Effective Replays and Summarization of Virtual Experiences

    PubMed Central

    Ponto, Kevin; Kohlmann, Joe; Gleicher, Michael

    2012-01-01

    Direct replays of the experience of a user in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods. PMID:22402688

  6. Review Mining for Feature based Opinion Summarization and Visualization

    NASA Astrophysics Data System (ADS)

    Kamal, Ahmad

    2015-06-01

    The application and usage of opinion mining, especially for business intelligence, product recommendation, targeted marketing etc., have attracted considerable research attention around the globe. Various research efforts have attempted to mine opinions from customer reviews at different levels of granularity, including word-, sentence-, and document-level. However, the development of a fully automatic opinion mining and sentiment analysis system is still elusive. Though the development of opinion mining and sentiment analysis systems is gaining momentum, most of them attempt to perform document-level sentiment analysis, classifying a review document as positive, negative, or neutral. Such document-level opinion mining approaches fail to provide insight into users' sentiment on individual features of a product or service. Therefore, it would be a great help for both customers and manufacturers if the reviews could be processed at a finer-grained level and presented in a summarized form through some visual means, highlighting individual features of a product and the sentiment users expressed over them. In this paper, the design of a unified opinion mining and sentiment analysis framework is presented at the intersection of machine learning and natural language processing approaches. In addition, the design of a novel feature-level review summarization scheme is proposed to visualize mined features, opinions and their polarity values in a comprehensible way.

  7. An unsupervised method for summarizing egocentric sport videos

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are becoming increasingly interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has motion and appearance patterns different from those of life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of the studies.
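
    A minimal sketch of unsupervised key-frame selection with an automatically chosen number of key-frames: cluster per-frame descriptors, pick the number of clusters by silhouette score, and take each cluster's nearest frame. Random vectors stand in for the appearance and motion features; the authors' actual model-selection criterion is not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(0)
        frames = rng.normal(size=(120, 16))      # toy per-frame descriptors

        best = None
        for k in range(2, 8):
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
            score = silhouette_score(frames, km.labels_)
            if best is None or score > best[0]:
                best = (score, km)

        km = best[1]
        # key-frame = frame closest to each cluster centre
        keyframes = sorted(int(np.argmin(((frames - c) ** 2).sum(axis=1)))
                           for c in km.cluster_centers_)
        print(len(km.cluster_centers_), keyframes)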

  8. Adaptive detection of missed text areas in OCR outputs: application to the automatic assessment of OCR quality in mass digitization projects

    NASA Astrophysics Data System (ADS)

    Ben Salah, Ahmed; Ragot, Nicolas; Paquet, Thierry

    2013-01-01

    The French National Library (BnF*) has launched many mass digitization projects in order to give access to its collection. The indexing of digital documents on Gallica (the digital library of the BnF) is based on their textual content, obtained through service providers that use Optical Character Recognition (OCR) software. OCR software has become an increasingly complex system composed of several subsystems dedicated to the analysis and recognition of the elements in a page. However, the reliability of these systems is always an issue at stake. Indeed, in some cases, errors occur in OCR outputs because of an accumulation of several errors at different levels of the OCR process. One of the frequent errors in OCR outputs is missed text components. The presence of such errors may lead to severe defects in digital libraries. In this paper, we investigate the detection of missed text components to control the OCR results from the collections of the French National Library. Our verification approach uses local information inside the pages, based on Radon transform descriptors and Local Binary Patterns (LBP) descriptors coupled with OCR results, to control their consistency. The experimental results show that our method detects 84.15% of the missed textual components when comparing the OCR ALTO output files (produced by the service providers) to the images of the documents.
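
    A minimal sketch of the texture-consistency idea, assuming LBP histograms compared against a reference "text texture" model; the reference histogram and threshold are illustrative placeholders, and the paper also uses Radon transform descriptors.

        import numpy as np
        from skimage.feature import local_binary_pattern

        P, R = 8, 1.0  # LBP neighbourhood parameters

        def lbp_histogram(region):
            codes = local_binary_pattern(region, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                   density=True)
            return hist

        # hypothetical page patch for which the ALTO file reports no text
        region = (np.random.rand(64, 64) * 255).astype(np.uint8)
        reference_text_hist = np.ones(P + 2) / (P + 2)   # assumed text model
        dist = np.abs(lbp_histogram(region) - reference_text_hist).sum()
        print("possible missed text" if dist < 0.5 else "consistent with OCR")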

  9. Person-based video summarization and retrieval by tracking and clustering temporal face sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Wen, Di; Ding, Xiaoqing

    2013-03-01

    People are often the most important subjects in videos. It is highly desirable to automatically summarize the occurrences of different people in a large collection of videos and to quickly find the clips containing a particular person among them. In this paper, we present a person-based video summarization and retrieval system named VideoWho, which extracts temporal face sequences in videos and groups them into clusters, with each cluster containing video clips of the same person. This is accomplished with advanced face detection and tracking algorithms, together with a semisupervised face clustering approach. The system achieved good clustering accuracy when tested on a hybrid video set including home videos, TV plays and movies. On top of this technology, a number of applications can be built, such as automatic summarization of major characters in videos, person-related video search on the Internet, and personalized UI systems.

  10. Contextual Text Mining

    ERIC Educational Resources Information Center

    Mei, Qiaozhu

    2009-01-01

    With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written, and the…

  12. The Relations among Summarizing Instruction, Support for Student Choice, Reading Engagement and Expository Text Comprehension

    ERIC Educational Resources Information Center

    Littlefield, Amy Root

    2011-01-01

    Research on early adolescence reveals significant declines in intrinsic motivation for reading and points out the need for metacognitive strategy use among middle school students. Research indicates that explicit instruction involving motivation and metacognitive support for reading strategy use in the context of a discipline is an efficient and

  13. Medical Textbook Summarization and Guided Navigation using Statistical Sentence Extraction

    PubMed Central

    Whalen, Gregory

    2005-01-01

    We present a method for automated medical textbook and encyclopedia summarization. Using statistical sentence extraction and semantic relationships, we extract sentences from text returned as part of an existing textbook search (similar to a book index). Our system guides users to the information they desire by summarizing the content of each relevant chapter or section returned through the search. The summary is tailored to contain sentences that specifically address the user's search terms. Our clustering method selects sentences that contain concepts specifically addressing the context of the query term in each of the returned sections. Our method examines conceptual relationships from the UMLS and selects clusters of concepts using Expectation Maximization (EM). Sentences associated with the concept clusters are shown to the user. We evaluated whether our extracted summary provides a suitable answer to the user's question. PMID:16779153

  14. Blind summarization: content-adaptive video summarization using time-series analysis

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Radhakrishnan, Regunathan; Peker, Kadir A.

    2006-01-01

    Severe complexity constraints on consumer electronic devices motivate us to investigate general-purpose video summarization techniques that are able to apply a common hardware setup to multiple content genres. On the other hand, we know that high-quality summaries can only be produced with domain-specific processing. In this paper, we present a time-series analysis based video summarization technique that provides a general core to which we are able to add small content-specific extensions for each genre. The proposed time-series analysis technique consists of unsupervised clustering of samples taken through sliding windows from the time series of features obtained from the content. We classify content into two broad categories: scripted content such as news and drama, and unscripted content such as sports and surveillance. The summarization problem then reduces to either finding semantic boundaries of the scripted content or detecting highlights in the unscripted content. The proposed technique is essentially an event detection technique and is thus best suited to unscripted content; however, we also find applications to scripted content. We thoroughly examine the trade-off between content-neutral and content-specific processing for effective summarization across a number of genres, and find that our core technique enables us to minimize the complexity of the content-specific processing and to postpone it to the final stage. We achieve the best results with unscripted content such as sports and surveillance video, in terms of quality of summaries and minimization of content-specific processing. For other genres such as drama, we find that more content-specific processing is required. We also find that a judicious choice of key audio-visual object detectors enables us to minimize the complexity of the content-specific processing while maintaining its applicability to a broad range of genres. We will present a demonstration of our proposed technique at the conference.
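
    A minimal sketch of the core technique: slide a window over a feature time series, cluster the windows, and treat windows outside the dominant cluster as candidate events. The synthetic feature stream and two-cluster setup are assumptions; the genre-specific extensions are omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        series = rng.normal(0, 1, 500)        # toy per-frame feature stream
        series[300:320] += 6                  # injected "event"

        W = 20                                # sliding-window length
        windows = np.stack([series[i:i + W] for i in range(len(series) - W)])
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(windows)

        dominant = np.bincount(km.labels_).argmax()
        events = np.where(km.labels_ != dominant)[0]   # window start indices
        print(events.min(), events.max())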

  15. Reorganized text.

    PubMed

    2015-05-01

    Reorganized Text: In the Original Investigation titled “Patterns of Hospital Utilization for Head and Neck Cancer Care: Changing Demographics” posted online in the January 29, 2015, issue of JAMA Otolaryngology–Head & Neck Surgery (doi:10.1001/jamaoto.2014.3603), information was copied within sections and text rearranged to accommodate Continuing Medical Education quiz formatting. The information from the topic statements of each paragraph in the Hypothesis Testing subsection of the Methods section was collected in a new first paragraph for that subsection, which reads as follows: “Several hypotheses regarding the causes of regionalization of HNCA care were tested using the NIS data: (1) increasing patient comorbidities over time, causing a shift in care to teaching institutions that would theoretically be better equipped to handle such increased comorbidities; (2) shifting of payer status; (3) increased proportion of prior radiation therapy; and (4) a higher fraction of more complex procedures being referred and performed at teaching institutions.” In addition, the phrase "As summarized in Table 3," was added to the beginning of paragraph 6 of the Discussion section, and the call-out to Table 3 in the middle of that paragraph was deleted. Finally, paragraphs 6 and 7 of the Discussion section were combined. PMID:25996397

  16. Automatic Imitation

    ERIC Educational Resources Information Center

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  18. Machine Translation from Text

    NASA Astrophysics Data System (ADS)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well-defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  19. More than a "Basic Skill": Breaking down the Complexities of Summarizing for ABE/ESL Learners

    ERIC Educational Resources Information Center

    Ouellette-Schramm, Jennifer

    2015-01-01

    This article describes the complex cognitive and linguistic challenges of summarizing expository text at vocabulary, syntactic, and rhetorical levels. It then outlines activities to help ABE/ESL learners develop corresponding skills.

  20. DeTEXT: A Database for Evaluating Text Extraction from Biomedical Literature Figures

    PubMed Central

    Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong

    2015-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/. PMID:25951377

  1. WOLF; automatic typing program

    USGS Publications Warehouse

    Evenden, G.I.

    1982-01-01

    A FORTRAN IV program for the Hewlett-Packard 1000 series computer provides automatic typing operations and, when employed with the manufacturer's text editor, can provide a system that greatly facilitates preparation of reports, letters and other text. The input text and embedded control data can perform nearly all of the functions of a typist. A few of the available features are centering, titles, footnotes, indentation, page numbering (including Roman numerals), automatic paragraphing, and two forms of tab operations. This documentation contains both a user and a technical description of the program.

  2. Automatic transmission

    SciTech Connect

    Miura, M.; Aoki, H.

    1988-02-02

    An automatic transmission is described comprising: an automatic transmission mechanism portion comprising a single planetary gear unit and a dual planetary gear unit; carriers of both of the planetary gear units that are integral with one another; an input means for inputting torque to the automatic transmission mechanism; clutches for operatively connecting predetermined ones of the planetary gear elements of both planetary gear units to the input means; and braking means for restricting the rotation of predetermined ones of the planetary gear elements of both planetary gear units. The clutches are disposed adjacent one another at an end portion of the transmission, defining a clutch portion of the transmission; a first clutch portion is attachable to the automatic transmission mechanism portion to comprise the clutch portion when attached thereto; a second clutch portion is attachable to the automatic transmission mechanism portion in place of the first clutch portion to comprise the clutch portion when so attached. The first clutch portion comprises a first clutch for operatively connecting the input means to a ring gear of the single planetary gear unit and a second clutch for operatively connecting the input means to a sun gear of the automatic transmission mechanism portion. The second clutch portion comprises the first clutch, the second clutch, and a third clutch for operatively connecting the input member to a ring gear of the dual planetary gear unit.

  3. Text Sets.

    ERIC Educational Resources Information Center

    Giorgis, Cyndi; Johnson, Nancy J.

    2002-01-01

    Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)

  4. Video Analytics for Indexing, Summarization and Searching of Video Archives

    SciTech Connect

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

    This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  5. Upper-Intermediate-Level ESL Students' Summarizing in English

    ERIC Educational Resources Information Center

    Vorobel, Oksana; Kim, Deoksoon

    2011-01-01

    This qualitative instrumental case study explores various factors that might influence upper-intermediate-level English as a second language (ESL) students' summarizing from a sociocultural perspective. The study was conducted in a formal classroom setting, during a reading and writing class in the English Language Institute at a university in the

  6. Investigation of Learners' Perceptions for Video Summarization and Recommendation

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Chen, Sherry Y.

    2012-01-01

    Recently, multimedia-based learning is widespread in educational settings. A number of studies investigate how to develop effective techniques to manage a huge volume of video sources, such as summarization and recommendation. However, few studies examine how these techniques affect learners' perceptions in multimedia learning systems. This

  8. A fuzzy ontology and its application to news summarization.

    PubMed

    Lee, Chang-Shing; Jian, Zhi-Wei; Huang, Lin-Kai

    2005-10-01

    In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts, and it is more suitable than the domain ontology for describing domain knowledge when solving uncertainty reasoning problems. First, the domain ontology with the various events of the news is predefined by domain experts. The document preprocessing mechanism generates meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. The meaningful terms are then classified according to the events of the news by the term classifier. The fuzzy inference mechanism generates the membership degrees for each fuzzy concept of the fuzzy ontology; every fuzzy concept has a set of membership degrees associated with the various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter, to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can operate effectively for news summarization. PMID:16240764

  9. Gaze-enabled Egocentric Video Summarization via Constrained Submodular Maximization

    PubMed Central

    Xu, Jia; Mukherjee, Lopamudra; Li, Yin; Warner, Jamieson; Rehg, James M.; Singh, Vikas

    2016-01-01

    With the proliferation of wearable cameras, the number of videos of users documenting their personal lives using such devices is rapidly increasing. Since such videos may span hours, there is an important need for mechanisms that represent the information content in a compact form (i.e., shorter videos which are more easily browsable/sharable). Motivated by these applications, this paper focuses on the problem of egocentric video summarization. Such videos are usually continuous with significant camera shake and other quality issues. Because of these reasons, there is growing consensus that direct application of standard video summarization tools to such data yields unsatisfactory performance. In this paper, we demonstrate that using gaze tracking information (such as fixation and saccade) significantly helps the summarization task. It allows meaningful comparison of different image frames and enables deriving personalized summaries (gaze provides a sense of the camera wearer's intent). We formulate a summarization model which captures common-sense properties of a good summary, and show that it can be solved as a submodular function maximization with partition matroid constraints, opening the door to a rich body of work from combinatorial optimization. We evaluate our approach on a new gaze-enabled egocentric video dataset (over 15 hours), which will be a valuable standalone resource. PMID:26973428
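
    A minimal sketch of the optimization: greedy maximization of a submodular coverage function subject to a partition matroid that allows at most one frame per temporal segment. The toy features and coverage function are assumptions; the paper's gaze-weighted objective is not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)
        frames = rng.random((30, 8))            # toy frame features
        segments = np.repeat(np.arange(6), 5)   # 6 segments of 5 frames each

        def coverage(S):
            # facility-location-style coverage: sum of elementwise maxima
            if not S:
                return 0.0
            return np.maximum.reduce([frames[i] for i in S]).sum()

        summary, used_segments = [], set()
        for _ in range(6):
            best, gain = None, 0.0
            for i in range(len(frames)):
                if i in summary or segments[i] in used_segments:
                    continue  # matroid constraint: one frame per segment
                g = coverage(summary + [i]) - coverage(summary)
                if g > gain:
                    best, gain = i, g
            if best is None:
                break
            summary.append(best)
            used_segments.add(segments[best])
        print(sorted(summary))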

  10. Abstractive Summarization of Drug Dosage Regimens for Supporting Drug Comparison.

    PubMed

    Ugon, Adrien; Berthelot, Hélène; Venot, Alain; Favre, Madeleine; Duclos, Catherine; Lamy, Jean-Baptiste

    2015-01-01

    Complicated dosage regimens often reduce adherence to drug treatments. The ease of administration must thus be taken into account when prescribing. For a given drug, there often exist several dosage regimens; hence, comparison with similar drugs is difficult. Simplifying and summarizing them appears to be a required task for helping general practitioners find the drug with the simplest regimen for the patient. We propose a summarization in two steps: first, prune out all low-importance information; second, fuse the remaining information. Rules for pruning and fusion strategies were designed by an expert in drug models. Evaluation was conducted on a dataset of 169 drugs. The agreement rate was 27.2%. We demonstrate that applying the rules leads to a result that is correct from a computational point of view but often meaningless for the GP. We conclude with recommendations for further work. PMID:26152958

  11. Video summarization and personalization for pervasive mobile devices

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2001-12-01

    We have designed and implemented a video semantic summarization system, which includes an MPEG-7 compliant annotation interface, a semantic summarization middleware, a real-time MPEG-1/2 video transcoder on PCs, and an application interface on color/black-and-white Palm-OS PDAs. We designed a video annotation tool, VideoAnn, to annotate semantic labels associated with video shots. Videos are first segmented into shots based on their visual-audio characteristics. They are played back using an interactive interface, which facilitates and speeds up the annotation process. Users can annotate the video content in units of temporal shots or spatial regions. The annotated results are stored in the MPEG-7 XML format. We also designed and implemented a video transmission system, Universal Tuner, for wireless video streaming. This system transcodes MPEG-1/2 videos or live TV broadcast videos for black-and-white or indexed-color Palm OS devices. In our system, the complexity of the multimedia compression and decompression algorithms is adaptively partitioned between the encoder and decoder. On the client end, users can access the summarized video based on their preferences, time, keywords, as well as the transmission bandwidth and the remaining battery power of the pervasive devices.

  12. Personalized summarization using user preference for m-learning

    NASA Astrophysics Data System (ADS)

    Lee, Sihyoung; Yang, Seungji; Ro, Yong Man; Kim, Hyoung Joong

    2008-02-01

    As Internet and multimedia technology advances, digital multimedia content is also becoming abundant in the learning area. In order to facilitate access to digital knowledge and to meet the need for lifelong learning, e-learning can be a helpful alternative to conventional learning paradigms. E-learning is known as a unifying term for online, web-based and technology-delivered learning. Mobile learning (m-learning) is defined as e-learning through mobile devices using wireless transmission. In a survey, more than half of the respondents remarked that re-consumption was one of the most convenient features of e-learning. However, it is not easy to find a user's preferred segment in a full version of lengthy e-learning content. Especially in m-learning, a content-summarization method is strongly required because mobile devices are limited in processing power and battery capacity. In this paper, we propose a new user preference model for re-consumption, used to construct personalized summaries. The user preference for re-consumption is modeled based on user actions with a statistical model. Based on this model of personalized user actions, our method discriminates preferred parts over the entire content. Experimental results demonstrate successful personalized summarization.

  13. Scientific Text Processing

    NASA Astrophysics Data System (ADS)

    Goossens, Michel; Herwijnen, Eric Van

    Aspects of text processing important for the scientific community are discussed, and an overview of currently available software is presented. Progress on standardization efforts in the area of document exchange (SGML), document formatting (DSSSL), document presentation (SPDL), fonts (ISO 9541) and character codes (Unicode and ISO 10646) is described. An elementary particle naming scheme for use with LATEX and SGML is proposed. LATEX, PostScript, SGML and desk-top publishing allow electronic submission of articles to publishers, and printing on demand. Advantages of standardization are illustrated by the description of a system which can exchange documents between different word processors and automatically extract bibliographic data for a library database.

  14. Applying a sunburst visualization to summarize user navigation sequences.

    PubMed

    Rodden, Kerry

    2014-01-01

    For many Web-based applications, it's important to be able to analyze the paths users have taken through a site--for example, to understand how they're discovering engaging content. These paths are difficult to summarize visually because of the underlying data's complexity. A Google researcher applied a sunburst visualization to this problem, after simplifying the data into a hierarchical format. The resulting visualization was successful in YouTube and is widely referenced and accessed. The code for the visualization is available as open source. PMID:25248198
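
    A minimal sketch of the simplification step the article describes: collapsing raw navigation sequences into a fixed-depth tree of counts, the hierarchical format a sunburst needs. The paths and depth limit are toy values.

        from collections import defaultdict

        def make_node():
            return {"count": 0, "children": defaultdict(make_node)}

        root = make_node()
        paths = [("home", "search", "video"),
                 ("home", "search", "channel"),
                 ("home", "video")]
        MAX_DEPTH = 3  # assumed truncation depth

        for path in paths:
            node = root
            node["count"] += 1
            for step in path[:MAX_DEPTH]:
                node = node["children"][step]
                node["count"] += 1

        # each ring of the sunburst is one level of this tree
        print(root["children"]["home"]["count"])   # -> 3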

  15. Capturing User Reading Behaviors for Personalized Document Summarization

    SciTech Connect

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm produces personalized document summaries superior to those of all the other methods, in that the summaries generated by our algorithm better satisfy a user's personal preferences.
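
    A minimal sketch of preference-weighted scoring under stated assumptions: terms from passages the user dwelt on are boosted, and sentences are ranked by the boosted weights. The dwell times stand in for the captured behaviors (expressions, gaze, durations); the authors' actual preference model is not reproduced.

        from collections import Counter

        read = [("the model compresses video", 9.0),   # (passage, dwell sec)
                ("budget figures for 2011", 1.0)]
        pref = Counter()
        for text, dwell in read:
            for w in text.split():
                pref[w] += dwell          # dwell time boosts term weight

        doc = ["video compression results improved",
               "the annual budget was revised"]
        ranked = sorted(doc, key=lambda s: -sum(pref[w] for w in s.split()))
        print(ranked[0])                  # sentence matching dwelt-on terms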

  16. A Qualitative Study on the Use of Summarizing Strategies in Elementary Education

    ERIC Educational Resources Information Center

    Susar Kirmizi, Fatma; Akkaya, Nevin

    2011-01-01

    The objective of this study is to reveal how well summarizing strategies are used by Grade 4 and Grade 5 students as a reading comprehension strategy. This study was conducted in Buca, Izmir and the document analysis method, a qualitative research strategy, was employed. The study used a text titled "Environmental Pollution" and an "Evaluation

  18. A Graph Summarization Algorithm Based on RFID Logistics

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Hu, Kongfa; Lu, Zhipeng; Zhao, Li; Chen, Ling

    Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. The volume of data generated by a typical RFID application will be enormous, as each item generates a complete history of all the individual locations it occupied at every point in time. The movement trails of such RFID data form a gigantic commodity flow graph representing the locations and durations of the path stages traversed by each item. In this paper, we use a graph to construct a warehouse of RFID commodity flows, and introduce a database-style operation to summarize graphs, which produces a summary graph by grouping nodes based on user-selected node attributes, and further allows users to control the hierarchy of summaries. It can cut down the size of graphs and lets users study just the shrunk graph they are interested in. Through extensive experiments, we demonstrate the effectiveness and efficiency of the proposed method.
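
    A minimal sketch of the attribute-based grouping operation: nodes sharing a selected attribute value collapse into one super-node, and edge counts between groups become the weights of the summary graph. networkx and the toy logistics graph are assumptions.

        import networkx as nx

        G = nx.DiGraph()
        G.add_edge("dock1", "shelfA"); G.add_edge("dock2", "shelfB")
        attr = {"dock1": "dock", "dock2": "dock",
                "shelfA": "shelf", "shelfB": "shelf"}   # selected attribute

        S = nx.DiGraph()
        for u, v in G.edges():
            gu, gv = attr[u], attr[v]
            w = S.get_edge_data(gu, gv, {"weight": 0})["weight"]
            S.add_edge(gu, gv, weight=w + 1)            # aggregate edge count
        print(S.edges(data=True))   # -> [('dock', 'shelf', {'weight': 2})]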

  19. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a minmax-based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms the other leading methods, while maintaining low complexity. PMID:24801112
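
    A minimal sketch of an entropy-of-patches measure in the spirit of the HIP index: quantize small patches and take the Shannon entropy of the resulting distribution, frame by frame, to form a curve. This is an illustrative stand-in, not the authors' exact definition.

        import numpy as np

        def frame_heterogeneity(gray, patch=8, bins=16):
            # entropy of the distribution of quantized patch means
            h, w = gray.shape
            means = [gray[i:i + patch, j:j + patch].mean()
                     for i in range(0, h - patch + 1, patch)
                     for j in range(0, w - patch + 1, patch)]
            hist, _ = np.histogram(means, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        video = np.random.rand(5, 64, 64)       # toy gray frames in [0, 1]
        hip_curve = [frame_heterogeneity(f) for f in video]
        print(hip_curve)                        # per-frame heterogeneity curve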

  20. Automatic transmission

    SciTech Connect

    Ohkubo, M.

    1988-02-16

    An automatic transmission is described combining a stator reversing type torque converter and speed changer having first and second sun gears comprising: (a) a planetary gear train composed of first and second planetary gears sharing one planetary carrier in common; (b) a clutch and requisite brakes to control the planetary gear train; and (c) a speed-increasing or speed-decreasing mechanism is installed both in between a turbine shaft coupled to a turbine of the stator reversing type torque converter and the first sun gear of the speed changer, and in between a stator shaft coupled to a reversing stator and the second sun gear of the speed changer.

  1. Automatic transmission

    SciTech Connect

    Miki, N.

    1988-10-11

    This patent describes an automatic transmission including a fluid torque converter, a first gear unit having three forward-speed gears and a single reverse gear, a second gear unit having a low-speed gear and a high-speed gear, and a hydraulic control system, the hydraulic control system comprising: a source of pressurized fluid; a first shift valve for controlling the shifting between the first-speed gear and the second-speed gear of the first gear unit; a second shift valve for controlling the shifting between the second-speed gear and the third-speed gear of the first gear unit; a third shift valve equipped with a spool having two positions for controlling the shifting between the low-speed gear and the high-speed gear of the second gear unit; a manual selector valve having a plurality of shift positions for distributing the pressurized fluid supply from the source of pressurized fluid to the first, second and third shift valves respectively; first, second and third solenoid valves corresponding to the first, second and third shift valves, respectively for independently controlling the operation of the respective shift valves, thereby establishing a six forward-speed automatic transmission by combining the low-speed gear and the high-speed gear of the second gear unit with each of the first-speed gear, the second speed gear and the third-speed gear of the first gear unit; and means to fixedly position the spool of the third shift valve at one of the two positions by supplying the pressurized fluid to the third shift valve when the manual selector valve is shifted to a particular shift position, thereby locking the second gear unit in one of low-speed gear and the high-speed gear, whereby the six forward-speed automatic transmission is converted to a three forward-speed automatic transmission when the manual selector valve is shifted to the particular shift position.

  2. Automatic transmission

    SciTech Connect

    Aoki, H.

    1989-03-21

    An automatic transmission is described, comprising: a torque converter including an impeller having a connected member, a turbine having an input member, and a reactor; and an automatic transmission mechanism having first to third clutches and plural gear units including a single planetary gear unit with a ring gear and a dual planetary gear unit with a ring gear. The single and dual planetary gear units have respective carriers integrally coupled with each other and respective sun gears integrally coupled with each other, the input member of the turbine being coupled with the ring gear of the single planetary gear unit through the first clutch, and being coupled with the sun gear through the second clutch. The connected member of the impeller is coupled with the ring gear of the dual planetary gear unit; the ring gear of the dual planetary gear unit is made to be restrained as required, and the carrier is coupled with an output member.

  3. Summarization and visualization of target trajectories from massive video archives

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Narasimha, Pramod L.; Topiwala, Pankaj

    2009-05-01

    Video, especially in massive video archives, is by nature a dense information medium. Compactly presenting the activities of targets of interest provides an efficient and cost-saving way to analyze the content of the video. In this paper, we propose a video content analysis system to summarize and visualize the trajectories of targets from massive video archives. We first present an adaptive appearance-based algorithm to robustly track the targets in a particle filtering framework. It provides high performance while facilitating the implementation of the algorithm in hardware with parallel processing. A phase correlation algorithm is used to estimate the motion of the observation platform, which is then compensated in order to extract the independent trajectories of the targets. Based on the trajectory information, we develop an interface for browsing the videos which enables us to directly manipulate the video. The user can scroll over objects to view their trajectories and, if interested, click on an object and drag it along the displayed path, with the actual video played in synchrony with the mouse movement.
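
    A minimal sketch of the platform-motion step: phase correlation between consecutive frames estimates the global translation, which can then be subtracted from target tracks. This is a pure-NumPy FFT version; the particle-filter tracker itself is not shown.

        import numpy as np

        def phase_correlation(a, b):
            # normalized cross-power spectrum; peak location gives the shift
            A, B = np.fft.fft2(a), np.fft.fft2(b)
            R = A * np.conj(B)
            R /= np.abs(R) + 1e-9
            corr = np.fft.ifft2(R).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            h, w = a.shape
            if dy > h // 2: dy -= h   # wrap shifts into the signed range
            if dx > w // 2: dx -= w
            return dy, dx

        prev = np.random.rand(64, 64)
        curr = np.roll(prev, (3, 5), axis=(0, 1))  # simulate platform shift
        print(phase_correlation(curr, prev))       # -> (3, 5)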

  4. Improving text recognition by distinguishing scene and overlay text

    NASA Astrophysics Data System (ADS)

    Quehl, Bernhard; Yang, Haojin; Sack, Harald

    2015-02-01

    Video texts are closely related to the content of a video. They provide a valuable source for indexing and interpretation of video data. Text detection and recognition in images or videos typically distinguishes between overlay and scene text. Overlay text is artificially superimposed on the image at the time of editing, and scene text is text captured by the recording system. Typically, OCR systems are specialized in one kind of text type; however, in video images both types of text can be found. In this paper, we propose a method to automatically distinguish between overlay and scene text in order to dynamically control and optimize the post-processing steps that follow text detection. Based on a feature combination, a Support Vector Machine (SVM) is trained to classify scene and overlay text. We show how this distinction improves the word recognition rate. The accuracy of the proposed method has been evaluated using publicly available test data sets.
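
    A minimal sketch of the classification step, assuming two toy region features (edge contrast and stroke-width variance) in place of the paper's feature combination; the labels and values are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        # rows: [edge_contrast, stroke_width_variance] per detected text box
        X = np.array([[0.9, 0.05], [0.85, 0.08],   # crisp, uniform -> overlay
                      [0.4, 0.60], [0.35, 0.45]])  # noisy, varied  -> scene
        y = np.array([1, 1, 0, 0])                 # 1 = overlay, 0 = scene

        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([[0.8, 0.1]]))           # likely overlay text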

  5. Automatic transmission

    SciTech Connect

    Hamane, M.; Ohri, H.

    1989-03-21

    This patent describes an automatic transmission connected between a drive shaft and a driven shaft and comprising: a planetary gear mechanism including a first gear driven by the drive shaft, a second gear operatively engaged with the first gear to transmit speed change output to the driven shaft, and a third gear operatively engaged with the second gear to control the operation thereof; and centrifugally operated clutch means for driving the first gear and the second gear. It also includes a ratchet type one-way clutch for permitting rotation of the third gear in the same direction as that of the drive shaft while preventing rotation in the reverse direction, the clutch means comprising a ratchet pawl supporting plate coaxially disposed relative to the drive shaft and integrally connected to the third gear, the ratchet pawl supporting plate including outwardly projecting radial projections united with one another at their base portions.

  6. Applying Semantics in Dataset Summarization for Solar Data Ingest Pipelines

    NASA Astrophysics Data System (ADS)

    Michaelis, J.; McGuinness, D. L.; Zednik, S.; West, P.; Fox, P. A.

    2012-12-01

    One goal in studying phenomena of the solar corona (e.g., flares, coronal mass ejections) is to create and refine predictive models of space weather - which have broad implications for terrestrial activity (e.g., communication grid reliability). The High Altitude Observatory (HAO) [1] presently maintains an infrastructure for generating time-series visualizations of the solar corona. Through raw data gathered at the Mauna Loa Solar Observatory (MLSO) in Hawaii, HAO performs follow-up processing and quality control steps to derive visualization sets consumable by scientists. Individual visualizations will acquire several properties during their derivation, including: (i) the source instrument at MLSO used to obtain the raw data, (ii) the time the data was gathered, (iii) processing steps applied by HAO to generate the visualization, and (iv) quality metrics applied over both the raw and processed data. In parallel to MLSO's standard data gathering, time stamped observation logs are maintained by MLSO staff, which covers content of potential relevance to data gathered (such as local weather and instrument conditions). In this setting, while a significant amount of solar data is gathered, only small sections will typically be of interest to consuming parties. Additionally, direct presentation of solar data collections could overwhelm consumers (particularly those with limited background in the data structuring). This work explores how multidimensional analysis based navigation can be used to generate summary views of data collections, based on two operations: (i) grouping visualization entries based on similarity metrics (e.g., data gathered between 23:15-23:30 6-21-2012), or (ii) filtering entries (e.g., data with a quality score of UGLY, on a scale of GOOD, BAD, or UGLY). Here, semantic encodings of solar visualization collections (based on the Resource Description Framework (RDF) Datacube vocabulary [2]) are being utilized, based on the flexibility of the RDF model for supporting the following use cases: (i) Temporal alignment of time-stamped MLSO observations with raw data gathered at MLSO. (ii) Linking of multiple visualization entries to common (and structurally complex) workflow structures - designed to capture the visualization generation process. To provide real-world use cases for the described approach, a semantic summarization system is being developed for data gathered from HAO's Coronal Multi-channel Polarimeter (CoMP) and Chromospheric Helium-I Imaging Photometer (CHIP) pipelines. Web Links: [1] http://mlso.hao.ucar.edu/ [2] http://www.w3.org/TR/vocab-data-cube/

  7. A novel tool for assessing and summarizing the built environment

    PubMed Central

    2012-01-01

    Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying scales of community and describe the built environment at those scales. Our collaboration with community partners at all stages of the tool development, data collection, and dissemination of results provides a model for engaging the community in an active research program. PMID:23075269

  8. Recent progress in automatically extracting information from the pharmacogenomic literature

    PubMed Central

    Garten, Yael; Coulet, Adrien; Altman, Russ B

    2011-01-01

    The biomedical literature holds our understanding of pharmacogenomics, but it is dispersed across many journals. In order to integrate our knowledge, connect important facts across publications and generate new hypotheses, we must organize and encode the contents of the literature. By creating databases of structured pharmacogenomic knowledge, we can make the value of the literature much greater than the sum of the individual reports. We can, for example, generate candidate gene lists or interpret surprising hits in genome-wide association studies. Text mining automatically adds structure to the unstructured knowledge embedded in millions of publications, and recent years have seen a surge in work on biomedical text mining, some specific to pharmacogenomics literature. These methods enable extraction of specific types of information and can also provide answers to general, systemic queries. In this article, we describe the main tasks of text mining in the context of pharmacogenomics, summarize recent applications and anticipate the next phase of text mining applications. PMID:21047206

  9. Automatic transmission

    SciTech Connect

    Miura, M.; Inuzuka, T.

    1986-08-26

    1. An automatic transmission with four forward speeds and one reverse position is described which consists of: an input shaft; an output member; first and second planetary gear sets each having a sun gear, a ring gear and a carrier supporting a pinion in mesh with the sun gear and ring gear; the carrier of the first gear set, the ring gear of the second gear set and the output member all being connected; the ring gear of the first gear set connected to the carrier of the second gear set; a first clutch means for selectively connecting the input shaft to the sun gear of the first gear set, including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a second clutch means for selectively connecting the input shaft to the sun gear of the second gear set; a third clutch means for selectively connecting the input shaft to the carrier of the second gear set, including friction elements, a piston selectively engaging the friction elements and a fluid servo in which hydraulic fluid is selectively supplied to the piston; a first drive-establishing means for selectively preventing rotation of the ring gear of the first gear set and the carrier of the second gear set in only one direction and, alternatively, in any direction; a second drive-establishing means for selectively preventing rotation of the sun gear of the second gear set; and a drum being open to the first planetary gear set, with a cylindrical intermediate wall, an inner peripheral wall and outer peripheral wall and forming the hydraulic servos of the first and third clutch means between the intermediate wall and the inner peripheral wall and between the intermediate wall and the outer peripheral wall respectively.

  10. Automatic Informative Abstracting and Extracting. Annual Report.

    ERIC Educational Resources Information Center

    Earl, L.L.; Robison, H.R.

    This fourth annual report summarizes the investigation of (1) a "sentence dictionary" and (2) a "word government dictionary" for use in automatic abstracting and extracting systems. The theory behind the sentence dictionary and its compilation is that a separation of significant from nonsignificant sentences can be accomplished on the basis of

  11. Text Mining for Neuroscience

    NASA Astrophysics Data System (ADS)

    Tirupattur, Naveen; Lapish, Christopher C.; Mukhopadhyay, Snehasis

    2011-06-01

    Text mining, sometimes alternately referred to as text analytics, refers to the process of extracting high-quality knowledge from the analysis of textual data. Text mining has a wide variety of applications in areas such as biomedical science, news analysis, and homeland security. In this paper, we describe an approach and some relatively small-scale experiments which apply text mining to neuroscience research literature to find novel associations among a diverse set of entities. Neuroscience is a discipline which encompasses an exceptionally wide range of experimental approaches and rapidly growing interest. This combination results in an overwhelmingly large and often diffuse literature which makes a comprehensive synthesis difficult. Understanding the relations or associations among the entities appearing in the literature not only improves the researcher's current understanding of recent advances in their field, but also provides an important computational tool to formulate novel hypotheses and thereby assist in scientific discoveries. We describe a methodology to automatically mine the literature and form novel associations through direct analysis of published texts. The method first retrieves a set of documents from databases such as PubMed using a set of relevant domain terms. In the current study these terms yielded document sets ranging from 160,909 to 367,214 documents. Each document is then represented in a numerical vector form from which an Association Graph is computed which represents relationships between all pairs of domain terms, based on co-occurrence. Association graphs can then be subjected to various graph theoretic algorithms such as transitive closure and cycle (circuit) detection to derive additional information, and can also be visually presented to a human researcher for understanding. In this paper, we present three relatively small-scale problem-specific case studies to demonstrate that such an approach is very successful in replicating a neuroscience expert's mental model of object-object associations entirely by means of text mining. These preliminary results provide the confidence that this type of text mining based research approach provides an extremely powerful tool to better understand the literature and drive novel discovery for the neuroscience community.
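
    A toy Python sketch of the co-occurrence step described above; the terms and documents are invented, whereas real association graphs would be built from PubMed retrieval results:

      from itertools import combinations
      from collections import Counter

      # each document reduced to its set of domain terms
      docs = [
          {"dopamine", "prefrontal cortex", "addiction"},
          {"dopamine", "addiction"},
          {"prefrontal cortex", "working memory"},
      ]

      assoc = Counter()
      for terms in docs:
          for a, b in combinations(sorted(terms), 2):
              assoc[(a, b)] += 1          # undirected co-occurrence edge

      # rank term pairs by association strength
      for pair, weight in assoc.most_common(3):
          print(pair, weight)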

  12. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights some evaluation methods to assess the quality characteristics of the text entities. The main concepts used in building and evaluation processes of the text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The evaluation process for automatic evaluation of the text entities is

  13. Comprehending Expository Text.

    ERIC Educational Resources Information Center

    DeLisi, Mary Beth

    This study examined the effects of training community college students in two reading strategies, self-questioning and summarization, on their comprehension and retention of expository material. Eight developmental reading students from a community college in central New Jersey were taught the strategies of summarization and self-questioning and

  14. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology

    ERIC Educational Resources Information Center

    Dansereau, Jules

    1978-01-01

    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  15. Automatic transmission adapter kit

    SciTech Connect

    Stich, R.L.; Neal, W.D.

    1987-02-10

    This patent describes, in a four-wheel-drive vehicle apparatus having a power train including an automatic transmission and a transfer case, an automatic transmission adapter kit for installation of a replacement automatic transmission of shorter length than an original automatic transmission in the four-wheel-drive vehicle. The adapter kit comprises: an extension housing interposed between the replacement automatic transmission and the transfer case; an output shaft, having a first end which engages the replacement automatic transmission and a second end which engages the transfer case; first sealing means for sealing between the extension housing and the replacement automatic transmission; second sealing means for sealing between the extension housing and the transfer case; and fastening means for connecting the extension housing between the replacement automatic transmission and the transfer case.

  16. LED automatic grader

    NASA Astrophysics Data System (ADS)

    Jin, Shangzhong

    1998-08-01

    An automatic grader that tests the light intensity and forward voltage of LEDs is presented in this paper. It mainly includes three parts: automatic conveying, automatic testing, and automatic classification of LEDs. Automatic conveying is handled by an industrial control computer that drives a vibration tray. The light intensity and forward voltage of each lighted LED are measured and compared with preset grade values to determine the LED's grade. The LED is then delivered to the corresponding bin by the computer through pneumatic components and a synchronizing motor.

  17. Thesaurus-Based Automatic Book Indexing.

    ERIC Educational Resources Information Center

    Dillon, Martin

    1982-01-01

    Describes technique for automatic book indexing requiring dictionary of terms with text strings that count as instances of term and text in form suitable for processing by text formatter. Results of experimental application to portion of book text are presented, including measures of precision and recall. Ten references are noted. (EJS)

  18. An anatomy of automatism.

    PubMed

    Mackay, R D

    2015-07-01

    The automatism defence has been described as a quagmire of law and as presenting an intractable problem. Why is this so? This paper will analyse and explore the current legal position on automatism. In so doing, it will identify the problems which the case law has created, including the distinction between sane and insane automatism and the status of the 'external factor doctrine', and comment briefly on recent reform proposals. PMID:26378105

  19. Automatic differentiation bibliography

    SciTech Connect

    Corliss, G.F.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  20. Application of nonlinear transformations to automatic flight control

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Su, R.; Hunt, L. R.

    1984-01-01

    The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.

  1. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
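
    AutoClass itself performs Bayesian model comparison over class descriptions; as a loose analogue only (not the AutoClass algorithm), the following Python sketch chooses the number of mixture classes by minimizing BIC with scikit-learn:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # two well-separated synthetic clusters in 2-D
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

      # fit mixtures with 1..5 classes and keep the one with the lowest BIC,
      # a stand-in for the Bayesian model comparison AutoClass performs
      best = min(
          (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)),
          key=lambda m: m.bic(X),
      )
      print("chosen number of classes:", best.n_components)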

  2. Text documents as social networks

    NASA Astrophysics Data System (ADS)

    Balinsky, Helen; Balinsky, Alexander; Simske, Steven J.

    2012-03-01

    The extraction of keywords and features is a fundamental problem in text data mining. Document processing applications directly depend on the quality and speed of the identification of salient terms and phrases. Applications as disparate as automatic document classification, information visualization, filtering and security policy enforcement all rely on the quality of automatically extracted keywords. Recently, a novel approach to rapid change detection in data streams and documents has been developed. It is based on ideas from image processing and in particular on the Helmholtz Principle from the Gestalt Theory of human perception. By modeling a document as a one-parameter family of graphs with its sentences or paragraphs defining the vertex set and with edges defined by Helmholtz's principle, we demonstrated that for some range of the parameters, the resulting graph becomes a small-world network. In this article we investigate the natural orientation of edges in such small world networks. For two connected sentences, we can say which one is the first and which one is the second, according to their position in a document. This will make such a graph look like a small WWW-type network and PageRank type algorithms will produce interesting ranking of nodes in such a document.
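
    A small Python sketch of the ranking idea: edges between sentences are oriented by document order and a basic PageRank iteration is run over them. Which sentence pairs receive edges is, in the paper, decided by the Helmholtz principle; here the edge list is simply invented:

      import numpy as np

      n = 4                                  # sentences s0..s3
      edges = [(0, 2), (1, 3), (0, 3)]       # earlier sentence -> later sentence

      M = np.zeros((n, n))
      for i, j in edges:
          M[j, i] = 1.0
      col = M.sum(axis=0)
      M[:, col > 0] /= col[col > 0]          # column-normalize out-links

      d, r = 0.85, np.full(n, 1.0 / n)       # damping factor, initial ranks
      for _ in range(50):                    # power iteration
          r = (1 - d) / n + d * (M @ r)
      print(r.round(3))                      # PageRank-style score per sentence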

  3. Automatic fireplace damper

    SciTech Connect

    Szwartz, H.S.

    1981-06-16

    This device provides a means of automatically detecting smoke in a room, through an ionization chamber, before it is noticed by humans. It consists primarily of an automatic control unit which, through a small servomotor, will open the fireplace damper in step fashion or close it to conserve heat. It further includes a manual override switch for use when desired.

  4. Structuring Lecture Videos by Automatic Projection Screen Localization and Analysis.

    PubMed

    Li, Kai; Wang, Jue; Wang, Haoqian; Dai, Qionghai

    2015-06-01

    We present a fully automatic system for extracting the semantic structure of a typical academic presentation video, which captures the whole presentation stage with abundant camera motions such as panning, tilting, and zooming. Our system automatically detects and tracks both the projection screen and the presenter whenever they are visible in the video. By analyzing the image content of the tracked screen region, our system is able to detect slide progressions and extract a high-quality, non-occluded, geometrically-compensated image for each slide, resulting in a list of representative images that reconstruct the main presentation structure. Afterwards, our system recognizes text content and extracts keywords from the slides, which can be used for keyword-based video retrieval and browsing. Experimental results show that our system is able to generate more stable and accurate screen localization results than commonly-used object tracking methods. Our system also extracts more accurate presentation structures than general video summarization methods, for this specific type of video. PMID:26357345

  5. Writing Home/Decolonizing Text(s)

    ERIC Educational Resources Information Center

    Asher, Nina

    2009-01-01

    The article draws on postcolonial and feminist theories, combined with critical reflection and autobiography, and argues for generating decolonizing texts as one way to write and reclaim home in a postcolonial world. Colonizers leave home to seek power and control elsewhere, and the colonized suffer loss of home as they know it. This dislocation

  6. Metadata extraction using text mining.

    PubMed

    Seth, Shivani; Rüping, Stefan; Wrobel, Stefan

    2009-01-01

    Grid technologies have proven to be very successful in the area of eScience, and healthcare in particular, because they make it easy to combine proven solutions for data querying, integration, and analysis into a secure, scalable framework. In order to integrate the services that implement these solutions into a given Grid architecture, some metadata is required, for example information about the low-level access to these services, security information, and some documentation for the user. In this paper, we investigate how relevant metadata can be extracted from a semi-structured textual documentation of the algorithm that is underlying the service, by the use of text mining methods. In particular, we investigate the semi-automatic conversion of functions of the statistical environment R into Grid services, as implemented by the GridR tool, through the generation of appropriate metadata. PMID:19593048

  7. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  8. Automatic amino acid analyzer

    NASA Technical Reports Server (NTRS)

    Berdahl, B. J.; Carle, G. C.; Oyama, V. I.

    1971-01-01

    Analyzer operates unattended for up to 15 hours. It has an automatic sample injection system and can be programmed. All fluid-flow valve switching is accomplished pneumatically from miniature three-way solenoid pilot valves.

  9. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  10. Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.

    2003-01-01

    Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can

  11. Reviewing Text Mining : Textual Data Mining

    NASA Astrophysics Data System (ADS)

    Yasuda, Akio

    The objective of this paper is to give an overview of text mining, or textual data mining, in Japan from a practical perspective. Text mining is a technology for analyzing large volumes of textual data, applying various parameters, in order to extract useful knowledge and information. The essence of mining is "the discovery of knowledge or information." The target of text mining is to objectively discover and extract knowledge, facts, and meaningful relationships from text documents. This paper summarizes the related disciplines and application fields involved in text mining, and introduces features and application examples of text mining tools.

  12. Theory and implementation of summarization: Improving sensor interpretation for spacecraft operations

    NASA Astrophysics Data System (ADS)

    Swartwout, Michael Alden

    New paradigms in space missions require radical changes in spacecraft operations. In the past, operations were insulated from competitive pressures of cost, quality and time by system infrastructures, technological limitations and historical precedent. However, modern demands now require that operations meet competitive performance goals. One target for improvement is the telemetry downlink, where significant resources are invested to acquire thousands of measurements for human interpretation. This cost-intensive method is used because conventional operations are not based on formal methodologies but on experiential reasoning and incrementally adapted procedures. Therefore, to improve the telemetry downlink it is first necessary to invent a rational framework for discussing operations. This research explores operations as a feedback control problem, develops the conceptual basis for the use of spacecraft telemetry, and presents a method to improve performance. The method is called summarization, a process to make vehicle data more useful to operators. Summarization enables rational trades for telemetry downlink by defining and quantitatively ranking these elements: all operational decisions, the knowledge needed to inform each decision, and all possible sensor mappings to acquire that knowledge. Summarization methods were implemented for the Sapphire microsatellite; conceptual health management and system models were developed and a degree-of-observability metric was defined. An automated tool was created to generate summarization methods from these models. Methods generated using a Sapphire model were compared against the conventional operations plan. Summarization was shown to identify the key decisions and isolate the most appropriate sensors. Secondly, a form of summarization called beacon monitoring was experimentally verified. Beacon monitoring automates the anomaly detection and notification tasks and migrates these responsibilities to the space segment. A set of experiments using Sapphire demonstrated significant cost and time savings compared to conventional operations. Summarization is based on rational concepts for defining and understanding operations. Therefore, it enables additional trade studies that were formerly not possible and also can form the basis for future detailed research into spacecraft operations.
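
    A toy reading of the ranking idea in Python, with all decision, knowledge, and sensor names invented: score each sensor by how many decision-relevant knowledge items it can supply, so that the most informative sensors for the downlink can be isolated:

      # each operational decision needs certain knowledge items
      decisions = {
          "safe_mode?":   {"bus_voltage", "cpu_temp"},
          "downlink_ok?": {"tx_power", "bus_voltage"},
      }
      # each sensor supplies some knowledge items
      sensors = {
          "volt_A":  {"bus_voltage"},
          "therm_3": {"cpu_temp"},
          "rf_mon":  {"tx_power"},
      }

      score = {
          s: sum(len(provides & needs) for needs in decisions.values())
          for s, provides in sensors.items()
      }
      print(sorted(score.items(), key=lambda kv: -kv[1]))  # most informative first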

  13. Clinicians' evaluation of computer-assisted medication summarization of electronic medical records.

    PubMed

    Zhu, Xinxin; Cimino, James J

    2015-04-01

    Each year thousands of patients die of avoidable medication errors. When a patient is admitted to, transferred within, or discharged from a clinical facility, clinicians should review previous medication orders, current orders and future plans for care, and reconcile differences if there are any. If medication reconciliation is not accurate and systematic, medication errors such as omissions, duplications, dosing errors, or drug interactions may occur and cause harm. Computer-assisted medication applications showed promise as an intervention to reduce medication summarization inaccuracies and thus avoidable medication errors. In this study, a computer-assisted medication summarization application, designed to abstract and represent multi-source time-oriented medication data, was introduced to assist clinicians with their medication reconciliation processes. An evaluation study was carried out to assess clinical usefulness and analyze potential impact of such application. Both quantitative and qualitative methods were applied to measure clinicians' performance efficiency and inaccuracy in medication summarization process with and without the intervention of computer-assisted medication application. Clinicians' feedback indicated the feasibility of integrating such a medication summarization tool into clinical practice workflow as a complementary addition to existing electronic health record systems. The result of the study showed potential to improve efficiency and reduce inaccuracy in clinician performance of medication summarization, which could in turn improve care efficiency, quality of care, and patient safety. PMID:24393492

  14. Clinicians’ Evaluation of Computer-Assisted Medication Summarization of Electronic Medical Records

    PubMed Central

    Zhu, Xinxin; Cimino, James J.

    2014-01-01

    Each year thousands of patients die of avoidable medication errors. When a patient is admitted to, transferred within, or discharged from a clinical facility, clinicians should review previous medication orders, current orders and future plans for care, and reconcile differences if there are any. If medication reconciliation is not accurate and systematic, medication errors such as omissions, duplications, dosing errors, or drug interactions may occur and cause harm. Computer-assisted medication applications showed promise as an intervention to reduce medication summarization inaccuracies and thus avoidable medication errors. In this study, a computer-assisted medication summarization application, designed to abstract and represent multi-source time-oriented medication data, was introduced to assist clinicians with their medication reconciliation processes. An evaluation study was carried out to assess clinical usefulness and analyze potential impact of such application. Both quantitative and qualitative methods were applied to measure clinicians' performance efficiency and inaccuracy in medication summarization process with and without the intervention of computer-assisted medication application. Clinicians' feedback indicated the feasibility of integrating such a medication summarization tool into clinical practice workflow as a complementary addition to existing electronic health record systems. The result of the study showed potential to improve efficiency and reduce inaccuracy in clinician performance of medication summarization, which could in turn improve care efficiency, quality of care, and patient safety. PMID:24393492

  15. Texting on the Move

    MedlinePLUS

    ... texting is more likely to contribute to car crashes. We know this because police and other authorities ... in the seconds and minutes before a fatal crash. When people text while behind the wheel, they' ...

  16. Text Coherence in Translation

    ERIC Educational Resources Information Center

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can

  17. Texting on the Move

    MedlinePLUS

    ... reckless driving. That may mean a ticket, a lost license, or even jail time if you cause a fatal crash. Tips for Texting It's hard to live without texting. So the best thing to do is manage how and when we text, choosing the right time and place. Here are three ways to make sure your ...

  18. Creating Vocative Texts

    ERIC Educational Resources Information Center

    Nicol, Jennifer J.

    2008-01-01

    Vocative texts are expressive poetic texts that strive to show rather than tell, that communicate felt knowledge, and that appeal to the senses. They are increasingly used by researchers to present qualitative findings, but little has been written about how to create such texts. To this end, excerpts from an inquiry into the experience and meaning

  19. Linguistic Summarization of Video for Fall Detection Using Voxel Person and Fuzzy Logic

    PubMed Central

    Anderson, Derek; Luke, Robert H.; Keller, James M.; Skubic, Marjorie; Rantz, Marilyn; Aud, Myra

    2009-01-01

    In this paper, we present a method for recognizing human activity from linguistic summarizations of temporal fuzzy inference curves representing the states of a three-dimensional object called voxel person. A hierarchy of fuzzy logic is used, where the output from each level is summarized and fed into the next level. We present a two level model for fall detection. The first level infers the states of the person at each image. The second level operates on linguistic summarizations of voxel person’s states and inference regarding activity is performed. The rules used for fall detection were designed under the supervision of nurses to ensure that they reflect the manner in which elders perform these activities. The proposed framework is extremely flexible. Rules can be modified, added, or removed, allowing for per-resident customization based on knowledge about their cognitive and physical ability. PMID:20046216
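
    A toy two-level sketch in Python, far simpler than the paper's hierarchy and with an invented membership function and thresholds: level one infers a per-frame state membership, and level two summarizes the memberships over a time window:

      def mu_on_ground(height_m):
          # level-1 fuzzy membership for the state "on the ground"
          return max(0.0, min(1.0, (0.6 - height_m) / 0.6))

      frames = [1.7, 1.6, 0.9, 0.2, 0.15, 0.1]   # voxel-person height per frame

      on_ground = [mu_on_ground(h) for h in frames]
      # level-2 linguistic summary: fraction of frames mostly on the ground
      fall_score = sum(m > 0.5 for m in on_ground) / len(on_ground)
      print(f"fall membership over window: {fall_score:.2f}")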

  20. Voice disguise and automatic speaker recognition.

    PubMed

    Zhang, Cuiling; Tan, Tiejun

    2008-03-01

    In this paper a newly developed Forensic Automatic Speaker Recognition System (FASRS) was introduced and the effect of 10 types of voice disguises that are common in forensic casework on the performance of this system was studied. In this study 10 types of disguised voices and normal voices from 20 male college students were used as test samples. Each disguised voice was compared with all normal voices in the database to make speaker identification and speaker verification. The result of speaker recognition is summarized and the influence of voice disguises on the FASRS is evaluated. PMID:17646071

  1. Improving Text Recall with Multiple Summaries

    ERIC Educational Resources Information Center

    van der Meij, Hans; van der Meij, Jan

    2012-01-01

    Background. QuikScan (QS) is an innovative design that aims to improve accessibility, comprehensibility, and subsequent recall of expository text by means of frequent within-document summaries that are formatted as numbered list items. The numbers in the QS summaries correspond to numbers placed in the body of the document where the summarized

  2. Automatic recording spectroradiometer system.

    PubMed

    Heaps, W L

    1971-09-01

    A versatile, mobile, automatic recording spectroradiometer of high precision and accuracy has been developed. The instrument is a single-beam device with an alternate reference beam intended primarily for measurements of spectral irradiance. However, it is equally useful for measurement of spectral radiance, transmittance, or reflectance. The system is programmed for automatic operation. The output is in the form of an automatic digital recording of both measurements and control data. Instrument operation integrates the following characteristics: wavelength-by-wavelength operation in intervals of 0.1 nm to 50 nm; time-integrated measurements of spectral flux; internal calibration reference source; and monitored signals for wavelength position, test source total output, and photodetector dark current. The system's operating characteristics and specifications have been determined and are set forth here. Performance for three types of sources and correction of measurements to zero-bandpass equivalence is demonstrated. PMID:20111268

  3. Utilizing Marzano's Summarizing and Note Taking Strategies on Seventh Grade Students' Mathematics Performance

    ERIC Educational Resources Information Center

    Jeanmarie-Gardner, Charmaine

    2013-01-01

    A quasi-experimental research study was conducted that investigated the academic impact of utilizing Marzano's summarizing and note taking strategies on mathematic achievement. A sample of seventh graders from a middle school located on Long Island's North Shore was tested to determine whether significant differences existed in mathematic test…

  4. Empirical Analysis of Exploiting Review Helpfulness for Extractive Summarization of Online Reviews

    ERIC Educational Resources Information Center

    Xiong, Wenting; Litman, Diane

    2014-01-01

    We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…

  5. ERIC Annual Report--1989. Summarizing the Accomplishments of the Educational Resources Information Center.

    ERIC Educational Resources Information Center

    Krekeler, Nancy; And Others

    This is the third in a series of annual reports summarizing the activities and accomplishments of the Educational Resources Information Center (ERIC) program, which is funded and managed by the Office of Educational Research and Improvement in the U.S. Department of Education. One of the highlights of 1989 was the establishment of ACCESS ERIC, the

  6. Legal Provisions on Expanded Functions for Dental Hygienists and Assistants. Summarized by State. Second Edition.

    ERIC Educational Resources Information Center

    Johnson, Donald W.; Holz, Frank M.

    This second edition summarizes and interprets, from the pertinent documents of each state, those provisions which establish and regulate the tasks of hygienists and assistants, with special attention given to expanded functions. Information is updated for all jurisdictions through the end of 1973, based chiefly on materials received in response to

  9. Multi-document Summarization of Dissertation Abstracts Using a Variable-Based Framework.

    ERIC Educational Resources Information Center

    Ou, Shiyan; Khoo, Christopher S. G.; Goh, Dion H.

    2003-01-01

    Proposes a variable-based framework for multi-document summarization of dissertation abstracts in the fields of sociology and psychology that makes use of the macro- and micro-level discourse structure of dissertation abstracts as well as cross-document structure. Provides a list of indicator phrases that denote different aspects of the problem…

  10. Effects on Science Summarization of a Reading Comprehension Intervention for Adolescents with Behavior and Attention Disorders

    ERIC Educational Resources Information Center

    Rogevich, Mary E.; Perin, Dolores

    2008-01-01

    Sixty-three adolescent boys with behavioral disorders (BD), 31 of whom had comorbid attention deficit hyperactivity disorder (ADHD), participated in a self-regulated strategy development intervention called Think Before Reading, Think While Reading, Think After Reading, With Written Summarization (TWA-WS). TWA-WS adapted Linda Mason's TWA

  11. iBIOMES Lite: Summarizing Biomolecular Simulation Data in Limited Settings

    PubMed Central

    2015-01-01

    As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes. PMID:24830957

  12. iBIOMES Lite: summarizing biomolecular simulation data in limited settings.

    PubMed

    Thibault, Julien C; Cheatham, Thomas E; Facelli, Julio C

    2014-06-23

    As the amount of data generated by biomolecular simulations dramatically increases, new tools need to be developed to help manage this data at the individual investigator or small research group level. In this paper, we introduce iBIOMES Lite, a lightweight tool for biomolecular simulation data indexing and summarization. The main goal of iBIOMES Lite is to provide a simple interface to summarize computational experiments in a setting where the user might have limited privileges and limited access to IT resources. A command-line interface allows the user to summarize, publish, and search local simulation data sets. Published data sets are accessible via static hypertext markup language (HTML) pages that summarize the simulation protocols and also display data analysis graphically. The publication process is customized via extensible markup language (XML) descriptors while the HTML summary template is customized through extensible stylesheet language (XSL). iBIOMES Lite was tested on different platforms and at several national computing centers using various data sets generated through classical and quantum molecular dynamics, quantum chemistry, and QM/MM. The associated parsers currently support AMBER, GROMACS, Gaussian, and NWChem data set publication. The code is available at https://github.com/jcvthibault/ibiomes . PMID:24830957

  13. Summarizing Monte Carlo Results in Methodological Research: The Single-Factor, Fixed-Effects ANCOVA Case.

    ERIC Educational Resources Information Center

    Harwell, Michael

    2003-01-01

    Used meta analytic methods to summarize results of Monte Carlo studies of test size and power of the F test in the single-factor, fixed-effects analysis of covariance model, updating and extending narrative reviews of this literature. (SLD)

  14. Making Sense of Texts

    ERIC Educational Resources Information Center

    Harper, Rebecca G.

    2014-01-01

    This article addresses the triadic nature regarding meaning construction of texts. Grounded in Rosenblatt's (1995; 1998; 2004) Transactional Theory, research conducted in an undergraduate Language Arts curriculum course revealed that when presented with unfamiliar texts, students used prior experiences, social interactions, and literary…

  15. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
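
    IFCOMP itself is an IBM OS/VS-era tool; as a modern analogue of the same job, Python's standard difflib module can list the differences between two text files:

      import difflib

      old = ["alpha\n", "beta\n", "gamma\n"]
      new = ["alpha\n", "BETA\n", "gamma\n", "delta\n"]

      # emit the differences between the two texts in unified-diff form
      for line in difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"):
          print(line, end="")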

  16. The Perfect Text.

    ERIC Educational Resources Information Center

    Russo, Ruth

    1998-01-01

    A chemistry teacher describes the elements of the ideal chemistry textbook. The perfect text is focused and helps students draw a coherent whole out of the myriad fragments of information and interpretation. The text would show chemistry as the central science necessary for understanding other sciences and would also root chemistry firmly in the

  17. Solar Energy Project: Text.

    ERIC Educational Resources Information Center

    Tullock, Bruce, Ed.; And Others

    The text is a compilation of background information which should be useful to teachers wishing to obtain some technical information on solar technology. Twenty sections are included which deal with topics ranging from discussion of the sun's composition to the legal implications of using solar energy. The text is intended to provide useful…

  18. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    Fully automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. The machine consists of a temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry, controls, and indicators.

  19. Automaticity of Conceptual Magnitude.

    PubMed

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object's conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggested that different types of magnitude processing and representation share the same core system. PMID:26879153

  20. Automaticity of Conceptual Magnitude

    PubMed Central

    Gliksman, Yarden; Itamar, Shai; Leibovich, Tali; Melman, Yonatan; Henik, Avishai

    2016-01-01

    What is bigger, an elephant or a mouse? This question can be answered without seeing the two animals, since these objects elicit conceptual magnitude. How is an object’s conceptual magnitude processed? It was suggested that conceptual magnitude is automatically processed; namely, irrelevant conceptual magnitude can affect performance when comparing physical magnitudes. The current study further examined this question and aimed to expand the understanding of automaticity of conceptual magnitude. Two different objects were presented and participants were asked to decide which object was larger on the screen (physical magnitude) or in the real world (conceptual magnitude), in separate blocks. By creating congruent (the conceptually larger object was physically larger) and incongruent (the conceptually larger object was physically smaller) pairs of stimuli it was possible to examine the automatic processing of each magnitude. A significant congruity effect was found for both magnitudes. Furthermore, quartile analysis revealed that the congruity was affected similarly by processing time for both magnitudes. These results suggest that the processing of conceptual and physical magnitudes is automatic to the same extent. The results support recent theories suggested that different types of magnitude processing and representation share the same core system. PMID:26879153

  1. AUTOmatic Message PACKing Facility

    Energy Science and Technology Software Center (ESTSC)

    2004-07-01

    AUTOPACK is a library that provides several useful features for programs using the Message Passing Interface (MPI). Features included are: (1) an automatic message packing facility; (2) management of send and receive requests; (3) management of message buffer memory; (4) determination of the number of anticipated messages from a set of arbitrary sends; and (5) deterministic message delivery for testing purposes.
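
    A plain-Python sketch of the packing idea only (no MPI involved, all names invented): buffer many small messages per destination and deliver them as one packed payload:

      from collections import defaultdict

      buffers = defaultdict(list)

      def send(dest, msg):
          buffers[dest].append(msg)          # pack instead of sending immediately

      def flush(deliver):
          for dest, msgs in buffers.items():
              deliver(dest, b"".join(msgs))  # one large message per destination
          buffers.clear()

      send(1, b"a"); send(1, b"b"); send(2, b"c")
      flush(lambda dest, payload: print(dest, payload))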

  2. Automatic Dance Lesson Generation

    ERIC Educational Resources Information Center

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation

  3. Automatic sweep circuit

    DOEpatents

    Keefe, Donald J. (Lemont, IL)

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.

  4. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.

  5. Summarizing scale-free networks based on virtual and real links

    NASA Astrophysics Data System (ADS)

    Bei, Yijun; Lin, Zhen; Chen, Deren

    2016-02-01

    Techniques to summarize and cluster graphs are indispensable to understand the internal characteristics of large complex networks. However, existing methods that analyze graphs mainly focus on aggregating strong-interaction vertices into the same group without considering the node properties, particularly multi-valued attributes. This study aims to develop a unified framework based on the concept of a virtual graph by integrating attributes and structural similarities. We propose a summarizing graph based on virtual and real links (SGVR) approach to aggregate similar nodes in a scale-free graph into k non-overlapping groups based on user-selected attributes considering both virtual links (attributes) and real links (graph structures). An effective data structure called HB-Graph is adopted to adjust the subgroups and optimize the grouping results. Extensive experiments are carried out on actual and synthetic datasets. Results indicate that our proposed method is both effective and efficient.
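
    A toy Python sketch of combining virtual links (shared attribute values) with real links (shared neighbors) into one similarity score; the data, the Jaccard measure, and the weighting are invented for illustration and are not the SGVR algorithm itself:

      attrs = {"a": {"genre": "x"}, "b": {"genre": "x"}, "c": {"genre": "y"}}
      nbrs  = {"a": {"c"}, "b": {"c"}, "c": {"a", "b"}}

      def similarity(u, v, alpha=0.5):
          virt = float(attrs[u] == attrs[v])                              # virtual link
          real = len(nbrs[u] & nbrs[v]) / max(1, len(nbrs[u] | nbrs[v]))  # real links (Jaccard)
          return alpha * virt + (1 - alpha) * real

      pairs = [("a", "b"), ("a", "c"), ("b", "c")]
      print({p: round(similarity(*p), 2) for p in pairs})  # group high-scoring pairs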

  6. XTRN - Automatic Code Generator For C Header Files

    NASA Technical Reports Server (NTRS)

    Pieniazek, Lester A.

    1990-01-01

    Computer program XTRN, Automatic Code Generator for C Header Files, generates "extern" declarations for all globally visible identifiers contained in input C-language code. Generates external declarations by parsing input text according to syntax derived from C. Automatically provides consistent and up-to-date "extern" declarations and alleviates tedium and errors involved in manual approach. Written in C and Unix Shell.
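
    A much-simplified illustration in Python of what such a generator does; XTRN itself parses real C syntax, whereas this regex handles only trivial global definitions:

      import re

      c_source = """
      int counter = 0;
      double scale = 1.5;
      static int hidden = 3;   /* file-local: no extern emitted */
      """

      # match simple global definitions at the start of a line
      pattern = re.compile(r"^(int|double|char|float)\s+(\w+)\s*=", re.M)
      for ctype, name in pattern.findall(c_source):
          print(f"extern {ctype} {name};")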

  7. Disciplinary Variation in Automatic Sublanguage Term Identification.

    ERIC Educational Resources Information Center

    Haas, Stephanie W.

    1997-01-01

    Describes a method for automatically identifying sublanguage (SL) domain terms and revealing the patterns in which they occur in text. By applying this method to abstracts from a variety of disciplines, differences in how SL domain terminology occurs can be discerned. Findings indicate relatively consistent differences between the hard sciences

  8. Automatic Discrimination of Emotion from Spoken Finnish

    ERIC Educational Resources Information Center

    Toivanen, Juhani; Väyrynen, Eero; Seppänen, Tapio

    2004-01-01

    In this paper, experiments on the automatic discrimination of basic emotions from spoken Finnish are described. For the purpose of the study, a large emotional speech corpus of Finnish was collected; 14 professional actors acted as speakers, and simulated four primary emotions when reading out a semantically neutral text. More than 40 prosodic

  9. Fully automatic telemetry data processor

    NASA Technical Reports Server (NTRS)

    Cox, F. B.; Keipert, F. A.; Lee, R. C.

    1968-01-01

    Satellite Telemetry Automatic Reduction System /STARS 2/, a fully automatic computer-controlled telemetry data processor, maximizes data recovery, reduces turnaround time, increases flexibility, and improves operational efficiency. The system incorporates a CDC 3200 computer as its central element.

  10. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
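
    A minimal Python sketch of the 2-D case: a quadtree refines cells only where a caller-supplied criterion asks for it, which is what makes the resulting mesh hierarchical and spatially addressable:

      def refine(cell, depth, needs_refinement):
          x, y, size = cell
          if depth == 0 or not needs_refinement(cell):
              return [cell]                          # leaf element of the mesh
          half = size / 2
          children = [(x, y, half), (x + half, y, half),
                      (x, y + half, half), (x + half, y + half, half)]
          return [leaf for c in children
                  for leaf in refine(c, depth - 1, needs_refinement)]

      # refine only near the origin
      mesh = refine((0.0, 0.0, 1.0), 3, lambda c: c[0] < 0.3 and c[1] < 0.3)
      print(len(mesh), "elements")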

  11. Texts On-Line.

    ERIC Educational Resources Information Center

    Thomas, Jean-Jacques

    1993-01-01

    Maintains that the study of signs is divided between those scholars who use the Saussurian binary sign (semiology) and those who prefer the Peirce tripartite sign (semiotics). Concludes that neither the Saussurian nor Peircian analysis methods can produce a semiotic interpretation based on a hierarchy of the text's various components. (CFR)

  12. Taming the Wild Text

    ERIC Educational Resources Information Center

    Allyn, Pam

    2012-01-01

    As a well-known advocate for promoting wider reading and reading engagement among all children--and founder of a reading program for foster children--Pam Allyn knows that struggling readers often face any printed text with fear and confusion, like Max in the book Where the Wild Things Are. She argues that teachers need to actively create a

  13. Text as Image.

    ERIC Educational Resources Information Center

    Woal, Michael; Corn, Marcia Lynn

    As electronically mediated communication becomes more prevalent, print is regaining the original pictorial qualities which graphemes (written signs) lost when primitive pictographs (or picture writing) and ideographs (simplified graphemes used to communicate ideas as well as to represent objects) evolved into first written, then printed, texts of

  14. Content Based Text Handling.

    ERIC Educational Resources Information Center

    Schwarz, Christoph

    1990-01-01

    Gives an overview of various linguistic software tools in the field of intelligent text handling that are being developed in Germany utilizing artificial intelligence techniques in the field of natural language processing. Syntactical analysis of documents is described and application areas are discussed. (10 references) (LRW)

  16. Polymorphous Perversity in Texts

    ERIC Educational Resources Information Center

    Johnson-Eilola, Johndan

    2012-01-01

    Here's the tricky part: If we teach ourselves and our students that texts are made to be broken apart, remixed, remade, do we lose the polymorphous perversity that brought us pleasure in the first place? Does the pleasure of transgression evaporate when the borders are opened?

  17. Reflections of Older Texts.

    ERIC Educational Resources Information Center

    Reid, Loren

    An overseas teaching assignment in 1961 led one educator to visit St. Patrick's Cathedral in Dublin where he came upon an effigy of Richard Whately and realized that Whately had written a text used in many American universities. The educator especially recalled that Whately had said "Encourage your students." He also wrote that the audience

  18. Health information text characteristics.

    PubMed

    Leroy, Gondy; Eryilmaz, Evren; Laroya, Benjamin T

    2006-01-01

    Millions of people search online for medical text, but these texts are often too complicated to understand. Readability evaluations are mostly based on surface metrics such as character or word counts and sentence syntax, but content is ignored. We compared four types of documents (easy and difficult WebMD documents, patient blogs, and patient educational material) on surface and content-based metrics. The documents differed significantly in reading grade levels and vocabulary used. WebMD pages with high readability also used terminology that was more consumer-friendly. Moreover, difficult documents are harder to understand due to their grammar and word choice and because they discuss more difficult topics. This indicates that we can simplify many documents by focusing on word choice in addition to sentence structure; for difficult documents, however, this may be insufficient. PMID:17238387
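
    As a concrete illustration of the surface metrics discussed above, a readability score such as the Flesch-Kincaid grade level can be computed from word, sentence and syllable counts alone, ignoring content. The sketch below is a minimal Python version with a crude vowel-group syllable heuristic; it is illustrative, not the exact metric set used in the study.

        import re

        def count_syllables(word):
            # Heuristic: each run of consecutive vowels counts as one syllable.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def fk_grade(text):
            # Flesch-Kincaid grade level from surface counts only.
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            n = max(1, len(words))
            return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

        print(round(fk_grade("The patient shows signs of hypertension. Treatment is advised."), 1))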

  19. Automatic inference of indexing rules for MEDLINE

    PubMed Central

    Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent

    2008-01-01

    Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI. PMID:19025687
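
    The rules that ILP induces for this task can be thought of as symbolic conditions over the citation text, each recommending a MeSH heading when its conditions hold. A minimal sketch of applying such rules is shown below; the rules themselves are hypothetical placeholders, not those learned in the paper.

        # Each rule: a set of trigger tokens and the MeSH heading to recommend.
        RULES = [
            ({"myocardial", "infarction"}, "Myocardial Infarction"),
            ({"mice"}, "Animals"),
            ({"randomized", "trial"}, "Randomized Controlled Trials as Topic"),
        ]

        def recommend(citation_text):
            tokens = set(citation_text.lower().split())
            # A rule fires when all of its trigger tokens occur in the citation.
            return [heading for triggers, heading in RULES if triggers <= tokens]

        print(recommend("A randomized trial of aspirin after myocardial infarction"))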

  20. The Texting Principal

    ERIC Educational Resources Information Center

    Kessler, Susan Stone

    2009-01-01

    The author was appointed principal of a large, urban comprehensive high school in spring 2008. One of the first things she had to figure out was how she would develop a connection with her students when there were so many of them--nearly 2,000--and only one of her. Texts may be exchanged more quickly than having a conversation over the phone,…

  1. Happiness in texting times

    PubMed Central

    Hevey, David; Hand, Karen; MacLachlan, Malcolm

    2015-01-01

    Assessing national levels of happiness has become an important research and policy issue in recent years. We examined happiness and satisfaction in Ireland using phone text messaging to collect large-scale longitudinal data from 3,093 members of the general Irish population. For six consecutive weeks, participants' happiness and satisfaction levels were assessed. For four consecutive weeks (weeks 2-5) a different random third of the sample got feedback on the previous week's mean happiness and satisfaction ratings. Text messaging proved a feasible means of assessing happiness and satisfaction, with almost three quarters (73%) of participants completing all assessments. Those who received feedback on the previous week's mean ratings were eight times more likely to complete the subsequent assessments than those not receiving feedback. Providing such feedback data on mean levels of happiness and satisfaction did not systematically bias subsequent ratings either toward or away from these normative anchors. Texting is a simple and effective means to collect population-level happiness and satisfaction data. PMID:26441804

  2. Automatic transmission control method

    SciTech Connect

    Hasegawa, H.; Ishiguro, T.

    1989-07-04

    This patent describes a method of controlling an automatic transmission of an automotive vehicle. The transmission has a gear train which includes a brake for establishing a first lowest speed of the transmission, the brake acting directly on a ring gear which meshes with a pinion, the pinion meshing with a sun gear in a planetary gear train, the ring gear connected with an output member, the sun gear being engageable and disengageable with an input member of the transmission by means of a clutch. The method comprises the steps of: detecting that a shift position of the automatic transmission has been shifted to a neutral range; thereafter introducing hydraulic pressure to the brake if present vehicle velocity is below a predetermined value, whereby the brake is engaged to establish the first lowest speed; and exhausting hydraulic pressure from the brake if present vehicle velocity is higher than a predetermined value, whereby the brake is disengaged.

  3. Automatic Retinal Oximetry

    NASA Astrophysics Data System (ADS)

    Halldorsson, G. H.; Karlsson, R. A.; Hardarson, S. H.; Mura, M. Dalla; Eysteinsson, T.; Beach, J. M.; Stefansson, E.; Benediktsson, J. A.

    2007-10-01

    This paper presents a method for automating the evaluation of hemoglobin oxygen saturation in the retina. This method should prove useful for monitoring ischemic retinal diseases and the effect of treatment. In order to obtain saturation values automatically, spectral images must be registered in pairs, the vessels of the retina located, and measurement points selected. The registration algorithm is based on a data-driven approach that circumvents many of the problems that have plagued previous methods. The vessels are extracted using an algorithm based on morphological profiles and supervised classifiers. Measurement points on retinal arterioles and venules, as well as reference points on the adjacent fundus, are automatically selected. Oxygen saturation values along vessels are averaged to arrive at a more accurate estimate of the retinal vessel oxygen saturation. The system yields reproducible results as well as being sensitive to changes in oxygen saturation.
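
    The saturation computation that such systems automate is often based on two-wavelength photometry: optical densities at an oxygen-sensitive and an isosbestic wavelength yield an optical density ratio (ODR) that maps approximately linearly to saturation. The sketch below assumes that scheme with invented calibration constants; the paper's exact wavelengths and calibration are not reproduced here.

        import numpy as np

        def optical_density(vessel_intensity, fundus_intensity):
            # Vessel darkness relative to the adjacent fundus reference.
            return np.log10(fundus_intensity / vessel_intensity)

        def saturation(v600, f600, v570, f570, a=1.2, b=-1.1):
            # a, b are hypothetical calibration constants; 600 nm is oxygen
            # sensitive, 570 nm is near an isosbestic point of hemoglobin.
            odr = optical_density(v600, f600) / optical_density(v570, f570)
            return a + b * odr

        print(saturation(v600=80.0, f600=120.0, v570=60.0, f570=120.0))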

  4. Automatism and driving offences.

    PubMed

    Rumbold, John

    2013-10-01

    Automatism is a rarely used defence, but it is particularly used for driving offences because many are strict liability offences. Medical evidence is almost always crucial to argue the defence, and it is important to understand the bars that limit the use of automatism so that the important medical issues can be identified. The issue of prior fault is an important public safeguard to ensure that reasonable precautions are taken to prevent accidents. The total loss of control definition is more problematic, especially with disorders of more gradual onset like hypoglycaemic episodes. In these cases the alternative of 'effective loss of control' would be fairer. This article explores several cases, how the criteria were applied to each, and the types of medical assessment required. PMID:24112330

  5. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  6. Automatic carrier acquisition system

    NASA Technical Reports Server (NTRS)

    Bunce, R. C. (Inventor)

    1973-01-01

    An automatic carrier acquisition system for a phase locked loop (PLL) receiver is disclosed. It includes a local oscillator, which sweeps the receiver to tune across the carrier frequency uncertainty range until the carrier crosses the receiver IF reference. Such crossing is detected by an automatic acquisition detector. It receives the IF signal from the receiver as well as the IF reference. It includes a pair of multipliers which multiply the IF signal with the IF reference in phase and in quadrature. The outputs of the multipliers are filtered through bandpass filters and power detected. The output of the power detector has a signal dc component which is optimized with respect to the noise dc level by the selection of the time constants of the filters as a function of the sweep rate of the local oscillator.
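
    A rough software analogue of the detector, under assumed frequencies and filter lengths, multiplies the IF signal by the reference in phase and in quadrature, low-pass filters both products, and sums their powers; the dc component rises as the swept carrier crosses the reference.

        import numpy as np

        fs, f_ref = 1_000_000.0, 10_000.0      # sample rate and IF reference (Hz)
        t = np.arange(0, 0.02, 1 / fs)
        f_sig = f_ref + 40.0                   # carrier near the IF reference
        sig = np.cos(2 * np.pi * f_sig * t) + 0.5 * np.random.randn(t.size)

        i_mix = sig * np.cos(2 * np.pi * f_ref * t)    # in-phase product
        q_mix = sig * np.sin(2 * np.pi * f_ref * t)    # quadrature product

        kernel = np.ones(2000) / 2000                  # crude low-pass filter
        i_lp = np.convolve(i_mix, kernel, mode="same")
        q_lp = np.convolve(q_mix, kernel, mode="same")

        power = i_lp**2 + q_lp**2              # rises when the carrier is near
        print("detection metric:", power.mean())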

  7. Automatic thermal switches

    NASA Technical Reports Server (NTRS)

    Cunningham, J. W.; Wing, L. D.

    1980-01-01

    Two automatic switches control heat flow from one thermally conductive plate to another. One switch permits heat flow to the outside; the other limits heat flow. In one switch, heat on the conductive plate activates a piston that forces a saddle against the plate. Heat carriers then conduct heat to a second plate that radiates it away. After the temperature of the first plate drops, the piston contracts and a spring breaks thermal contact with the plate. In the second switch, the action is reversed.

  8. Automatic flexible endoscope reprocessors.

    PubMed

    Muscarella, L F

    2000-04-01

    Reprocessing medical instruments is a complex and controversial discipline. If all instruments were constructed of materials not damaged by heat, pressure, and moisture, instrument reprocessing would be greatly simplified. As the number of novel and complex instruments entering the market continues to increase, periodic review of the health care facility's instrument reprocessing protocols to ensure their safety and effectiveness is important. This article reviews the advantages and the limitations of automatic flexible endoscope reprocessors. PMID:10683211

  9. Automatic digital image registration

    NASA Technical Reports Server (NTRS)

    Goshtasby, A.; Jain, A. K.; Enslin, W. R.

    1982-01-01

    This paper introduces a general procedure for automatic registration of two images which may have translational, rotational, and scaling differences. This procedure involves (1) segmentation of the images, (2) isolation of dominant objects from the images, (3) determination of corresponding objects in the two images, and (4) estimation of transformation parameters using the centers of gravity of objects as control points. An example is given which uses this technique to register two images which have translational, rotational, and scaling differences.
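
    Step (4) admits a compact least-squares solution when 2-D points are written as complex numbers: a similarity transform is q = a*p + b, where |a| is the scale, arg(a) the rotation, and b the translation. The sketch below recovers these parameters from synthetic control points; it illustrates the estimation step only, not the paper's segmentation and matching stages.

        import numpy as np

        # Object centers of gravity in image 1, encoded as complex numbers.
        p = np.array([10 + 12j, 40 + 15j, 25 + 40j, 60 + 55j])
        a_true, b_true = 1.2 * np.exp(1j * 0.3), 5 + 7j
        q = a_true * p + b_true               # corresponding centers in image 2

        # Solve q = a*p + b for a, b by linear least squares.
        A = np.column_stack([p, np.ones_like(p)])
        (a, b), *_ = np.linalg.lstsq(A, q, rcond=None)
        print("scale:", abs(a), "rotation (rad):", np.angle(a), "shift:", b)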

  10. Comment on se rappelle et on resume des histoires (How We Remember and Summarize Stories)

    ERIC Educational Resources Information Center

    Kintsch, Walter; Van Dijk, Teun A.

    1975-01-01

    Working from theories of text grammar and logic, the authors suggest and tentatively confirm several hypotheses concerning the role of micro- and macro-structures in comprehension and recall of texts. (Text is in French.) (DB)

  11. Formalization and separation: A systematic basis for interpreting approaches to summarizing science for climate policy.

    PubMed

    Sundqvist, Göran; Bohlin, Ingemar; Hermansen, Erlend A T; Yearley, Steven

    2015-06-01

    In studies of environmental issues, the question of how to establish a productive interplay between science and policy is widely debated, especially in relation to climate change. The aim of this article is to advance this discussion and contribute to a better understanding of how science is summarized for policy purposes by bringing together two academic discussions that usually take place in parallel: the question of how to deal with formalization (structuring the procedures for assessing and summarizing research, e.g. by protocols) and separation (maintaining a boundary between science and policy in processes of synthesizing science for policy). Combining the two dimensions, we draw a diagram onto which different initiatives can be mapped. A high degree of formalization and separation are key components of the canonical image of scientific practice. Influential Science and Technology Studies analysts, however, are well known for their critiques of attempts at separation and formalization. Three examples that summarize research for policy purposes are presented and mapped onto the diagram: the Intergovernmental Panel on Climate Change, the European Union's Science for Environment Policy initiative, and the UK Committee on Climate Change. These examples bring out salient differences concerning how formalization and separation are dealt with. Discussing the space opened up by the diagram, as well as the limitations of the attraction to its endpoints, we argue that policy analyses, including much Science and Technology Studies work, are in need of a more nuanced understanding of the two crucial dimensions of formalization and separation. Accordingly, two analytical claims are presented, concerning trajectories, how organizations represented in the diagram move over time, and mismatches, how organizations fail to handle the two dimensions well in practice. PMID:26477199

  12. [The psychopathological text].

    PubMed

    Sauri, J J

    1976-01-01

    Starting from the different epistemological status of looking and hearing in the clinical field, the author stresses the importance of the modalities of approaching the psychopathological text, embodied in its context and conceived as an intersubjective production rather than an individual phenomenon. The author contends that the psychopathological text can be read in three different ways: 1. Informative. First-hand reading provides cumulative information, conditioned by the reader's limitations and imaginary endowment. Likelihood and truth provide its framework while colliding with each other: likelihood is the first way of organizing perceptual data, whereas truth is to be sought behind apparent phenomena. 2. Hermeneutical. Reading through interpretation entails deciphering the text according to certain definite rules. The logic of hermeneutics differs from ordinary linear logic in that it stems from joint intersubjective syntactic and semantic recreation. Interpretation provides a joint signification for the discourse but, by this very fact, it also makes the codes hinge together to create a different universe of signification, allowing the whole process to start anew. This new universe gives rise to a different link between likelihood and truth, making their simultaneous positive values at least possible, if not necessary. 3. Maieutical. When the new universes of signification built up by hermeneutical reading are articulated among themselves and the system of articulations is set forth, another level is attained in which meaning is created. Meaning, according to the author's proposal, is generated by the systems of transformations of the former universes, in both their synchronic and diachronic processes. Thus meaning, being open to any possible combination of transformations, provides the widest range of possibilities for producing sense, and change, in their widest acceptation. PMID:937042

  13. How to Summarize a 6,000-Word Paper in a Six-Minute Video Clip

    PubMed Central

    Vachon, Patrick; Daudelin, Geneviève; Hivon, Myriam

    2013-01-01

    As part of our research team's knowledge transfer and exchange (KTE) efforts, we created a six-minute video clip that summarizes, in plain language, a scientific paper that describes why and how three teams of academic entrepreneurs developed new health technologies. Recognizing that video-based KTE strategies can be a valuable tool for health services and policy researchers, this paper explains the constraints and sources of inspiration that shaped our video production process. Aiming to provide practical guidance, we describe the steps and tools that we used to identify, refine and package the key content of the scientific paper into an original video format. PMID:23968634

  14. Linguistically informed digital fingerprints for text

    NASA Astrophysics Data System (ADS)

    Uzuner, Özlem

    2006-02-01

    Digital fingerprinting, watermarking, and tracking technologies have gained importance in the recent years in response to growing problems such as digital copyright infringement. While fingerprints and watermarks can be generated in many different ways, use of natural language processing for these purposes has so far been limited. Measuring similarity of literary works for automatic copyright infringement detection requires identifying and comparing creative expression of content in documents. In this paper, we present a linguistic approach to automatically fingerprinting novels based on their expression of content. We use natural language processing techniques to generate "expression fingerprints". These fingerprints consist of both syntactic and semantic elements of language, i.e., syntactic and semantic elements of expression. Our experiments indicate that syntactic and semantic elements of expression enable accurate identification of novels and their paraphrases, providing a significant improvement over techniques used in text classification literature for automatic copy recognition. We show that these elements of expression can be used to fingerprint, label, or watermark works; they represent features that are essential to the character of works and that remain fairly consistent in the works even when works are paraphrased. These features can be directly extracted from the contents of the works on demand and can be used to recognize works that would not be correctly identified either in the absence of pre-existing labels or by verbatim-copy detectors.

  15. Recognizing musical text

    NASA Astrophysics Data System (ADS)

    Clarke, Alastair T.; Brown, B. M.; Thorne, M. P.

    1993-08-01

    This paper reports on some recent developments in a software product that recognizes printed music notation. There are a number of computer systems available which assist in the task of printing music; however the full potential of these systems cannot be realized until the musical text has been entered into the computer. It is this problem that we address in this paper. The software we describe, which uses computationally inexpensive methods, is designed to analyze a music score, previously read by a flat bed scanner, and to extract the musical information that it contains. The paper discusses the methods used to recognize the musical text: these involve sampling the image at strategic points and using this information to estimate the musical symbol. It then discusses some hard problems that have been encountered during the course of the research; for example the recognition of chords and note clusters. It also reports on the progress that has been made in solving these problems and concludes with a discussion of work that needs to be undertaken over the next five years in order to transform this research prototype into a commercial product.

  16. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
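
    One simple way to realize library-seeded clustering (an assumption about the implementation, not the paper's exact algorithm) is to initialize k-means centroids at known library spectra, so that the resulting clusters, and hence the summary, correspond to expected materials:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        library = np.array([[0.9, 0.1, 0.3],      # two known material spectra
                            [0.2, 0.8, 0.5]])
        pixels = np.vstack([                      # noisy pixels of each material
            library[0] + 0.05 * rng.standard_normal((100, 3)),
            library[1] + 0.05 * rng.standard_normal((100, 3)),
        ])

        km = KMeans(n_clusters=2, init=library, n_init=1).fit(pixels)
        print(np.bincount(km.labels_))            # pixel count per material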

  17. Toward a Model of Text Comprehension and Production.

    ERIC Educational Resources Information Center

    Kintsch, Walter; Van Dijk, Teun A.

    1978-01-01

    Described is the system of mental operations occurring in text comprehension and in recall and summarization. A processing model is outlined: 1) the meaning elements of a text become organized into a coherent whole, 2) the full meaning of the text is condensed into its gist, and 3) new texts are generated from the comprehension processes.

  18. Monitoring the Implementation of Consultation Planning, Recording, and Summarizing in a Breast Care Center

    PubMed Central

    Belkora, Jeffrey K.; Loth, Meredith K.; Chen, Daniel F.; Chen, Jennifer Y.; Volz, Shelley; Esserman, Laura J.

    2008-01-01

    OBJECTIVE We implemented and monitored a clinical service, Consultation Planning, Recording and Summarizing (CPRS), in which trained facilitators elicit patient questions for doctors, and then audio-record and summarize the doctor-patient consultations. METHODS We trained 8 schedulers to offer CPRS to breast cancer patients making treatment decisions, and trained 14 premedical interns to provide the service. We surveyed a convenience sample of patients regarding their self-efficacy and decisional conflict. We solicited feedback from physicians, schedulers, and CPRS staff on our implementation of CPRS. RESULTS 278 patients used CPRS over the 22-month study period, a utilization rate of 32% of our capacity. Thirty-seven patients responded to surveys, providing pilot data showing improvements in self-efficacy and decisional conflict. Physicians, schedulers, and premedical interns recommended changes in the program's locations; delivery; products; and screening, recruitment and scheduling processes. CONCLUSION Our monitoring of this implementation found elements of success while surfacing recommendations for improvement. PRACTICE IMPLICATIONS We made changes based on study findings. We moved Consultation Planning to conference rooms or telephone sessions; shortened the documents produced by CPRS staff; diverted slack resources to increase recruitment efforts; and obtained a waiver of consent in order to streamline and improve ongoing evaluation. PMID:18755564

  19. Hysteroscopy video summarization and browsing by estimating the physician's attention on video segments.

    PubMed

    Gavião, Wilson; Scharcanski, Jacob; Frahm, Jan-Michael; Pollefeys, Marc

    2012-01-01

    Specialists often need to browse through libraries containing many diagnostic hysteroscopy videos searching for similar cases, or even to review the video of one particular case. Video searching and browsing can be used in many situations, like in case-based diagnosis when videos of previously diagnosed cases are compared, in case referrals, in reviewing the patient records, as well as for supporting medical research (e.g. in human reproduction). However, in terms of visual content, diagnostic hysteroscopy videos contain lots of information, but only a reduced number of frames are actually useful for diagnosis/prognosis purposes. In order to facilitate the browsing task, we propose in this paper a technique for estimating the clinical relevance of video segments in diagnostic hysteroscopies. Basically, the proposed technique associates clinical relevance with the attention attracted by a diagnostic hysteroscopy video segment during the video acquisition (i.e. during the diagnostic hysteroscopy conducted by a specialist). We show that the resulting video summary allows specialists to browse the video contents nonlinearly, while avoiding spending time on spurious visual information. In this work, we review state-of-art methods for summarizing general videos and how they apply to diagnostic hysteroscopy videos (considering their specific characteristics), and conclude that our proposed method contributes to the field with a summarization and representation method specific for video hysteroscopies. The experimental results indicate that our method tends to produce compact video summaries without discarding clinically relevant information. PMID:21920798

  20. Terminology extraction from medical texts in Polish

    PubMed Central

    2014-01-01

    Background Hospital documents contain free text describing the most important facts relating to patients and their illnesses. These documents are written in specific language containing medical terminology related to hospital treatment. Their automatic processing can help in verifying the consistency of hospital documentation and obtaining statistical data. To perform this task we need information on the phrases we are looking for. At the moment, clinical Polish resources are sparse. The existing terminologies, such as Polish Medical Subject Headings (MeSH), do not provide sufficient coverage for clinical tasks. It would be helpful therefore if it were possible to automatically prepare, on the basis of a data sample, an initial set of terms which, after manual verification, could be used for the purpose of information extraction. Results Using a combination of linguistic and statistical methods for processing over 1,200 children's hospital discharge records, we obtained a list of single- and multiword terms used in hospital discharge documents written in Polish. The phrases are ordered according to their presumed importance in domain texts, measured by the frequency of use of a phrase and the variety of its contexts. The evaluation showed that the automatically identified phrases cover about 84% of terms in domain texts. At the top of the ranked list, only 4% out of 400 terms were incorrect, while out of the final 200, 20% of expressions were either not domain related or syntactically incorrect. We also observed that 70% of the obtained terms are not included in the Polish MeSH. Conclusions Automatic terminology extraction can give results which are of a quality high enough to be taken as a starting point for building domain-related terminological dictionaries or ontologies. This approach can be useful for preparing terminological resources for very specific subdomains for which no relevant terminologies already exist. The evaluation performed showed that none of the tested ranking procedures were able to filter out all improperly constructed noun phrases from the top of the list. Careful choice of noun phrases is crucial to the usefulness of the created terminological resource in applications such as lexicon construction or acquisition of semantic relations from texts. PMID:24976943
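
    A simplified version of the ranking idea, scoring each candidate phrase by its frequency weighted by the variety of its contexts, can be sketched as follows; the exact scoring formula here is an illustrative assumption, not the one used in the paper.

        import math
        from collections import defaultdict

        def rank_terms(sentences, candidates):
            freq, contexts = defaultdict(int), defaultdict(set)
            for sent in sentences:
                toks = sent.lower().split()
                for i in range(len(toks)):
                    for j in range(i + 1, min(i + 4, len(toks)) + 1):
                        phrase = " ".join(toks[i:j])
                        if phrase in candidates:
                            freq[phrase] += 1
                            left = toks[i - 1] if i > 0 else "<s>"
                            right = toks[j] if j < len(toks) else "</s>"
                            contexts[phrase].add((left, right))
            # Frequency damped by how many distinct contexts the term occurs in.
            return sorted(candidates, reverse=True,
                          key=lambda t: freq[t] * math.log(1 + len(contexts[t])))

        docs = ["acute bronchitis was treated", "signs of acute bronchitis persisted"]
        print(rank_terms(docs, {"acute bronchitis", "signs"}))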

  1. Reading Text While Driving

    PubMed Central

    Horrey, William J.; Hoffman, Joshua D.

    2015-01-01

    Objective In this study, we investigated how drivers adapt secondary-task initiation and time-sharing behavior when faced with fluctuating driving demands. Background Reading text while driving is particularly detrimental; however, in real-world driving, drivers actively decide when to perform the task. Method In a test track experiment, participants were free to decide when to read messages while driving along a straight road consisting of an area with increased driving demands (demand zone) followed by an area with low demands. A message was made available shortly before the vehicle entered the demand zone. We manipulated the type of driving demands (baseline, narrow lane, pace clock, combined), message format (no message, paragraph, parsed), and the distance from the demand zone when the message was available (near, far). Results In all conditions, drivers started reading messages (driver's first glance to the display) before entering or before leaving the demand zone but tended to wait longer when faced with increased driving demands. While reading messages, drivers looked more or less off road, depending on the type of driving demands. Conclusions For task initiation, drivers avoid transitions from low to high demands; however, they are not discouraged when driving demands are already elevated. Drivers adjust time-sharing behavior according to driving demands while performing secondary tasks. Nonetheless, such adjustment may be less effective when total demands are high. Application This study helps us to understand a driver's role as an active controller in the context of distracted driving and provides insights for developing distraction interventions. PMID:25850162

  2. Motor automaticity in Parkinson's disease.

    PubMed

    Wu, Tao; Hallett, Mark; Chan, Piu

    2015-10-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  3. Automatic Vortex Core Detection

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Haimes, Robert; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    An eigenvector method for vortex identification has been applied to recent numerical and experimental studies in external flow aerodynamics. This paper shows that it is an effective way to extract and visualize features such as vortex cores, spiral vortex breakdowns, and vortex bursts. The algorithm has also been incorporated in a finite element flow solver to guide an automatic mesh refinement program. Results show that this approach can resolve small scale vortical structures in helicopter rotor simulations which are not captured on coarse meshes.
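
    The eigenvector test behind such methods classifies a point as swirling when its velocity gradient tensor has a complex-conjugate eigenvalue pair; the remaining real eigenvector then gives the local core-axis direction. A minimal sketch:

        import numpy as np

        def is_swirling(grad_v):
            # Complex eigenvalues of the velocity gradient indicate rotation.
            eig = np.linalg.eigvals(grad_v)
            return int(np.sum(np.abs(eig.imag) > 1e-12)) == 2

        # Velocity gradient of solid-body rotation about z plus axial stretching.
        J = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 0.2]])
        print(is_swirling(J))   # True; the real eigenvector is the core axis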

  4. Evidence Summarized in Attorneys' Closing Arguments Predicts Acquittals in Criminal Trials of Child Sexual Abuse

    PubMed Central

    Stolzenberg, Stacia N.; Lyon, Thomas D.

    2014-01-01

    Evidence summarized in attorneys' closing arguments of criminal child sexual abuse cases (N = 189) was coded to predict acquittal rates. Ten variables were significant bivariate predictors; five variables significant at p < .01 were entered into a multivariate model. Cases were likely to result in an acquittal when the defendant was not charged with force, the child maintained contact with the defendant after the abuse occurred, or the defense presented a hearsay witness regarding the victim's statements, a witness regarding the victim's character, or a witness regarding another witness's character (usually the mother). The findings suggest that jurors might believe that child molestation is akin to a stereotype of violent rape and that they may be swayed by defense challenges to the victim's credibility and the credibility of those close to the victim. PMID:24920247

  5. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks were compared against a manual fast forward video browsing guided with movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach which we summarize in our recommendations for future video visual analytics systems. PMID:24051778

  6. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted (Lafayette, CA)

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  7. Mining for Surprise Events within Text Streams

    SciTech Connect

    Whitney, Paul D.; Engel, David W.; Cramer, Nicholas O.

    2009-04-30

    This paper summarizes algorithms and analysis methodology for mining the evolving content in text streams. Text streams include news, press releases from organizations, speeches, Internet blogs, etc. These data are a fundamental source for detecting and characterizing strategic intent of individuals and organizations as well as for detecting abrupt or surprising events within communities. Specifically, an analyst may need to know if and when the topic within a text stream changes. Much of the current text feature methodology is focused on understanding and analyzing a single static collection of text documents. Corresponding analytic activities include summarizing the contents of the collection, grouping the documents based on similarity of content, and calculating concise summaries of the resulting groups. The approach reported here focuses on taking advantage of the temporal characteristics in a text stream to identify relevant features (such as change in content), and also on the analysis and algorithmic methodology to communicate these characteristics to a user. We present a variety of algorithms for detecting essential features within a text stream. A critical finding is that the characteristics used to identify features in a text stream are uncorrelated with the characteristics used to identify features in a static document collection. Our approach for communicating the information back to the user is to identify feature (word/phrase) groups. These resulting algorithms form the basis of developing software tools for a user to analyze and understand the content of text streams. We present analysis using both news information and abstracts from technical articles, and show how these algorithms provide understanding of the contents of these text streams.
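
    A toy version of the temporal idea, not the paper's algorithms, is to compare the word distribution of each time window against the previous one and flag windows whose cosine similarity drops sharply:

        import math
        from collections import Counter

        def cosine(c1, c2):
            dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
            norm = math.sqrt(sum(v * v for v in c1.values())) * \
                   math.sqrt(sum(v * v for v in c2.values()))
            return dot / norm if norm else 0.0

        windows = [
            "markets rose on strong earnings reports",
            "earnings season lifted markets again",
            "earthquake strikes coastal region rescue underway",
        ]
        prev = None
        for i, text in enumerate(windows):
            counts = Counter(text.split())
            if prev is not None and cosine(prev, counts) < 0.2:
                print("possible surprise event at window", i)
            prev = counts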

  8. Comparison of automatic control systems

    NASA Technical Reports Server (NTRS)

    Oppelt, W

    1941-01-01

    This report deals with a reciprocal comparison of an automatic pressure control, an automatic rpm control, an automatic temperature control, and an automatic directional control. It shows the difference between the "faultproof" regulator and the actual regulator which is subject to faults, and develops this difference as far as possible in a parallel manner with regard to the control systems under consideration. Such an analysis affords, particularly in its extension to the faults of the actual regulator, a deep insight into the mechanism of the regulator process.

  9. Automatic sets and Delone sets

    NASA Astrophysics Data System (ADS)

    Barbé, A.; von Haeseler, F.

    2004-04-01

    Automatic sets D\\subset{\\bb Z}^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D\\subset{\\bb Z}^m to be a Delone set in {\\bb R}^m . The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.

  10. Automatic document navigation for digital content remastering

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Simske, Steven J.

    2003-12-01

    This paper presents a novel method of automatically adding navigation capabilities to re-mastered electronic books. We first analyze the need for a generic and robust system to automatically construct navigation links into re-mastered books. We then introduce the core algorithm based on text matching for building the links. The proposed method utilizes the tree-structured dictionary and directional graph of the table of contents to efficiently conduct the text matching. Information fusion further increases the robustness of the algorithm. The experimental results on the MIT Press digital library project are discussed and the key functional features of the system are illustrated. We have also investigated how the quality of the OCR engine affects the linking algorithm. In addition, the analogy between this work and Web link mining has been pointed out.
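
    The core matching step can be approximated with fuzzy string comparison: link each table-of-contents entry to the page whose text matches it best, tolerating OCR noise. The sketch below shows only that step; the paper's tree-structured dictionary, directional graph and information fusion are omitted.

        from difflib import SequenceMatcher

        def best_page(toc_entry, pages):
            # Return the index of the page most similar to the TOC entry.
            scores = [SequenceMatcher(None, toc_entry.lower(), p.lower()).ratio()
                      for p in pages]
            return max(range(len(pages)), key=lambda i: scores[i])

        # Note the simulated OCR error ("twoo") that exact matching would miss.
        pages = ["chapter one the early years ...", "chapter twoo the war period ..."]
        print(best_page("Chapter Two: The War Period", pages))   # -> 1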

  11. Text Mining the History of Medicine.

    PubMed

    Thompson, Paul; Batista-Navarro, Riza Theresa; Kontonatsios, Georgios; Carter, Jacob; Toon, Elizabeth; McNaught, John; Timmermann, Carsten; Worboys, Michael; Ananiadou, Sophia

    2016-01-01

    Historical text archives constitute a rich and diverse source of information, which is becoming increasingly readily accessible, due to large-scale digitisation efforts. However, it can be difficult for researchers to explore and search such large volumes of data in an efficient manner. Text mining (TM) methods can help, through their ability to recognise various types of semantic information automatically, e.g., instances of concepts (places, medical conditions, drugs, etc.), synonyms/variant forms of concepts, and relationships holding between concepts (which drugs are used to treat which medical conditions, etc.). TM analysis allows search systems to incorporate functionality such as automatic suggestions of synonyms of user-entered query terms, exploration of different concepts mentioned within search results or isolation of documents in which concepts are related in specific ways. However, applying TM methods to historical text can be challenging, owing to differences and evolutions in vocabulary, terminology, language structure and style compared to more modern text. In this article, we present our efforts to overcome the various challenges faced in the semantic analysis of published historical medical text dating back to the mid-19th century. Firstly, we used evidence from diverse historical medical documents from different periods to develop new resources that provide accounts of the multiple, evolving ways in which concepts, their variants and relationships amongst them may be expressed. These resources were employed to support the development of a modular processing pipeline of TM tools for the robust detection of semantic information in historical medical documents with varying characteristics. We applied the pipeline to two large-scale medical document archives covering wide temporal ranges as the basis for the development of a publicly accessible semantically-oriented search system. The novel resources are available for research purposes, while the processing pipeline and its modules may be used and configured within the Argo TM platform. PMID:26734936
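
    As a small illustration of the resulting search functionality, a query can be expanded with historical synonyms and variant forms before matching; the tiny variant lexicon below is a made-up stand-in for the resources the project built.

        LEXICON = {
            "tuberculosis": ["consumption", "phthisis"],
            "typhoid": ["enteric fever"],
        }

        def search(query, docs):
            variants = [query.lower()] + LEXICON.get(query.lower(), [])
            return [d for d in docs if any(v in d.lower() for v in variants)]

        docs = ["a case of phthisis in a young patient", "notes on enteric fever"]
        print(search("tuberculosis", docs))   # matches the phthisis case report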

  13. Summarizing and visualizing structural changes during the evolution of biomedical ontologies using a Diff Abstraction Network.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Haendel, Melissa; Brush, Matthew; Arabandi, Sivaram; Tu, Samson

    2015-08-01

    Biomedical ontologies are a critical component in biomedical research and practice. As an ontology evolves, its structure and content change in response to additions, deletions and updates. When editing a biomedical ontology, small local updates may affect large portions of the ontology, leading to unintended and potentially erroneous changes. Such unwanted side effects often go unnoticed since biomedical ontologies are large and complex knowledge structures. Abstraction networks, which provide compact summaries of an ontology's content and structure, have been used to uncover structural irregularities, inconsistencies and errors in ontologies. In this paper, we introduce Diff Abstraction Networks ("Diff AbNs"), compact networks that summarize and visualize global structural changes due to ontology editing operations that result in a new ontology release. A Diff AbN can be used to support curators in identifying unintended and unwanted ontology changes. The derivation of two Diff AbNs, the Diff Area Taxonomy and the Diff Partial-area Taxonomy, is explained and Diff Partial-area Taxonomies are derived and analyzed for the Ontology of Clinical Research, Sleep Domain Ontology, and eagle-i Research Resource Ontology. Diff Taxonomy usage for identifying unintended erroneous consequences of quality assurance and ontology merging are demonstrated. PMID:26048076
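
    The underlying diff idea can be sketched by representing each release as a set of is-a edges and summarizing what was added and removed; the real Diff Abstraction Networks go further and group such changes into area and partial-area nodes.

        # Hypothetical is-a edges (child, parent) from two ontology releases.
        old = {("Melanoma", "Skin Tumor"), ("Skin Tumor", "Tumor")}
        new = {("Melanoma", "Malignant Skin Tumor"),
               ("Malignant Skin Tumor", "Tumor"),
               ("Skin Tumor", "Tumor")}

        added, removed = new - old, old - new
        print("added edges:  ", sorted(added))
        print("removed edges:", sorted(removed))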

  14. Development of a Summarized Health Index (SHI) for use in predicting survival in sea turtles.

    PubMed

    Li, Tsung-Hsien; Chang, Chao-Chin; Cheng, I-Jiunn; Lin, Suen-Chuain

    2015-01-01

    Veterinary care plays an influential role in sea turtle rehabilitation, especially in endangered species. Physiological characteristics, such as hematological and plasma biochemistry profiles, are useful references for clinical management in animals, especially during the convalescence period. In this study, factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significantly negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis by a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index for evaluating the health status of sea turtles and improving veterinary care at rehabilitation facilities. PMID:25803431
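
    Operationally, an index of this kind is a weighted sum of the retained predictors compared against the reported cut-off of 2.5244. The sketch below uses hypothetical placeholder coefficients, not the fitted values from the paper, and the direction of the comparison is likewise assumed.

        def shi(buoyancy_disorder, creatinine, uric_acid,
                b0=0.5, b_bd=1.8, b_cr=2.0, b_ua=0.9):
            # b0..b_ua are invented stand-ins for the fitted model coefficients.
            return b0 + b_bd * buoyancy_disorder + b_cr * creatinine + b_ua * uric_acid

        def predicted_to_survive(index, cutoff=2.5244):
            return index < cutoff   # assumed direction: lower index, better health

        turtle = shi(buoyancy_disorder=0, creatinine=0.3, uric_acid=0.8)
        print(round(turtle, 3), predicted_to_survive(turtle))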

  15. Summarizing polygenic risks for complex diseases in a clinical whole genome report

    PubMed Central

    Kong, Sek Won; Lee, In-Hee; Leschiner, Ignaty; Krier, Joel; Kraft, Peter; Rehm, Heidi L.; Green, Robert C.; Kohane, Isaac S.; MacRae, Calum A.

    2015-01-01

    Purpose Disease-causing mutations and pharmacogenomic variants are of primary interest for clinical whole-genome sequencing. However, estimating genetic liability for common complex diseases using established risk alleles might one day prove clinically useful. Methods We compared polygenic scoring methods using a case-control data set with independently discovered risk alleles in the MedSeq Project. For eight traits of clinical relevance in both the primary-care and cardiomyopathy study cohorts, we estimated multiplicative polygenic risk scores using 161 published risk alleles and then normalized using the population median estimated from the 1000 Genomes Project. Results Our polygenic score approach identified the overrepresentation of independently discovered risk alleles in cases as compared with controls using a large-scale genome-wide association study data set. In addition to normalized multiplicative polygenic risk scores and rank in a population, the disease prevalence and proportion of heritability explained by known common risk variants provide important context in the interpretation of modern multilocus disease risk models. Conclusion Our approach in the MedSeq Project demonstrates how complex trait risk variants from an individual genome can be summarized and reported for the general clinician and also highlights the need for definitive clinical studies to obtain reference data for such estimates and to establish clinical utility. PMID:25341114
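
    The multiplicative score described here multiplies each risk allele's odds ratio once per copy carried and then normalizes by the population median; the odds ratios and genotypes below are invented for illustration.

        import numpy as np

        odds_ratios = np.array([1.15, 1.30, 0.90, 1.05])   # per-allele odds ratios
        rng = np.random.default_rng(1)
        population = rng.integers(0, 3, size=(1000, 4))    # risk-allele counts 0..2

        def raw_score(genotype):
            return float(np.prod(odds_ratios ** genotype))

        median = np.median([raw_score(g) for g in population])
        patient = np.array([2, 1, 0, 2])
        print("normalized polygenic risk score:", raw_score(patient) / median)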

  17. Automatic flowmeter calibration system

    NASA Technical Reports Server (NTRS)

    Lisle, R. V.; Wilson, T. L. (inventor)

    1981-01-01

    A system for automatically calibrating the accuracy of a flowmeter is described. The system includes a calculator capable of performing mathematical functions responsive to receiving data signals and function command signals. A prover cylinder is provided for measuring the temperature, pressure, and time required for accumulating a predetermined volume of fluid. Along with these signals, signals representing the temperature and pressure of the fluid going into the meter are fed to a plurality of data registers. Under control of a progress controller, the data registers are read out and the information is fed through a data select circuit to the calculator. Command signals are also produced by a function select circuit and are fed to the calculator, indicating the desired function to be performed. The reading is then compared with the reading produced by the flowmeter.

  18. Automatic damper control

    SciTech Connect

    Perrelli, N.J.

    1981-06-16

    An automatic damper control comprising: damper means for use in a flue or the like; diaphragm means adapted to expand and contract in response to pressure variations in the flue; and sensor means for sensing the pressure in the flue is disclosed. The sensor means is connected to the diaphragm means and comprises means for generating a plurality of beams of light. The sensor means further comprises collimating means adapted to collimate each of the beams of light onto a photoreceptor. The damper control also comprises blocking means for blocking at least one of the beams of light. The blocking means is operatively associated with the diaphragm means and is adapted to be displaced relative to the beams of light in response to movement by the diaphragm means. Drive means are additionally provided for moving the damper means in response to pressure variations in the flue.

  19. Automatic thermal switch

    NASA Technical Reports Server (NTRS)

    Wing, L. D.; Cunningham, J. W. (inventors)

    1981-01-01

    An automatic thermal switch to control heat flow includes a first thermally conductive plate, a second thermally conductive plate and a thermal transfer plate pivotally mounted between the first and second plates. A phase change power unit, including a plunger connected to the transfer plate, is in thermal contact with the first thermally conductive plate. A biasing element, connected to the transfer plate, biases the transfer plate in a predetermined position with respect to the first and second plates. When the phase change power unit is actuated by an increase in heat transmitted through the first plate, the plunger extends and pivots the transfer plate to vary the thermal conduction between the first and second plates through the transfer plate. The biasing element, transfer plate and piston can be arranged to provide either a normally closed or normally open thermally conductive path between the first and second plates.

  20. Automatic geophysical observatories

    NASA Astrophysics Data System (ADS)

    To extend Antarctic science support capacity, the Division of Polar Programs of the National Science Foundation is developing Automatic Geophysical Observatories (AGOs). The AGO program was initiated at the urging of the upper atmospheric physics research community for ionospheric, magnetospheric, and thermospheric studies and aeronomy. However, it is expected that the AGOs will have other uses, such as tropospheric or stratospheric chemistry, seismology, or the collection of meteoric dust.The AGOs will be transportable to almost any site in Antarctica. It is expected that the first two will be deployed in the field during the austral summer of 1990-1991, with two more being deployed each summer up to a total of six AGOs. It is probable that measurements made at AGOs could be complemented by similar measurements at manned stations, and it is encouraged that such possibilities be explored.

  1. Automatic alkaloid removal system.

    PubMed

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid-removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, for removing alkaloid toxicity from Dioscorea hispida (DH) tubers. DH is a poisonous plant whose tubers have been shown in scientific studies to contain a toxic alkaloid constituent, dioscorine; the tubers can only be consumed after the poison is removed. In this experiment, the tubers are blended into powder form before being placed in the machine basket. The user pushes the START button on the machine controller to switch the water pump on, creating a turbulent wave of water in the machine tank. The water stops automatically when the outlet solenoid valve is triggered. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. The controller then automatically triggers the inlet solenoid valve, and fresh water flows into the machine tank until it reaches the desired level, as determined by an ultrasonic sensor. This process is repeated for 7 h, after which a positive result is achieved, shown to be significant according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity and fish survival rate or time. These parameters also give results near or identical to those of control water, and the toxin is assumed to be fully removed when the pH of the DH powder wash water approaches that of the control water. The pH of the control water is about 5.3, the water after this process is 6.0, and the contaminated water before the machine is run is about 3.8, which is too acidic. This automated machine saves time in removing toxicity from DH compared with the traditional method, while requiring less supervision by the user. PMID:24783795

  2. Networked Automatic Optical Telescopes

    NASA Astrophysics Data System (ADS)

    Mattox, J. R.

    2000-05-01

    Many groups around the world are developing automated or robotic optical observatories. The coordinated operation of automated optical telescopes at diverse sites could provide observing prospects which are not otherwise available, e.g., continuous optical photometry without diurnal interruption. Computer control and scheduling also offer the prospect of effective response to transient events such as γ-ray bursts. These telescopes could also serve science education by providing high-quality CCD data for educators and students. The Automatic Telescope Network (ATN) project has been undertaken to promote networking of automated telescopes. A web site is maintained at http://gamma.bu.edu/atn/. The development of such networks will be facilitated by the existence of standards. A set of standard commands for instrument and telescope control systems will allow for the creation of software for an ``observatory control system'' which can be used at any facility that complies with the TCS and ICS standards. There is also a strong need for standards for the specification of observations to be done and for reports on the results and status of observations. A proposed standard for this is the Remote Telescope Markup Language (RTML), which is expected to be described in another poster in this session. It may thus soon be feasible for amateur astronomers to buy all the necessary equipment and software to field an automatic telescope. The owner/operator could make otherwise unused telescope time available to the network in exchange for the utilization of other telescopes in the network --- including occasional utilization of meter-class telescopes with research-grade CCD detectors at good sites.

  3. Injury narrative text classification using factorization model

    PubMed Central

    2015-01-01

    Narrative text is a useful way of identifying injury circumstances from routine emergency department data collections. Automatically classifying narratives with machine learning techniques is promising, as it can reduce the tedious manual classification process. Existing work focuses on Naive Bayes, which does not always offer the best performance. This paper proposes Matrix Factorization approaches along with a learning enhancement process for this task. The results are compared with the performance of various other classification approaches, and the impact of parameter settings on the classification of a medical text dataset is discussed. With the right choice of dimension k, the Non-negative Matrix Factorization based method achieves a 10-fold cross-validation accuracy of 0.93. PMID:26043671
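
    The abstract gives no implementation details; the sketch below illustrates the general approach under stated assumptions — scikit-learn's NMF over TF-IDF features, with a logistic-regression decision in the k-dimensional topic space. The toy corpus, labels, and parameters are invented for illustration and are not the authors' setup.

        from sklearn.decomposition import NMF
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Toy injury-narrative corpus; a real study would use ED data collections.
        texts = ["fell from ladder while painting the house",
                 "burned hand on hot stove while cooking",
                 "slipped on wet floor at the workplace",
                 "cut finger with kitchen knife preparing food"] * 25
        labels = [0, 1, 0, 2] * 25  # e.g. fall / burn / cut circumstance codes

        # Factorize TF-IDF vectors into k latent topics, then classify in that space.
        pipeline = make_pipeline(
            TfidfVectorizer(),
            NMF(n_components=3, init="nndsvd", random_state=0),  # the dimension k
            LogisticRegression(max_iter=1000),
        )
        scores = cross_val_score(pipeline, texts, labels, cv=10)
        print(f"10 CV accuracy: {scores.mean():.2f}")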

  4. Practical vision based degraded text recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Rapid growth and progress in the medical, industrial, security, and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions: surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. The performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization, and segmentation, which enables building a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques, which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published techniques. The system produced impressive OCR accuracies (90% to 93%) using customized systems generated by our development framework in two industrial OCR applications: water bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic alphabet and demonstrated extremely high recognition accuracy (99%) for Arabic license name plate text recognition, with processing times of 10 seconds. The accuracy and run times of the system were compared with conventional and state-of-the-art methods, and the proposed system shows excellent results.

  5. Automatic differentiation of limit functions

    SciTech Connect

    Michelotti, L.

    1993-05-01

    Automatic differentiation can be used to evaluate the derivatives of and set up Taylor series for implicitly defined functions and maps. The author provides several examples of how this works, within the context of the MXYZPTLK class library, and discusses its extension to inverse functions. The techniques of automatic differentiation and differential algebra are rapidly becoming a standard part of accelerator physicists' arsenals.
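
    The abstract names the technique but not its mechanics. As a generic illustration of forward-mode automatic differentiation using dual numbers (this is not the MXYZPTLK library), consider:

        from dataclasses import dataclass

        @dataclass
        class Dual:
            """A number carrying its own derivative: val + der * eps, eps**2 == 0."""
            val: float
            der: float

            def __add__(self, other):
                return Dual(self.val + other.val, self.der + other.der)

            def __mul__(self, other):
                # product rule: (fg)' = f'g + fg'
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)

        def f(x):
            return x * (x + x)  # f(x) = 2x**2, so f'(x) = 4x

        y = f(Dual(3.0, 1.0))   # seed the derivative dx/dx = 1
        print(y.val, y.der)     # 18.0 12.0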

  6. Automatic Coal-Mining System

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1985-01-01

    Coal cutting and removal are done with minimal hazard to people. Automatic coal-mine cutting, transport, and roof-support movement are all performed by automatic machinery. Exposure of people to hazardous conditions is reduced to inspection tours, maintenance, repair, and possibly entry mining.

  7. Assessment of Positive Automatic Cognition.

    ERIC Educational Resources Information Center

    Ingram, Rick E.; Wisnicki, Kathleen S.

    1988-01-01

    Reports on two studies designed to develop and evaluate the Automatic Thoughts Questionnaire-Positive (ATQ-P), a measure of positive automatic thinking that is complementary to the ATQ, a measure of negative thinking in psychopathology. Describes results suggesting that the ATQ-P is a reliable and valid measure of positive thinking. (Author/NB)

  8. Summarizing motion contents of the video clip using moving edge overlaid frame (MEOF)

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Zhang, Yujin

    2001-12-01

    How to quickly and effectively exchange video information with the user is a major task for a video search engine's user interface. In this paper, we propose using a Moving Edge Overlaid Frame (MEOF) image to summarize both the local object motion and the global camera motion of a video clip in a single image. MEOF supplements the motion information that is generally dropped by key frame representations, and it enables faster perception for the user than viewing the actual video. The key technology in our MEOF generation algorithm is global motion estimation (GME). In order to extract a precise global motion model from general video, our GME module operates in two stages: match-based initial GME and gradient-based GME refinement. The GME module also maintains a sprite image that is aligned with the background of each new input frame after the global motion compensation transform. The difference between the aligned sprite and the new frame is used to extract masks that help pick out the moving objects' edges. The sprite is updated with each input frame, and the moving edges are extracted at a constant interval. After all the frames are processed, the extracted moving edges are overlaid on the sprite according to their global motion displacement relative to the sprite and their temporal distance from the last frame, thus creating the MEOF image. Experiments show that the MEOF representation of a video clip helps the user acquire the motion content much faster, and it is also compact enough to serve the needs of online applications.
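
    The paper's two-stage GME algorithm is not reproduced here. As a rough illustration of the underlying idea — compensate camera motion, then take moving-object edges from the residual — here is a sketch built on OpenCV primitives; the function choices and thresholds are assumptions, not the authors' method.

        import cv2
        import numpy as np

        def estimate_global_motion(prev_gray, curr_gray):
            """Match-based stage: fit a robust similarity transform to tracked
            corners, approximating the camera (global) motion between frames."""
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=8)
            nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                         pts, None)
            ok = status.ravel() == 1
            M, _inliers = cv2.estimateAffinePartial2D(pts[ok], nxt[ok],
                                                      method=cv2.RANSAC)
            return M  # 2x3 affine model of the global motion

        def moving_edge_mask(prev_gray, curr_gray, thresh=25):
            """Warp the previous frame by the global motion; the residual
            difference marks object motion, restricted here to edge pixels."""
            M = estimate_global_motion(prev_gray, curr_gray)
            h, w = curr_gray.shape
            aligned = cv2.warpAffine(prev_gray, M, (w, h))
            residual = cv2.absdiff(curr_gray, aligned)
            motion = (residual > thresh).astype(np.uint8) * 255
            return cv2.bitwise_and(cv2.Canny(curr_gray, 100, 200), motion)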

  9. Automatic Evidence Retrieval for Systematic Reviews

    PubMed Central

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G

    2014-01-01

    Background: Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search, and its effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective: Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Methods: Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results: The automatic method correctly identified 633 citations (as a proportion of included citations: recall = 66.7%, F1 score = 79.3%; as a proportion of citations in MAS: recall = 85.5%, F1 score = 91.2%) with high precision (97.7%), and retrieved the full text or abstract for 490 (recall = 82.9%, precision = 92.1%, F1 score = 87.3%) of the 633 correctly retrieved citations. Conclusions: The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews. PMID:25274020
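
    For readers checking the quoted figures, a minimal sketch of how precision, recall, and F1 are computed from retrieved versus reference citation sets; the identifiers are synthetic, chosen only so the output approximates the numbers above.

        def retrieval_metrics(retrieved, relevant):
            """Precision, recall, and F1 for a set of retrieved citations."""
            retrieved, relevant = set(retrieved), set(relevant)
            tp = len(retrieved & relevant)
            precision = tp / len(retrieved) if retrieved else 0.0
            recall = tp / len(relevant) if relevant else 0.0
            f1 = (2 * precision * recall / (precision + recall)
                  if precision + recall else 0.0)
            return precision, recall, f1

        # Shapes of the abstract's numbers: 949 cited articles, 633 correctly
        # identified among roughly 648 retrieved (IDs here are synthetic).
        relevant = set(range(949))
        retrieved = set(range(633)) | set(range(1000, 1015))
        print(retrieval_metrics(retrieved, relevant))  # ~(0.977, 0.667, 0.793)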

  10. Humans in Space: Summarizing the Medico-Biological Results of the Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Risin, Diana; Stepaniak, P. C.; Grounds, D. J.

    2011-01-01

    As we celebrate the 50th anniversary of Gagarin's flight, which opened the era of humans in space, we also commemorate the 30th anniversary of the Space Shuttle Program (SSP), triumphantly completed by the flight of STS-135 on July 21, 2011. These were great milestones in the history of human space exploration. Many important questions regarding the ability of humans to adapt and function in space have been answered over the past 50 years, and many lessons have been learned; the SSP made a significant contribution to answering these questions. To ensure the availability of the Shuttle Program's experience to the international space community, NASA has decided to summarize the medico-biological results of the SSP in a fundamental edition scheduled for completion by the end of 2011 or beginning of 2012. The goal of this edition is to define the normal responses of the major physiological systems to short-duration space flights and to provide a comprehensive source of information for planning, for ensuring successful operational activities, and for managing potential medical problems that might arise during future long-term space missions. The book includes the following sections: 1. History of Shuttle Biomedical Research and Operations; 2. Medical Operations Overview - Systems, Monitoring, and Care; 3. Biomedical Research Overview; 4. System-specific Adaptations/Responses, Issues, and Countermeasures; 5. Multisystem Issues and Countermeasures. In addition, selected operational documents will be presented in the appendices. The chapters are written by well-recognized experts in the appropriate fields, peer reviewed, and edited by physicians and scientists with extensive expertise in space medical operations and space-related biomedical research. As space exploration continues, the major question of whether humans are capable of adapting to long-term presence and adequate functioning in space habitats remains to be answered. We expect that this comprehensive review of the medico-biological results of the SSP, along with the data collected during missions on the space stations (Mir and ISS), will provide a good starting point in seeking the answer to this question.

  11. Automatic Command Sequence Generation

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladded, Roy; Khanampompan, Teerapat

    2007-01-01

    Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major multi-mission phases, including cruise, aerobraking, mapping/science, and relay. Autogen is a Perl script which functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model, along with algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of the different mission phases and how the resulting command products must differ between phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files; this step retrieves all the input files needed for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences. Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), and Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version, including the MRO adaptation for the cruise mission phase; it was also used to develop the aerobraking and mapping mission phases for MRO.

  12. Component Skills of Text Comprehension in Less Competent Chinese Comprehenders

    ERIC Educational Resources Information Center

    Leong, Che Kan; Hau, Kit Tai; Tse, Shek Kam; Loh, Ka Yee

    2007-01-01

    The present study examined the role of verbal working memory (memory span and tongue-twister), two-character Chinese pseudoword reading (two tasks), rapid automatized naming (RAN) (letters and numbers), and phonological segmentation (deletion of rimes and onsets) in inferential text comprehension in Chinese in 31 less competent comprehenders

  13. Automatic Structures Recent Results and Open Questions

    NASA Astrophysics Data System (ADS)

    Stephan, Frank

    2015-06-01

    Regular languages are languages recognised by finite automata; automatic structures are a generalisation of regular languages where one also uses automatic relations (which are relations recognised by synchronous finite automata) and automatic functions (which are functions whose graph is an automatic relation). Functions and relations first-order definable from other automatic functions and relations are again automatic. Automatic functions coincide with the functions computed by position-faithful one-tape Turing machines in linear time. This survey addresses recent results and open questions on topics related to automatic structures: How difficult is the isomorphism problem for various types of automatic structures? Which groups are automatic? When are automatic groups Abelian or orderable? How can one overcome some of the limitations to represent rings and fields by weakening the automaticity requirements of a structure?
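
    As a concrete toy example of an automatic relation (an illustration in the spirit of the survey, not taken from it): the graph of binary addition x + y = z is recognized by a synchronous automaton whose entire state is the carry bit.

        def addition_is_automatic(x: int, y: int, z: int, width: int = 64) -> bool:
            """Recognize the relation x + y = z with a synchronous automaton.

            The automaton reads one bit of each argument per step, least
            significant bit first (the convolution of the three binary words);
            its entire state is the single carry bit, so the relation is
            automatic.
            """
            carry = 0  # automaton state
            for i in range(width):
                a, b, c = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
                s = a + b + carry
                if s & 1 != c:      # the output bit must match z's bit
                    return False
                carry = s >> 1      # transition to the next state
            return carry == 0       # accept iff no carry remains

        print(addition_is_automatic(13, 29, 42))  # True
        print(addition_is_automatic(13, 29, 41))  # False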

  14. Automatic drilling control system

    SciTech Connect

    Ball, J.W.

    1987-05-05

    An automatic drilling control system is described for a drilling apparatus having a rig with a crown block and a traveling block. The draw works includes an engine, a drum powered by the engine, clutches, and controls; a drilling line wound on the drum is rolled up or fed out during drilling by the engine. The drilling line extends through the crown block and the traveling block and connects to a fixed point. The line portion from the crown block to the fixed point is the dead line. The crown block and traveling block form a pulley system for supporting a drill pipe to raise or lower it during drilling. A hydraulic pressure sensor connects to the dead line to measure its tension. A weight indicator gauge adjacent to the controls connects to the pressure sensor by a hydraulic line. A brake, having a brake handle, controls the rate of feed-out of the drilling line to determine the tension on the dead line.

  15. Automatic Welding System

    NASA Astrophysics Data System (ADS)

    1982-01-01

    Robotic welding has been of interest to industrial firms because it offers higher productivity at lower cost than manual welding. Some systems with automated arc guidance are available, but they have disadvantages, such as limitations on the types of materials or seams that can be welded, susceptibility to stray electrical signals, a restricted field of view, or a tendency to contaminate the weld seam. Wanting to overcome these disadvantages, Marshall Space Flight Center, aided by Hayes International Corporation, developed a system that uses closed-circuit TV signals for automatic guidance of the welding torch. NASA granted a license to Combined Technologies, Inc. (CTI) for commercial application of the technology, and CTI developed a refined and improved arc guidance system. CTI, in turn, licensed the Merrick Corporation, also of Nashville, for marketing and manufacturing of the new system, called the CT2 Optical Tracker. CT2 is a non-contacting system that offers adaptability to a broader range of welding jobs and provides greater reliability in high-speed operation. It is extremely accurate and can travel at speeds of up to 150 inches per minute.

  16. Automatic flue damper

    SciTech Connect

    Prikkel, J.I.

    1982-12-28

    In the present invention, an automatic flue damper located in the vent stack of a household furnace or other apparatus requiring venting is interfaced with a thermostatic control and a fuel supply valve associated with the apparatus so as to maintain a vent passage to the atmosphere normally open during times when combustion is occurring in the apparatus and to close the vent passage at an appropriate time following the termination of combustion in the apparatus. The normally open vent condition is maintained by a damper positioning spring. Vent closure following combustion is accomplished by means of a direct current solenoid. Further included in the interfacing circuitry are a temperature sensing device effective to disable the solenoid, thus opening the vent, upon the appearance of unsafe stack temperatures and a pressure sensor which also disables the solenoid to open the vent stack upon the occurrence of an inappropriate discharge of fuel to the combustion chamber. The pressure sensor then also sounds an alarm to alert those nearby of the unauthorized escape of fuel.

  17. Automatic damper assembly

    SciTech Connect

    Kolt, S.

    1986-04-15

    An automatic temperature-responsive damper assembly is described for use within the conduit of a ventilating system designed to exhaust air from a defined space into the atmosphere. The assembly consists of: (a) mounting means for mounting the damper assembly within the conduit; (b) at least one vane rotatably mounted on the mounting means, the vane being rotatable between a normally closed position, wherein the vane substantially reduces the passage of air flow through the conduit, and an open position, wherein the vane increases the air flow through the conduit, the vane having (i) a pivotal axis disposed transverse to the axis of the conduit, (ii) a surface area including a larger first portion and a smaller second portion, the larger first portion extending beyond one side of the pivotal axis for biasing the vane to a closed position, the second portion extending beyond the other side of the pivotal axis and being adapted to cooperate with a first portion of a complementary vane to substantially reduce the air flow through the conduit, and (iii) an air flow path provided in the first vane portion for permitting a prescribed amount of air to flow through the conduit when the vane is in the normally closed position; and (c) means for rotating the vane towards the open position when the temperature of the air in the defined space reaches a predetermined level.

  18. Automatic transmission system

    SciTech Connect

    Ha, J.S.

    1989-04-25

    An automatic transmission system is described for use in vehicles, which comprises: a clutch wheel containing a plurality of concentric rings of decreasing diameter, the clutch wheel being attached to an engine of the vehicle; a plurality of clutch gears corresponding in size to the concentric rings, the clutch gears being adapted to selectively and frictionally engage with the concentric rings of the clutch wheel; an accelerator pedal and a gear selector, the accelerator pedal being connected to one end of a substantially U-shaped frame member, the other end of the substantially U-shaped frame member selectively engaging one end of one of the wires received in a pair of apertures of the gear selector; a plurality of drive gear controllers and a reverse gear controller; means operatively connected with the gear selector and the drive and reverse gear controllers for selectively engaging one of the controllers depending upon the position of the gear selector; and means for individually connecting the drive and reverse gear controllers with the corresponding clutch gears, whereby upon selection by the gear selector, frictional engagement is achieved between the clutch gear and the clutch wheel for rotating the wheels in the forward or reverse direction.

  19. Automatic truck dispatching: increasing productivity

    SciTech Connect

    Not Available

    1982-03-01

    There are various levels of sophistication in haulage truck dispatching methodology, the most advanced of which is totally automatic. In such a system, truck status and location data are automatically input to a computer. Dispatch decisions are then made by the computer and automatically transmitted to truck drivers; the dispatcher is free to handle non-routine situations by exception. The primary function of one of these systems, the Automatic Truck Identification and Dispatching (ATID) system, is to increase the productivity of existing surface mining equipment (trucks, shovels, crushers) without significantly altering existing mine operating procedures. The system is distributed by AVM Systems Inc. The ATID system automatically determines and reports the location of all haulage trucks to a central processor and combines this information with other data in an optimum dispatch algorithm. The dispatch order developed by the algorithm is automatically transmitted to the truck driver. The system then provides the dispatcher with the capability to monitor each truck. AVM recommends a feasibility study of the system to mine operators before they commit to full-scale implementation, so that management can be assured that the investment made in the system will lower operational costs. It is claimed that, on average, the system pays for itself in less than one year. The ATID system has three major subdivisions: automatic vehicle location and identification; communications; and computation and display.
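
    The article does not disclose AVM's dispatch algorithm. Purely as a hypothetical sketch of the kind of rule such a system might apply — send each idle truck to the shovel with the smallest expected wait — consider:

        import heapq

        def dispatch(idle_trucks, shovels):
            """Greedy dispatch: send each idle truck to the shovel with the
            smallest expected wait (queued trucks times minutes per load)."""
            # shovels: name -> (trucks_queued, minutes_per_load)
            heap = [(q * t, name, q, t) for name, (q, t) in shovels.items()]
            heapq.heapify(heap)
            orders = {}
            for truck in idle_trucks:
                wait, name, q, t = heapq.heappop(heap)
                orders[truck] = name
                heapq.heappush(heap, ((q + 1) * t, name, q + 1, t))
            return orders

        print(dispatch(["T1", "T2", "T3"], {"S1": (2, 4), "S2": (0, 6)}))
        # {'T1': 'S2', 'T2': 'S2', 'T3': 'S1'}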

  20. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction focus mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
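
    The infrastructure's actual ontology and queries are not reproduced here. A toy sketch of the central idea — that a SPARQL query alone can score a system — using rdflib and an invented minimal schema (the ex:matchesGold property is hypothetical):

        import rdflib

        # Invented schema: ex:matchesGold links a system annotation to the
        # gold annotation it correctly reproduces.
        data = """
        @prefix ex: <http://example.org/> .
        ex:sys1 a ex:SystemAnnotation ; ex:matchesGold ex:gold1 .
        ex:sys2 a ex:SystemAnnotation .
        ex:gold1 a ex:GoldAnnotation .
        ex:gold2 a ex:GoldAnnotation .
        """
        g = rdflib.Graph()
        g.parse(data=data, format="turtle")

        # Precision = matched system annotations / all system annotations;
        # COUNT skips rows where the OPTIONAL variable is unbound.
        query = """
        PREFIX ex: <http://example.org/>
        SELECT (COUNT(?m) AS ?tp) (COUNT(?s) AS ?sys) WHERE {
          ?s a ex:SystemAnnotation .
          OPTIONAL { ?s ex:matchesGold ?m . }
        }
        """
        for row in g.query(query):
            print(f"precision = {int(row.tp) / int(row.sys):.2f}")  # 0.50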

  1. Attaining Automaticity in the Visual Numerosity Task is Not Automatic

    PubMed Central

    Speelman, Craig P.; Muller Townsend, Katrina L.

    2015-01-01

    This experiment is a replication of experiments reported by Lassaline and Logan (1993) using the visual numerosity task. The aim was to replicate the transition from controlled to automatic processing reported by Lassaline and Logan (1993), and to examine the extent to which this result, reported with average group results, can be observed in the results of individuals within a group. The group results in this experiment did replicate those reported by Lassaline and Logan (1993); however, one half of the sample did not attain automaticity with the task, and one-third did not exhibit a transition from controlled to automatic processing. These results raise questions about the pervasiveness of automaticity, and the interpretation of group means when examining cognitive processes. PMID:26635658

  2. Nonverbatim Captioning in Dutch Television Programs: A Text Linguistic Approach

    ERIC Educational Resources Information Center

    Schilperoord, Joost; de Groot, Vanja; van Son, Nic

    2005-01-01

    In the Netherlands, as in most other European countries, closed captions for the deaf summarize texts rather than render them verbatim. Caption editors argue that in this way television viewers have enough time to both read the text and watch the program. They also claim that the meaning of the original message is properly conveyed. However, many

  3. Use of SI Metric Units Misrepresented in College Physics Texts.

    ERIC Educational Resources Information Center

    Hooper, William

    1980-01-01

    Summarizes results of a survey that examined 13 textbooks claiming to use SI units. Tables present data concerning the SI and non-SI units actually used in each text in discussion of fluid pressure and thermal energy, and data concerning which texts do and do not use SI as claimed. (CS)

  4. Important Text Characteristics for Early-Grades Text Complexity

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Koons, Heather; Hiebert, Elfrieda H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2015-01-01

    The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified…

  6. Identifying and classifying biomedical perturbations in text.

    PubMed

    Rodriguez-Esteban, Raul; Roberts, Phoebe M; Crawford, Matthew E

    2009-02-01

    Molecular perturbations provide a powerful toolset for biomedical researchers to scrutinize the contributions of individual molecules in biological systems. Perturbations qualify the context of experimental results and, despite their diversity, share properties in different dimensions in ways that can be formalized. We propose a formal framework to describe and classify perturbations that allows accumulation of knowledge in order to inform the process of biomedical scientific experimentation and target analysis. We apply this framework to develop a novel algorithm for automatic detection and characterization of perturbations in text and show its relevance in the study of gene-phenotype associations and protein-protein interactions in diabetes and cancer. Analyzing perturbations introduces a novel view of the multivariate landscape of biological systems. PMID:19074486

  7. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  8. Instrumentation for automatic coke cutting

    SciTech Connect

    Alworth, C.W.

    1985-01-01

    This paper describes a system developed by Conoco and currently in use at a refinery for automatically decoking delayed coker drums. The paper describes experiences with the system and discusses future plans for automated hydraulic decoking systems.

  9. ADMAP (automatic data manipulation program)

    NASA Technical Reports Server (NTRS)

    Mann, F. I.

    1971-01-01

    Instructions are presented on the use of ADMAP (automatic data manipulation program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single-parameter and double-parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.

  10. Clothes Dryer Automatic Termination Evaluation

    SciTech Connect

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.
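
    The report's sensor and control designs are not described in this abstract. Purely as an illustration of termination logic, here is a loop that stops on an estimated RMC threshold rather than a fixed timer (all numbers and the sensor model are hypothetical):

        def run_auto_cycle(read_rmc, target_rmc=0.05, max_minutes=120):
            """Terminate the cycle when the estimated remaining moisture content
            (RMC) reaches the target, instead of over-drying on a fixed timer."""
            for minute in range(max_minutes):
                if read_rmc() <= target_rmc:   # e.g. from a conductivity sensor
                    return minute              # stop: the load is dry
            return max_minutes                 # safety timeout

        # Fake sensor with an exponential drying curve starting at 60% RMC.
        state = {"rmc": 0.60}
        def fake_sensor():
            state["rmc"] *= 0.93
            return state["rmc"]

        print(run_auto_cycle(fake_sensor), "minutes to terminate")  # ~34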

  11. Evaluation of decision forests on text categorization

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Ho, Tin K.

    1999-12-01

    Text categorization is useful for indexing documents for information retrieval, filtering parts for document understanding, and summarizing the contents of documents of special interest. We describe a text categorization task and an experiment using documents from the Reuters and OHSUMED collections. We applied the Decision Forest classifier and compared its accuracy to those of C4.5 and kNN classifiers using both category-dependent and category-independent term selection schemes. It is found that Decision Forest outperforms both C4.5 and kNN in all cases, and that category-dependent term selection yields better accuracies. The performance of all three classifiers degrades from the Reuters collection to the OHSUMED collection, but Decision Forest remains superior.
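
    Ho's Decision Forest differs in detail from modern random forests, so the sketch below is only a rough modern analogue of the experiment — TF-IDF terms, chi-squared term selection, and a forest classifier via scikit-learn; the dataset and parameters are illustrative, not the paper's.

        from sklearn.datasets import fetch_20newsgroups
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.metrics import accuracy_score
        from sklearn.pipeline import make_pipeline

        cats = ["sci.med", "sci.space"]
        train = fetch_20newsgroups(subset="train", categories=cats)
        test = fetch_20newsgroups(subset="test", categories=cats)

        clf = make_pipeline(
            TfidfVectorizer(stop_words="english"),
            SelectKBest(chi2, k=2000),            # term selection step
            RandomForestClassifier(n_estimators=100, random_state=0),
        )
        clf.fit(train.data, train.target)
        print(accuracy_score(test.target, clf.predict(test.data)))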

  12. Automatic safety rod for reactors

    DOEpatents

    Germer, John H. (San Jose, CA)

    1988-01-01

    An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss of core flow. Actuation occurs either upon a sudden decrease in core pressure drop or when the pressure drop decreases below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  13. Automatic Collision Avoidance Technology (ACAT)

    NASA Technical Reports Server (NTRS)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of Automatic Collision Avoidance Technology (ACAT). One viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. AACAS uses a data link to determine position and closing rate, and it contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link; the system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test, including a review of the operation of AGCAS compared with a pilot's performance; the same review is given for AACAS.
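
    Neither presentation's algorithms are public. As a toy model of the AGCAS idea — predict the path of an immediate 5g pull-up and command a fly-up once that path would violate terrain clearance — the following sketch uses invented constants and a stub in place of a real DTED lookup:

        import numpy as np

        def needs_flyup(pos, vel, terrain_height, horizon_s=10.0, dt=0.5,
                        clearance_m=150.0, pullup_g=5.0):
            """Simulate an immediate pull-up at the given g limit; command an
            automatic fly-up if the predicted path violates terrain clearance."""
            g = 9.81
            p, v = np.array(pos, float), np.array(vel, float)
            for _ in np.arange(0.0, horizon_s, dt):
                p = p + v * dt
                v[2] += (pullup_g - 1.0) * g * dt   # net vertical acceleration
                if p[2] - terrain_height(p[0], p[1]) < clearance_m:
                    return True                     # clearance violation ahead
            return False

        flat = lambda x, y: 0.0                     # stand-in for a DTED lookup
        print(needs_flyup((0, 0, 500), (200, 0, -60), flat))    # False: recovers
        print(needs_flyup((0, 0, 500), (200, 0, -200), flat))   # True: fly up now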

  14. Text analysis methods, text analysis apparatuses, and articles of manufacture

    DOEpatents

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.

  15. Torpedo: topic periodicity discovery from text data

    NASA Astrophysics Data System (ADS)

    Wang, Jingjing; Deng, Hongbo; Han, Jiawei

    2015-05-01

    Although history may not repeat itself, many human activities are inherently periodic, recurring daily, weekly, monthly, yearly or following some other periods. Such recurring activities may not repeat the same set of keywords, but they do share similar topics. Thus it is interesting to mine topic periodicity from text data instead of just looking at the temporal behavior of a single keyword/phrase. Some previous preliminary studies in this direction prespecify a periodic temporal template for each topic. In this paper, we remove this restriction and propose a simple yet effective framework Torpedo to mine periodic/recurrent patterns from text, such as news articles, search query logs, research papers, and web blogs. We first transform text data into topic-specific time series by a time dependent topic modeling module, where each of the time series characterizes the temporal behavior of a topic. Then we use time series techniques to detect periodicity. Hence we both obtain a clear view of how topics distribute over time and enable the automatic discovery of periods that are inherent in each topic. Theoretical and experimental analyses demonstrate the advantage of Torpedo over existing work.
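
    Torpedo's topic-modeling stage is not reproduced here. As a minimal illustration of the second half of the pipeline — detecting periodicity in a topic's time series — here is an FFT-based period estimate on synthetic data (an assumed stand-in, not the paper's detector):

        import numpy as np

        def dominant_period(series):
            """Estimate the dominant period of a topic time series via the FFT."""
            x = np.asarray(series, dtype=float)
            x = x - x.mean()                      # remove the DC component
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0)
            peak = spectrum[1:].argmax() + 1      # skip the zero frequency
            return 1.0 / freqs[peak]

        # Synthetic daily topic intensity: a weekly cycle plus noise.
        rng = np.random.default_rng(0)
        days = np.arange(364)
        series = (1.0 + 0.5 * np.sin(2 * np.pi * days / 7)
                  + 0.1 * rng.normal(size=days.size))
        print(dominant_period(series))            # ~7.0 days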

  16. Automatic addressing of telemetry channels

    SciTech Connect

    Lucero, L A

    1982-08-01

    To simplify telemetry software development, a design that eliminates the use of software instructions to address telemetry channels is being implemented in our telemetry systems. By using the direct memory access function of the RCA 1802 microprocessor, once initialized, addressing of telemetry channels is automatic, requiring no software. In this report the automatic addressing of telemetry channels (AATC) scheme is compared with an earlier technique that uses software. In comparison, the automatic addressing scheme effectively increases the software capability of the microprocessor, simplifies telemetry dataset encoding, eases dataset changes, and may decrease the electronic hardware count. The software addressing technique uses at least three instructions to address each channel. The automatic addressing technique requires no software instructions. Instead, addressing is performed using a direct memory access cycle stealing technique. Application of an early version of this addressing scheme to telemetry Type 1, Dataset 3, opened up the capability to execute 400 more microprocessor instructions than could be executed using the software addressing scheme. The present version of the automatic addressing scheme uses a section of PROM reserved for telemetry channel addresses. Encoding for a dataset is accomplished by programming the PROM with channel addresses in the order they are to be monitored. The telemetry Type 2 software was written using the software addressing scheme, then rewritten using the automatic addressing scheme. While 1000 bytes of memory were required by the software addressing scheme, the automatic addressing scheme required only 396 bytes. A number of prototypes using AATC have been built and tested in a full telemetry lab unit. All have worked successfully.

  17. Mining the Text: 34 Text Features that Can Ease or Obstruct Text Comprehension and Use

    ERIC Educational Resources Information Center

    White, Sheida

    2012-01-01

    This article presents 34 characteristics of texts and tasks ("text features") that can make continuous (prose), noncontinuous (document), and quantitative texts easier or more difficult for adolescents and adults to comprehend and use. The text features were identified by examining the assessment tasks and associated texts in the national…

  19. Text Complexity and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What is meant by text complexity is a measurement of how challenging a particular text is to read. There are a myriad of different ways of explaining what makes text challenging to read, from the sophistication of the vocabulary employed to the length of its sentences to even measurements of how the text as a whole coheres. Research shows that no…

  20. The Challenge of Challenging Text

    ERIC Educational Resources Information Center

    Shanahan, Timothy; Fisher, Douglas; Frey, Nancy

    2012-01-01

    The Common Core State Standards emphasize the value of teaching students to engage with complex text. But what exactly makes a text complex, and how can teachers help students develop their ability to learn from such texts? The authors of this article discuss five factors that determine text complexity: vocabulary, sentence structure, coherence,…

  1. Technical Vocabulary in Specialised Texts.

    ERIC Educational Resources Information Center

    Chung, Teresa Mihwa; Nation, Paul

    2003-01-01

    Describes two studies of technical vocabulary, one using an anatomy text and the other an applied linguistics text. Technical vocabulary was found by rating words in the texts on a four-step scale. Found that technical vocabulary made up a very substantial proportion of both the different words and the running words in texts. (Author/VWL)

  2. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2013-05-28

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes processing circuitry configured to analyze initial text to generate a measurement basis usable in analysis of subsequent text, wherein the measurement basis comprises a plurality of measurement features from the initial text, a plurality of dimension anchors from the initial text and a plurality of associations of the measurement features with the dimension anchors, and wherein the processing circuitry is configured to access a viewpoint indicative of a perspective of interest of a user with respect to the analysis of the subsequent text, and wherein the processing circuitry is configured to use the viewpoint to generate the measurement basis.

  3. Evaluating a variety of text-mined features for automatic protein function prediction with GOstruct.

    PubMed

    Funk, Christopher S; Kahanda, Indika; Ben-Hur, Asa; Verspoor, Karin M

    2015-01-01

    Most computational methods that predict protein function do not take advantage of the large amount of information contained in the biomedical literature. In this work we evaluate both ontology term co-mention and bag-of-words features mined from the biomedical literature and analyze their impact in the context of a structured output support vector machine model, GOstruct. We find that even simple literature-based features are useful for predicting human protein function (F-max: Molecular Function = 0.408, Biological Process = 0.461, Cellular Component = 0.608). One advantage of using literature features is their ability to offer easy verification of automated predictions. We find through manual inspection of misclassifications that some false positive predictions could be biologically valid predictions based upon support extracted from the literature. Additionally, we present a "medium-throughput" pipeline that was used to annotate a large subset of co-mentions; we suggest that this strategy could help to speed up the rate at which proteins are curated. PMID:26005564

  4. Automatically Detecting Acute Myocardial Infarction Events from EHR Text: A Preliminary Study

    PubMed Central

    Zheng, Jiaping; Yarzebski, Jorge; Ramesh, Balaji Polepalli; Goldberg, Robert J.; Yu, Hong

    2014-01-01

    The Worcester Heart Attack Study (WHAS) is a population-based surveillance project examining trends in the incidence, in-hospital, and long-term survival rates of acute myocardial infarction (AMI) among residents of central Massachusetts. It provides insights into various aspects of AMI. Much of the data has been assessed manually, and we are developing supervised machine learning approaches to automate this process. Since the existing WHAS data cannot be used directly for an automated system, we first annotated the AMI information in electronic health records (EHR). With strict inter-annotator agreement over 0.74 and relaxed agreement over 0.9 (Cohen's κ), we annotated 105 EHR discharge summaries (135k tokens). Subsequently, we applied a state-of-the-art supervised machine-learning model, Conditional Random Fields (CRFs), for AMI detection. We explored different approaches to overcome the data sparseness challenge, and our results showed that cluster-based word features achieved the highest performance. PMID:25954440
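
    The WHAS annotation pipeline is not public. A minimal sketch of a CRF entity tagger in the same spirit, using the third-party sklearn-crfsuite package; the tokens, features, and AMI tag scheme are invented for illustration:

        import sklearn_crfsuite

        def word_features(tokens, i):
            """Simple per-token features; real systems add cluster features."""
            w = tokens[i]
            return {
                "lower": w.lower(),
                "is_upper": w.isupper(),
                "prev": tokens[i - 1].lower() if i > 0 else "<s>",
                "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
            }

        # Toy training data: tag token spans that mention an AMI event.
        sents = [["patient", "suffered", "acute", "myocardial", "infarction"],
                 ["no", "evidence", "of", "infarction", "today"]]
        tags = [["O", "O", "B-AMI", "I-AMI", "I-AMI"],
                ["O", "O", "O", "B-AMI", "O"]]

        X = [[word_features(s, i) for i in range(len(s))] for s in sents]
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit(X, tags)
        print(crf.predict(X))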

  5. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  6. Automatic Identification of Topic Tags from Texts Based on Expansion-Extraction Approach

    ERIC Educational Resources Information Center

    Yang, Seungwon

    2013-01-01

    Identifying topics of a textual document is useful for many purposes. We can organize the documents by topics in digital libraries. Then, we could browse and search for the documents with specific topics. By examining the topics of a document, we can quickly understand what the document is about. To augment the traditional manual way of topic…

  7. Automatic Word Sense Disambiguation of Acronyms and Abbreviations in Clinical Texts

    ERIC Educational Resources Information Center

    Moon, Sungrim

    2012-01-01

    The use of acronyms and abbreviations is increasing profoundly in the clinical domain in large part due to the greater adoption of electronic health record (EHR) systems and increased electronic documentation within healthcare. A single acronym or abbreviation may have multiple different meanings or senses. Comprehending the proper meaning of an…

  8. Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability

    ERIC Educational Resources Information Center

    Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi

    2012-01-01

    The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the
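
    The paper's feature set and regression algorithm are not given in this abstract; below is a minimal sketch of the general recipe — hand-crafted linguistic features feeding a regression model — with invented features and difficulty ratings:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def linguistic_features(text):
            """A few classic readability predictors (illustrative only; the
            paper's actual feature set is not reproduced here)."""
            words = text.split()
            sentences = max(text.count("."), 1)
            return [
                len(words) / sentences,                   # mean sentence length
                float(np.mean([len(w) for w in words])),  # mean word length
                len(set(words)) / len(words),             # type-token ratio
            ]

        texts = [
            "The cat sat. It was warm. The dog ran.",
            "Quantitative readability estimation necessitates multifarious "
            "linguistic predictors of considerable sophistication.",
        ]
        levels = [1.0, 9.0]  # hypothetical difficulty ratings from learners

        X = [linguistic_features(t) for t in texts]
        model = LinearRegression().fit(X, levels)
        print(model.predict([linguistic_features("Short words are easy to read.")]))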

  11. Vela network and automatic processing research. Final report

    SciTech Connect

    Cohen, T.J.

    1980-06-01

    This final report summarizes the material covered in each technical report and the technical memorandum prepared in FY79, and the conclusions drawn from this material. Eight major task areas were covered as follows: SRO/ASRO Evaluation Task; Signal Detection and Extraction Using Adaptive Beamforming (ABF) Techniques; Long-Period Signal Extraction Task; Extraction of Short-Period (SP) Regional Waveforms Using Polarization Filters; Event Identification Task; Data Base Transfer Task; Detection of Signal Periodicities Task; and Review of Automatic Signal Detectors Task.

  12. Exploiting vibration-based spectral signatures for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Crider, Lauren; Kangas, Scott

    2014-06-01

    Feature extraction algorithms for vehicle classification techniques represent a large branch of Automatic Target Recognition (ATR) efforts. Traditionally, vehicle ATR techniques have assumed that time-series vibration data collected from multiple accelerometers are a function of direct-path, engine-driven signal energy. If the data, however, are highly dependent on measurement location, these pre-established feature extraction algorithms are ineffective. In this paper, we examine the consequences of analyzing vibration data potentially contingent upon transfer path effects by exploring the sensitivity of sensor location. We summarize our analysis of spectral signatures from each accelerometer and investigate similarities within the data.
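
    The paper's features are not specified in the abstract. As a generic illustration of a vibration-based spectral signature — the strongest lines of a windowed FFT — on synthetic accelerometer data:

        import numpy as np

        def spectral_signature(accel, fs, n_peaks=5):
            """Summarize a vibration record by its strongest spectral lines."""
            x = accel - np.mean(accel)
            spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
            freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
            top = np.argsort(spectrum)[-n_peaks:]
            return sorted(zip(freqs[top], spectrum[top]))

        # Synthetic accelerometer trace: 30 Hz engine line, a harmonic, noise.
        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        rng = np.random.default_rng(1)
        sig = (np.sin(2 * np.pi * 30 * t) + 0.4 * np.sin(2 * np.pi * 60 * t)
               + 0.2 * rng.normal(size=t.size))
        for freq, magnitude in spectral_signature(sig, fs):
            print(f"{freq:6.1f} Hz  {magnitude:8.1f}")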

  13. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work on a project to design an automatic system of computer program documentation aids was done to determine which existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  14. Text2Video: text-driven facial animation using MPEG-4

    NASA Astrophysics Data System (ADS)

    Rurainsky, J.; Eisert, P.

    2005-07-01

    We present a complete system for the automatic creation of talking head video sequences from text messages. Our system converts the text into MPEG-4 Facial Animation Parameters and synthetic voice. A user selected 3D character will perform lip movements synchronized to the speech data. The 3D models created from a single image vary from realistic people to cartoon characters. A voice selection for different languages and gender as well as a pitch shift component enables a personalization of the animation. The animation can be shown on different displays and devices ranging from 3GPP players on mobile phones to real-time 3D render engines. Therefore, our system can be used in mobile communication for the conversion of regular SMS messages to MMS animations.

  15. An evaluation of an automatic markup system

    SciTech Connect

    Taghva, K.; Condit, A.; Borsack, J.

    1995-04-01

    One predominant application of OCR is the recognition of full-text documents for information retrieval. Modern retrieval systems exploit both the textual content of the document and its structure. The relationship between textual content and character accuracy has been the focus of recent studies. It has been shown that, due to the redundancy in text, average precision and recall are not heavily affected by OCR character errors. What is not fully known is to what extent OCR devices can provide reliable information that can be used to capture the structure of the document. In this paper, the authors present a preliminary report on the design and evaluation of a system to automatically mark up technical documents, based on information provided by an OCR device. The device the authors use differs from traditional OCR devices in that it not only performs optical character recognition but also provides detailed information about page layout, word geometry, and font usage. Their automatic markup program, which they call Autotag, uses this information, combined with dictionary lookup and content analysis, to identify structural components of the text. These include the document title, author information, abstract, sections, section titles, paragraphs, sentences, and de-hyphenated words. A visual examination of the hardcopy is compared to the output of the markup system to determine its correctness.

  16. Text editor on a chip

    SciTech Connect

    Jung Wan Cho; Heung Kyu Lee

    1983-01-01

    The authors propose a processor which provides useful facilities for implementing text editing commands. The processor now being developed is a component of a general front-end editing system which parses and processes program text. This processor, attached to a conventional microcomputer system bus, executes screen editing functions. Conventional text editing is a typical application of microprocessors, but in this paper emphasis is given to the firmware and hardware processing of text so that the processor can be fabricated on a single VLSI chip. To increase overall regularity and decrease design cost, the basic instructions are text-editing oriented with short basic cycles. 6 references.

  17. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on applying an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
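
    The rapid-prototyping idea, capture the problem specification as data and generate the simulation code from it, can be sketched as follows. The specification keys, the generated queueing model, and the helper names are hypothetical stand-ins, not AMPS's actual specification language.

    SPEC = {"arrival_mean": 5.0, "service_mean": 3.0, "stations": ["mill", "lathe"]}

    def generate(spec, n_jobs=100):
        """Emit source code for a crude tandem-queue simulator."""
        return "\n".join([
            "import random",
            f"def simulate(n_jobs={n_jobs}):",
            "    clock = 0.0",
            "    for _ in range(n_jobs):",
            f"        clock += random.expovariate(1.0 / {spec['arrival_mean']})",
            f"        for station in {spec['stations']!r}:",
            f"            clock += random.expovariate(1.0 / {spec['service_mean']})",
            "    return clock",
        ])

    code = generate(SPEC)
    exec(code)            # the generated simulator is now defined
    print(simulate())     # simulated clock time for 100 jobs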

  18. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, Anthony J. (Albuquerque, NM)

    1994-05-10

    Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.

  19. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, A.J.

    1994-05-10

    Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.

  20. Automatic analysis of multispectral images

    NASA Astrophysics Data System (ADS)

    Desouza, R. C. M.; Mitsuo, Fernando Augusto, II; Moreira, J. C.; Dutra, L. V.

    1981-08-01

    Some ideas of automatic multispectral image analysis are introduced. Automatic multispectral image analysis plays a central role in numerically oriented remote sensing systems. It presupposes the utilization of electronic equipment, mainly computers and their peripherals, to help people interpret the information contained in multispectral digital imagery. This necessity derives from the great amount of multispectral data gathered by remote sensors on satellites and airplanes. When the number of channels or spectral bands is increased, the interpretation becomes more complex and subjective. In some cases, for example in harvest estimation at the national or regional level, it is imperative to use computer systems to complete the work within the time required. Automatic analysis also aims to eliminate subjective factors that appear in human interpretation, thus increasing the overall precision.

  1. ParaText : scalable text analysis and visualization.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-07-01

    Automated analysis of unstructured text documents (e.g., web pages, newswire articles, research publications, business reports) is a key capability for solving important problems in areas including decision making, risk assessment, social network analysis, intelligence analysis, scholarly research and others. However, as data sizes continue to grow in these areas, scalable processing, modeling, and semantic analysis of text collections becomes essential. In this paper, we present the ParaText text analysis engine, a distributed-memory software framework for processing, modeling, and analyzing collections of unstructured text documents. Results on several document collections using hundreds of processors are presented to illustrate the flexibility, extensibility, and scalability of the entire process of text modeling, from raw data ingestion to application analysis.

  2. Text Association Analysis and Ambiguity in Text Mining

    NASA Astrophysics Data System (ADS)

    Bhonde, S. B.; Paikrao, R. L.; Rahane, K. U.

    2010-11-01

    Text Mining is the process of analyzing a semantically rich document or set of documents to understand the content and meaning of the information they contain. Research in Text Mining will enhance the human ability to process massive quantities of information, and it has high commercial value. Firstly, the paper introduces TM and its definition, and then gives an overview of the process of text mining and its applications. Up to now, not much research in text mining, especially in concept/entity extraction, has focused on the ambiguity problem. This paper addresses ambiguity issues in natural language texts and presents a new technique for resolving the ambiguity problem in extracting concepts/entities from texts. In the end, it shows the importance of TM in knowledge discovery and highlights the upcoming challenges of document mining and the opportunities it offers.

  3. Text Editing in Chemistry Instruction.

    ERIC Educational Resources Information Center

    Ngu, Bing Hiong; Low, Renae; Sweller, John

    2002-01-01

    Describes experiments with Australian high school students that investigated differences in performance on chemistry word problems between two learning strategies: text editing, and conventional problem solving. Concluded that text editing had no advantage over problem solving in stoichiometry problems, and that the suitability of a text editing…

  4. Informational Text and the CCSS

    ERIC Educational Resources Information Center

    Aspen Institute, 2012

    2012-01-01

    What constitutes an informational text covers a broad swath of different types of texts. Biographies & memoirs, speeches, opinion pieces & argumentative essays, and historical, scientific or technical accounts of a non-narrative nature are all included in what the Common Core State Standards (CCSS) envisions as informational text. Also included…

  5. Choosing Software for Text Processing.

    ERIC Educational Resources Information Center

    Mason, Robert M.

    1983-01-01

    Review of text processing software for microcomputers covers data entry, text editing, document formatting, and spelling and proofreading programs including "Wordstar,""PeachText,""PerfectWriter,""Select," and "The Word Plus.""The Whole Earth Software Catalog" and a new terminal to be manufactured for OCLC by IBM are mentioned. (EJS)

  6. Text Signals Influence Team Artifacts

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Rysavy, Monica D.; Taricani, Ellen

    2015-01-01

    This exploratory quasi-experimental investigation describes the influence of text signals on team visual map artifacts. In two course sections, four-member teams were given one of two print-based text passage versions on the course-related topic "Social influence in groups" downloaded from Wikipedia; this text had two paragraphs, each…

  7. Selecting Texts and Course Materials.

    ERIC Educational Resources Information Center

    Smith, Robert E.

    One of the most important decisions speech communication basic course directors make is the selection of the textbook. The first consideration in their choice of text should be whether or not the proposed text covers the units integral to the course. A second consideration should be whether or not the text covers the special topics integral to the…

  9. Too Dumb for Complex Texts?

    ERIC Educational Resources Information Center

    Bauerlein, Mark

    2011-01-01

    High school students' lack of experience and practice with reading complex texts is a primary cause of their difficulties with college-level reading. Filling the syllabus with digital texts does little to address this deficiency. Complex texts demand three dispositions from readers: a willingness to probe works characterized by dense meanings, the…

  11. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.

  12. Automatic diluter for bacteriological samples.

    PubMed Central

    Trinel, P A; Bleuze, P; Leroy, G; Moschetto, Y; Leclerc, H

    1983-01-01

    The described apparatus, carrying 190 tubes, allows automatic and aseptic dilution of liquid or suspended-solid samples. Serial 10-fold dilutions are programmable from 10(-1) to 10(-9) and are carried out in glass tubes with screw caps and split silicone septa. Dilution assays performed with strains of Escherichia coli and Bacillus stearothermophilus permitted efficient conditions for sterilization of the needle to be defined and showed that the automatic dilutions were as accurate and as reproducible as the most rigorous conventional dilutions. PMID:6338826
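
    The programmable dilution series reduces to simple arithmetic: each transfer carries one part of the previous tube into nine parts of diluent, so tube k holds a 10^-k dilution. A toy illustration (the volumes are assumed, not taken from the apparatus specification):

    sample_ul, diluent_ul = 100, 900            # 1:10 transfer at each step
    concentration = 1.0                         # relative to the undiluted sample
    for k in range(1, 10):                      # tubes 10^-1 .. 10^-9
        concentration *= sample_ul / (sample_ul + diluent_ul)
        print(f"tube {k}: {concentration:.0e} of the original concentration")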

  13. Succinct Text Indexing with Wildcards

    NASA Astrophysics Data System (ADS)

    Tam, Alan; Wu, Edward; Lam, Tak-Wah; Yiu, Siu-Ming

    A succinct text index uses space proportional to the text itself, say, two times n log σ for a text of n characters over an alphabet of size σ. In the past few years, there were several exciting results leading to succinct indexes that support efficient pattern matching. In this paper we present the first succinct index for a text that contains wildcards. The space complexity of our index is (3 + o(1))n log σ + O(d log n) bits, where d is the number of wildcard groups in the text. Such an index finds applications in indexing genomic sequences that contain single-nucleotide polymorphisms (SNPs), which can be modeled as wildcards.
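
    Dropping the o(1) and O(·) constants, the stated bound can be evaluated back-of-the-envelope; the genome-sized numbers below are purely illustrative, and the symbols σ and d follow the reconstruction above.

    from math import log2

    def index_bits(n, sigma, d):
        # (3 + o(1)) n log sigma + O(d log n), with the constants dropped
        return 3 * n * log2(sigma) + d * log2(n)

    # A 3-gigabase DNA text (sigma = 4) with one million SNP wildcard groups:
    gib = index_bits(n=3_000_000_000, sigma=4, d=1_000_000) / 8 / 2**30
    print(f"about {gib:.1f} GiB")   # ~2.1 GiB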

  14. ParaText : scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimensional feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.
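
    A minimal serial sketch of the LSA step that ParaText distributes: documents become term-frequency vectors and a truncated SVD maps them into a low-dimensional concept space. NumPy stands in for the distributed implementation, and the tiny corpus is illustrative.

    import numpy as np

    docs = ["text mining of text collections",
            "scalable analysis of documents",
            "mining large document collections"]
    vocab = sorted({w for d in docs for w in d.split()})
    A = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                # number of latent "concepts" kept
    doc_concepts = U[:, :k] * S[:k]      # documents in concept space
    print(np.round(doc_concepts, 2))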

  15. Statement Summarizing Research Findings on the Issue of the Relationship Between Food-Additive-Free Diets and Hyperkinesis in Children.

    ERIC Educational Resources Information Center

    Lipton, Morris; Wender, Esther

    The National Advisory Committee on Hyperkinesis and Food Additives paper summarized some research findings on the issue of the relationship between food-additive-free diets and hyperkinesis in children. Based on several challenge studies, it is concluded that the evidence generally refutes Dr. B. F. Feingold's claim that artificial colorings in…

  16. Acid rain trends summarized

    NASA Astrophysics Data System (ADS)

    In the northeastern United States, the acidity of precipitation has changed little in recent years, although the acidity is increasing in other regions. That's the latest word from a comprehensive review by the U.S. Geological Survey (USGS) of more than 200 published reports of acid rain research from the past 30 years. The report contributes to the controversy over whether increased sulfur emissions from Midwest powerplants increase the acidity of precipitation in the Northeast.When the results of the many individual studies are combined, they show that acidification of precipitation in the Northeast, which has the most damaging level of acidity on a regional basis, occurred primarily before the mid-1950's and has been largely stabilized since the mid-1960s, said John T. Turk, a research hydrologist at the USGS Denver office and author of the 18-page summary report.

  17. Opinion Integration and Summarization

    ERIC Educational Resources Information Center

    Lu, Yue

    2011-01-01

    As Web 2.0 applications become increasingly popular, more and more people express their opinions on the Web in various ways in real time. Such wide coverage of topics and abundance of users make the Web an extremely valuable source for mining people's opinions about all kinds of topics. However, since the opinions are usually expressed as…

  19. Text analysis devices, articles of manufacture, and text analysis methods

    DOEpatents

    Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C

    2015-03-31

    Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes a display configured to depict visible images, and processing circuitry coupled with the display and wherein the processing circuitry is configured to access a first vector of a text item and which comprises a plurality of components, to access a second vector of the text item and which comprises a plurality of components, to weight the components of the first vector providing a plurality of weighted values, to weight the components of the second vector providing a plurality of weighted values, and to combine the weighted values of the first vector with the weighted values of the second vector to provide a third vector.

  20. Automatic caption generation for news images.

    PubMed

    Feng, Yansong; Lapata, Mirella

    2013-04-01

    This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods. PMID:22641700
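
    The extractive surface-realization stage can be sketched as picking the article sentence that best overlaps the keywords proposed by the annotation model. The scoring rule, keywords, and sentences below are illustrative assumptions, not the paper's trained model.

    def extractive_caption(article_sentences, keywords):
        """Pick the sentence with the largest keyword overlap."""
        kw = {k.lower() for k in keywords}
        return max(article_sentences,
                   key=lambda s: len(kw & {w.strip(".,").lower() for w in s.split()}))

    sentences = ["The summit opened in Geneva on Monday.",
                 "Leaders discussed climate policy and emissions targets.",
                 "Protesters gathered outside."]
    print(extractive_caption(sentences, ["climate", "leaders", "policy"]))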

  1. Texting while driving: is speech-based text entry less risky than handheld text entry?

    PubMed

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. PMID:25089769

  2. Automatic marker for photographic film

    NASA Technical Reports Server (NTRS)

    Gabbard, N. M.; Surrency, W. M.

    1974-01-01

    Commercially-produced wire-marking machine is modified to title or mark film rolls automatically. Machine is used with film drive mechanism which is powered with variable-speed, 28-volt dc motor. Up to 40 frames per minute can be marked, reducing time and cost of process.

  3. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.

  4. Automatic agar tray inoculation device

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Mills, S. M.

    1972-01-01

    Automatic agar tray inoculation device is simple in design and foolproof in operation. It employs either conventional inoculating loop or cotton swab for uniform inoculation of agar media, and it allows technician to carry on with other activities while tray is being inoculated.

  5. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  7. Graphonomics, Automaticity and Handwriting Assessment

    ERIC Educational Resources Information Center

    Tucha, Oliver; Tucha, Lara; Lange, Klaus W.

    2008-01-01

    A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…

  8. Zum Uebersetzen fachlicher Texte (On the Translation of Technical Texts)

    ERIC Educational Resources Information Center

    Friederich, Wolf

    1975-01-01

    Reviews a 1974 East German publication on translation of scientific literature from Russian to German. Considers terminology, different standard levels of translation in East Germany, and other matters related to translation. (Text is in German.) (DH)

  9. Improve Reading with Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    The Common Core State Standards have cast a renewed light on reading instruction, presenting teachers with the new requirements to teach close reading of complex texts. Teachers and administrators should consider a number of essential features of close reading: They are short, complex texts; rich discussions based on worthy questions; revisiting…

  10. Towards Sustainable Text Concept Mapping

    ERIC Educational Resources Information Center

    Conlon, Tom

    2009-01-01

    Previous experimental studies have indicated that young people's text comprehension and summarisation skills can be improved by techniques based on text concept mapping (TCM). However, these studies have done little to elucidate a practical pedagogy that can make the techniques adoptable within the context of typical secondary school classrooms.

  11. Understanding and Teaching Complex Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2014-01-01

    Teachers in today's classrooms struggle every day to design instructional interventions that would build students' reading skills and strategies in order to ensure their comprehension of complex texts. Text complexity can be determined in both qualitative and quantitative ways. In this article, the authors describe various innovative…

  13. Text mining for systems biology.

    PubMed

    Fluck, Juliane; Hofmann-Apitius, Martin

    2014-02-01

    Scientific communication in biomedicine is, by and large, still text based. Text mining technologies for the automated extraction of useful biomedical information from unstructured text that can be directly used for systems biology modelling have been substantially improved over the past few years. In this review, we underline the importance of named entity recognition and relationship extraction as fundamental approaches that are relevant to systems biology. Furthermore, we emphasize the role of publicly organized scientific benchmarking challenges that reflect the current status of text-mining technology and are important in moving the entire field forward. Given further interdisciplinary development of systems biology-orientated ontologies and training corpora, we expect a steadily increasing impact of text-mining technology on systems biology in the future. PMID:24070668

  14. Problem of Automatic Thesaurus Construction (K Voprosu Ob Avtomaticheskom Postroenii Tezarusa). Subject Country: USSR.

    ERIC Educational Resources Information Center

    Ivanova, I. S.

    With respect to automatic indexing and information retrieval, statistical analysis of word usages in written texts is finding broad application in the solution of a number of problems. One of these problems is compiling a thesaurus on a digital computer. Using two methods, a comparative experiment in automatic thesaurus construction is presented.

  15. Automatic Scaffolding and Measurement of Concept Mapping for EFL Students to Write Summaries

    ERIC Educational Resources Information Center

    Yang, Yu-Fen

    2015-01-01

    An incorrect concept map may obstruct a student's comprehension when writing summaries if they are unable to grasp key concepts when reading texts. The purpose of this study was to investigate the effects of automatic scaffolding and measurement of three-layer concept maps on improving university students' writing summaries. The automatic…

  17. Sentence Similarity Analysis with Applications in Automatic Short Answer Grading

    ERIC Educational Resources Information Center

    Mohler, Michael A. G.

    2012-01-01

    In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and also introduce a novel technique to improve the performance of the system by integrating…

  19. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  1. Machine aided indexing from natural language text

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    The NASA Lexical Dictionary (NLD) Machine Aided Indexing (MAI) system was designed to (1) reuse the indexing of the Defense Technical Information Center (DTIC); (2) reuse the indexing of the Department of Energy (DOE); and (3) reduce the time required for original indexing. This was done by automatically generating appropriate NASA thesaurus terms from either the other agency's index terms, or, for original indexing, from document titles and abstracts. The NASA STI Program staff devised two different ways to generate thesaurus terms from text. The first group of programs identified noun phrases by a parsing method that allowed for conjunctions and certain prepositions, on the assumption that indexable concepts are found in such phrases. Results were not always satisfactory, and it was noted that indexable concepts often occurred outside of noun phrases. The first method also proved to be too slow for the ultimate goal of interactive (online) MAI. The second group of programs used the knowledge base (KB), word proximity, and frequency of word and phrase occurrence to identify indexable concepts. Both methods are described and illustrated. Online MAI has been achieved, as well as several spinoff benefits, which are also described.
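
    A minimal sketch of the second strategy's core signal, co-occurrence frequency within a proximity window, is shown below. The window size and scoring are assumptions for illustration; the actual system additionally maps candidates through the NASA thesaurus knowledge base.

    from collections import Counter

    def candidate_pairs(text, window=2):
        """Count word pairs that co-occur within a small proximity window."""
        words = [w.strip(".,;").lower() for w in text.split()]
        pairs = Counter()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                pairs[(w, v)] += 1
        return pairs

    abstract = ("Automatic indexing reuses agency index terms; "
                "automatic indexing also reduces original indexing time.")
    print(candidate_pairs(abstract).most_common(3))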

  2. Auxiliary circuit enables automatic monitoring of EKG'S

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Auxiliary circuits allow direct, automatic monitoring of electrocardiograms by digital computers. One noiseless square-wave output signal for each trigger pulse from an electrocardiogram preamplifier is produced. The circuit also permits automatic processing of cardiovascular data from analog tapes.

  3. Text structures in medical text processing: empirical evidence and a text understanding prototype.

    PubMed Central

    Hahn, U.; Romacker, M.

    1997-01-01

    We consider the role of textual structures in medical texts. In particular, we examine the impact that the lack of recognition of text phenomena has on the validity of medical knowledge bases fed by a natural language understanding front-end. First, we review the results from an empirical study on a sample of medical texts, considering various forms of local coherence phenomena (anaphora and textual ellipses). We then discuss the representation bias emerging in the text knowledge base that is likely to occur when these phenomena are not dealt with, mainly the emergence of referentially incoherent and invalid representations. We then turn to a medical text understanding system designed to account for local text coherence. PMID:9357739

  4. Candidate wind-turbine-generator site summarized meteorological data for December 1976-December 1981. [Program WIND listed

    SciTech Connect

    Sandusky, W.F.; Renne, D.S.; Hadley, D.L.

    1982-09-01

    Summarized hourly meteorological data for 16 of the original 17 candidate wind turbine generator sites, collected during the period from December 1976 through December 1981, are presented. The data collection program at some individual sites may not span this entire period but is contained within the reporting period. The purpose of providing the summarized data is to document the data collection program and provide data that could be considered representative of long-term meteorological conditions at each site. For each site, data are given in eight tables and a topographic map showing the location of the meteorological tower and turbine, if applicable. Use of the information in these tables, along with information about specific wind turbines, should allow the user to estimate the potential for long-term average wind energy production at each site.

  5. Text mining for systems modeling.

    PubMed

    Kowald, Axel; Schmeier, Sebastian

    2011-01-01

    The yearly output of scientific papers is constantly rising and makes it often impossible for the individual researcher to keep up. Text mining of scientific publications is, therefore, an interesting method to automate knowledge and data retrieval from the literature. In this chapter, we discuss specific tasks required for text mining, including their problems and limitations. The second half of the chapter demonstrates the various aspects of text mining using a practical example. Publications are transformed into a vector space representation and then support vector machines are used to classify papers depending on their content of kinetic parameters, which are required for model building in systems biology. PMID:21063956

  6. Toward text understanding: classification of text documents by word map

    NASA Astrophysics Data System (ADS)

    Visa, Ari J. E.; Toivanen, Jarmo; Back, Barbro; Vanharanta, Hannu

    2000-04-01

    In many fields, for example in business, engineering, and law, there is interest in the search and classification of text documents in large databases. Methods exist for information retrieval purposes, but they are mainly based on keywords. In cases where keywords are lacking, information retrieval is problematic. One approach is to use the whole text document as a search key. Neural networks offer an adaptive tool for this purpose. This paper suggests a new adaptive approach to the problem of clustering and search in large text document databases. The approach is a multilevel one, based on word, sentence, and paragraph level maps. Here only the word map level is reported. The reported approach is based on smart encoding, on Self-Organizing Maps, and on document histograms. The results are very promising.

  7. Why is Light Text Harder to Read Than Dark Text?

    NASA Technical Reports Server (NTRS)

    Scharff, Lauren V.; Ahumada, Albert J.

    2005-01-01

    Scharff and Ahumada (2002, 2003) measured text legibility for light text and dark text. For paragraph readability and letter identification, responses to light text were slower and less accurate for a given contrast. Was this polarity effect (1) an artifact of our apparatus, (2) a physiological difference in the separate pathways for positive and negative contrast or (3) the result of increased experience with dark text on light backgrounds? To rule out the apparatus-artifact hypothesis, all data were collected on one monitor. Its luminance was measured at all levels used, and the spatial effects of the monitor were reduced by pixel doubling and quadrupling (increasing the viewing distance to maintain constant angular size). Luminances of vertical and horizontal square-wave gratings were compared to assess display speed effects. They existed, even for 4-pixel-wide bars. Tests for polarity asymmetries in display speed were negative. Increased experience might develop full letter templates for dark text, while recognition of light letters is based on component features. Earlier, an observer ran all conditions at one polarity and then switched. If dark and light letters were intermixed, the observer might use component features on all trials and do worse on the dark letters, reducing the polarity effect. We varied polarity blocking (completely blocked, alternating smaller blocks, and intermixed blocks). Letter identification responses times showed polarity effects at all contrasts and display resolution levels. Observers were also more accurate with higher contrasts and more pixels per degree. Intermixed blocks increased the polarity effect by reducing performance on the light letters, but only if the randomized block occurred prior to the nonrandomized block. Perhaps observers tried to use poorly developed templates, or they did not work as hard on the more difficult items. The experience hypothesis and the physiological gain hypothesis remain viable explanations.

  8. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
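
    The redundancy-elimination step can be sketched as follows: adjacent frames whose color histograms are nearly identical under the Jeffrey divergence are dropped. The Puzicha-style formulation with m = (p + q)/2 and the threshold value are assumptions for illustration; the non-informative-frame classifier is not shown.

    import numpy as np

    def jeffrey_divergence(p, q, eps=1e-12):
        p, q = p / p.sum(), q / q.sum()
        m = (p + q) / 2
        return float(np.sum(p * np.log((p + eps) / (m + eps)) +
                            q * np.log((q + eps) / (m + eps))))

    def keep_keyframes(histograms, thresh=0.05):
        kept = [0]
        for i in range(1, len(histograms)):
            if jeffrey_divergence(histograms[kept[-1]], histograms[i]) > thresh:
                kept.append(i)
        return kept

    frames = [np.ones(16), np.ones(16), np.r_[np.ones(8), 3 * np.ones(8)]]
    print(keep_keyframes(frames))   # -> [0, 2]; the duplicate frame is dropped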

  10. Text Structures, Readings, and Retellings: An Exploration of Two Texts

    ERIC Educational Resources Information Center

    Martens, Prisca; Arya, Poonam; Wilson, Pat; Jin, Lijun

    2007-01-01

    The purpose of this study is to explore the relationship between children's use of reading strategies and language cues while reading and their comprehension after reading two texts: "Cherries and Cherry Pits" (Williams, 1986) and "There's Something in My Attic" (Mayer, 1988). The data were drawn from a larger study of the reading strategies of…

  11. 8 CFR 1205.1 - Automatic revocation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Automatic revocation. 1205.1 Section 1205.1 Aliens and Nationality EXECUTIVE OFFICE FOR IMMIGRATION REVIEW, DEPARTMENT OF JUSTICE IMMIGRATION REGULATIONS REVOCATION OF APPROVAL OF PETITIONS § 1205.1 Automatic revocation. (a) Reasons for automatic revocation. The approval of a petition or...

  12. 8 CFR 205.1 - Automatic revocation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 8 Aliens and Nationality 1 2012-01-01 2012-01-01 false Automatic revocation. 205.1 Section 205.1 Aliens and Nationality DEPARTMENT OF HOMELAND SECURITY IMMIGRATION REGULATIONS REVOCATION OF APPROVAL OF PETITIONS § 205.1 Automatic revocation. (a) Reasons for automatic revocation. The approval of a petition or self-petition made under section...

  13. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Automatic couplers. 75.1405 Section 75.1405... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips § 75.1405 Automatic couplers....

  14. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Automatic couplers. 75.1405 Section 75.1405... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips § 75.1405 Automatic couplers....

  15. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Automatic couplers. 75.1405 Section 75.1405... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips § 75.1405 Automatic couplers....

  16. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Automatic couplers. 75.1405 Section 75.1405... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips § 75.1405 Automatic couplers....

  17. 30 CFR 75.1405 - Automatic couplers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... between the ends of such equipment. All haulage equipment without automatic couplers in use in a mine on... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Automatic couplers. 75.1405 Section 75.1405... MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Hoisting and Mantrips § 75.1405 Automatic couplers....

  18. Self-Compassion and Automatic Thoughts

    ERIC Educational Resources Information Center

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  19. Nonverbatim captioning in Dutch television programs: a text linguistic approach.

    PubMed

    Schilperoord, Joost; de Groot, Vanja; van Son, Nic

    2005-01-01

    In the Netherlands, as in most other European countries, closed captions for the deaf summarize texts rather than render them verbatim. Caption editors argue that in this way television viewers have enough time to both read the text and watch the program. They also claim that the meaning of the original message is properly conveyed. However, many deaf people demand verbatim subtitles so that they have full access to all original information. They claim that vital information is withheld from them as a result of the summarizing process. Linguistic research was conducted in order (a) to identify the type of information that is left out of captioned texts and (b) to determine the effects of nonverbatim captioning on the meaning of the text. The differences between spoken and captioned texts were analyzed on the basis of on a model of coherence relations in discourse. One prominent finding is that summarizing affects coherence relations, making them less explicit and altering the implied meaning. PMID:16037483

  20. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
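
    For concreteness, the dynamic-programming accounting the paper discusses can be sketched as follows, with unit weights for substitution, insertion, and deletion; the unit weights are exactly the kind of implementation choice the paper notes can change the reported error count.

    def levenshtein(truth, ocr):
        """Minimum substitutions, insertions, and deletions (unit weights)."""
        prev = list(range(len(ocr) + 1))
        for i, t in enumerate(truth, 1):
            cur = [i]
            for j, o in enumerate(ocr, 1):
                cur.append(min(prev[j] + 1,              # deletion
                               cur[j - 1] + 1,           # insertion
                               prev[j - 1] + (t != o)))  # substitution
            prev = cur
        return prev[-1]

    # The classic "m" -> "rn" OCR confusion counts as two edits:
    print(levenshtein("summarization", "surnmarization"))   # -> 2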

  1. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content or Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, but assigning metadata manually is not possible. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents or learning objects. If (public) enterprise objects were modelled 'machine understandable', they could form the context for automatic metadata generation. The approach introduced in this paper is to model the context (the (public) enterprise objects) in an ontology and to use that ontology to infer content-related metadata.

  2. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on presentation of an object image as a collection of pixels of various colours and consecutive automatic painting of the distinct parts of the image. The A&CC have technical objectives centred on such directions as: 1) image processing, 2) image feature extraction, 3) image analysis, and some others, in any consistency and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities of A&CC usage deal with the use of artificial neural network technologies. We believe that the A&CC can be used in the creation of systems of testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in the creation of new software for CCDs, in industrial vision and the creation of decision-making systems, etc. The opportunities of the A&CC have been tested in image analysis of model fires and plumes of sprayed fluid and ensembles of particles, in decoding of interferometric images, for digitization of paper diagrams of electrical signals, for text recognition, for elimination of image noise, for image filtration, and for analysis of astronomical images and air photography, including detection of objects.
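
    The "consecutive automatic painting" of distinct image parts is essentially connected-component labelling, which can be sketched with a flood fill. The 4-connectivity and the tiny integer image are assumptions for illustration.

    def paint_regions(img):
        """Label each 4-connected region of same-coloured pixels."""
        h, w = len(img), len(img[0])
        labels = [[0] * w for _ in range(h)]
        next_label = 0
        for sy in range(h):
            for sx in range(w):
                if labels[sy][sx]:
                    continue
                next_label += 1
                stack, colour = [(sy, sx)], img[sy][sx]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and not labels[y][x] \
                            and img[y][x] == colour:
                        labels[y][x] = next_label
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        return labels

    print(paint_regions([[5, 5, 7],
                         [5, 7, 7]]))   # -> [[1, 1, 2], [1, 2, 2]]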

  3. AUTO: Automatic script generation system

    NASA Astrophysics Data System (ADS)

    Granacki, John; Hom, Ivan; Kazi, Tauseef

    1993-11-01

    This technical manual describes an automatic script generation system (Auto) for guiding the physical design of a printed circuit board. Auto accepts a printed circuit board design as specified in a netlist and partslist and returns a script to automatically provide all the necessary commands and file specifications required by Harris EDA's Finesse CAD system for placing and routing the printed circuit board. Auto insulates the designer from learning the details of commercial CAD systems, allows designers to modify the script for customized design entry, and performs format and completeness checking of the design files. This technical manual contains a complete tutorial/design example describing how to use the Auto system and also contains appendices describing the format of files required by the Finesse CAD system.

  4. GPU-Accelerated Text Mining

    SciTech Connect

    Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E

    2009-01-01

    Accelerating hardware devices represent a novel promise for improving performance in many problem domains, but it is not clear which accelerators are suitable for which domains. While there is no room in general-purpose processor design to significantly increase the processor frequency, developers are instead resorting to multi-core chips duplicating conventional computing capabilities on a single die. Yet accelerators offer more radical designs with a much higher level of parallelism and novel programming environments. The present work assesses the viability of text mining on CUDA. Text mining is one of the key concepts that has become prominent as an effective means to index the Internet, but its applications range beyond this scope and extend to providing document similarity metrics, the subject of this work. We have developed and optimized text search algorithms for GPUs to exploit their potential for massive data processing. We discuss the algorithmic challenges of parallelization for text search problems on GPUs and demonstrate the potential of these devices in experiments by reporting significant speedups. Our study may be one of the first to assess more complex text search problems for suitability for GPU devices, and it may also be one of the first to exploit and report on atomic instruction usage that has recently become available in NVIDIA devices.
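
    A serial sketch of the document-similarity metric such an engine parallelizes, cosine similarity between term-frequency vectors, is given below; on a GPU each pairwise dot product would map to its own thread block, which is not shown here.

    import numpy as np

    def similarity_matrix(docs):
        """Pairwise cosine similarity between term-frequency vectors."""
        vocab = sorted({w for d in docs for w in d.split()})
        M = np.array([[d.split().count(w) for w in vocab] for d in docs], float)
        M /= np.linalg.norm(M, axis=1, keepdims=True)
        return M @ M.T

    docs = ["gpu text mining", "text mining on gpu devices", "cooking recipes"]
    print(np.round(similarity_matrix(docs), 2))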

  5. Automatic translation among spoken languages

    NASA Technical Reports Server (NTRS)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  6. Measuring Concentration Of Ozone Automatically

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph R.

    1990-01-01

    Airborne photometer measures absorption of ultraviolet. Automatically measures ozone concentrations in atmosphere to accuracy within 10 parts per billion. Air collected outside airplane enters photometer by way of transfer valve. Pressure and temperature of air measured simultaneously with transmissivity of air to ultraviolet light from lamp. Instrument has mass of 20.5 kg and fits in aluminum box measuring 78 by 58 by 25 cm. Compact, lightweight, low-power instrument developed for use on high-altitude research airplane.

  7. Automatic computation of transfer functions

    DOEpatents

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
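
    A minimal sketch of the idea, using symbolic nodal analysis on a one-node RC low-pass rather than the patented matrix procedure:

    ```python
    import sympy as sp

    s, R, C = sp.symbols("s R C", positive=True)
    Vin, Vout = sp.symbols("Vin Vout")

    # Kirchhoff current law at the output node of an RC divider:
    node_eq = sp.Eq((Vout - Vin) / R + s * C * Vout, 0)
    H = sp.simplify(sp.solve(node_eq, Vout)[0] / Vin)
    print(H)   # 1/(C*R*s + 1), the first-order low-pass transfer function
    ```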

  8. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable black-box engineering tool. Finite element meshes must be generated automatically from computer-aided design databases, and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  9. Ageism in undergraduate psychology texts.

    PubMed

    Whitbourne, S K; Hulicka, I M

    1990-10-01

    A sample of 139 texts written over the past 40 years was analyzed for evidence of ageism (i.e., lack of attention to the psychology of later life and stereotyping of older adults). More recent texts cover the topic more comprehensively than in the past, but this coverage is limited in depth. Although textbook authors appear to be trying to communicate a positive message about aging and older persons, their efforts are compromised by ambivalence in the form of contradictory statements about the nature of the aging process. There is an unfortunate condensation of sources in recent texts, which draw heavily from a small cluster of authorities. Implications of these findings for the larger textbook enterprise are discussed. PMID:2252230

  10. Mapping text with phrase nets.

    PubMed

    van Ham, Frank; Wattenberg, Martin; Viégas, Fernanda B

    2009-01-01

    We present a new technique, the phrase net, for generating visual overviews of unstructured text. A phrase net displays a graph whose nodes are words and whose edges indicate that two words are linked by a user-specified relation. These relations may be defined either at the syntactic or lexical level; different relations often produce very different perspectives on the same text. Taken together, these perspectives often provide an illuminating visual overview of the key concepts and relations in a document or set of documents. PMID:19834186
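
    The core construction is compact enough to sketch: scan the text for a user-specified pattern and accumulate weighted word-pair edges. The regular-expression matcher below is a simplification of the paper's syntactic and lexical relations, and the layout and rendering stages are omitted.

    ```python
    import re
    from collections import Counter

    def phrase_net_edges(text, pattern=r"\b(\w+) of (\w+)\b"):
        """Weighted directed edges for a user-specified relation such as 'X of Y'."""
        edges = Counter()
        for x, y in re.findall(pattern, text.lower()):
            edges[(x, y)] += 1
        return edges

    sample = "The house of Usher stood near the house of mirrors."
    print(phrase_net_edges(sample))
    # Counter({('house', 'usher'): 1, ('house', 'mirrors'): 1})
    ```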

  11. Biomarker Identification Using Text Mining

    PubMed Central

    Li, Hui; Liu, Chunmei

    2012-01-01

    Identifying molecular biomarkers has become one of the important tasks for scientists to assess the different phenotypic states of cells or organisms correlated to the genotypes of diseases from large-scale biological data. In this paper, we propose a text-mining-based method to discover biomarkers from PubMed. First, we construct a database based on a dictionary, and then we use a finite state machine to identify the biomarkers. Our method of text mining provides a highly reliable approach to discover the biomarkers in the PubMed database. PMID:23197989
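
    A toy version of dictionary-driven matching conveys the idea; the biomarker names below are hypothetical examples, and the one-state scanner stands in for the paper's full finite state machine.

    ```python
    BIOMARKERS = {"psa", "crp", "her2", "ca-125"}   # hypothetical dictionary

    def find_biomarkers(sentence):
        """One-state scanner: emit a hit whenever a token is in the dictionary."""
        hits = []
        for token in sentence.lower().split():
            token = token.strip(".,;:()")
            if token in BIOMARKERS:
                hits.append(token)
        return hits

    print(find_biomarkers("Elevated PSA and CRP were observed in the cohort."))
    # ['psa', 'crp']
    ```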

  12. Automatic Contrail Detection and Segmentation

    NASA Technical Reports Server (NTRS)

    Weiss, John M.; Christopher, Sundar A.; Welch, Ronald M.

    1998-01-01

    Automatic contrail detection is of major importance in the study of the atmospheric effects of aviation. Due to the large volume of satellite imagery, selecting contrail images for study by hand is impractical and highly subject to human error. It is far better to have a system in place that will automatically evaluate an image to determine 1) whether it contains contrails and 2) where the contrails are located. Preliminary studies indicate that it is possible to automatically detect and locate contrails in Advanced Very High Resolution Radiometer (AVHRR) imagery with a high degree of confidence. Once contrails have been identified and localized in a satellite image, it is useful to segment the image into contrail versus noncontrail pixels. The ability to partition image pixels makes it possible to determine the optical properties of contrails, including optical thickness and particle size. In this paper, we describe a new technique for segmenting satellite images containing contrails. This method has good potential for creating a contrail climatology in an automated fashion. The majority of contrails are detected, rejecting clutter in the image, even cirrus streaks. Long, thin contrails are most easily detected. However, some contrails may be missed because they are curved, diffused over a large area, or present in short segments. Contrails average 2-3 km in width for the cases studied.
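
    One standard way to pull long, thin, line-like features out of imagery, not necessarily the authors' algorithm, is edge detection followed by a probabilistic Hough transform; the thresholds below are illustrative only.

    ```python
    import cv2
    import numpy as np

    def contrail_candidates(gray_uint8):
        """Line segments surviving edge detection plus a probabilistic Hough pass."""
        edges = cv2.Canny(gray_uint8, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                minLineLength=80, maxLineGap=5)
        return [] if lines is None else [tuple(seg[0]) for seg in lines]

    img = np.zeros((256, 256), dtype=np.uint8)
    cv2.line(img, (20, 30), (220, 90), 255, 2)   # synthetic linear "contrail"
    print(contrail_candidates(img)[:1])          # one segment near (20, 30, 220, 90)
    ```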

  13. Young Children's Thinking in Relation to Texts: A Comparison with Older Children.

    ERIC Educational Resources Information Center

    Feathers, Karen M.

    2002-01-01

    Compared the thinking of kindergartners and sixth-graders as expressed in unassisted retellings of a narrative text. Found no significant age differences in retelling lengths and few significant age differences in the amount and types of thinking. Older children tended to summarize paragraphs and single sentences; young children tended to summarize…

  14. Automatic, computerized testing of bolts

    NASA Technical Reports Server (NTRS)

    Carlucci, J., Jr.; Lobb, V. B.; Stoller, F. W.

    1970-01-01

    System for testing bolts with various platings, lubricants, nuts, and tightening procedures tests 200 fasteners, and processes and summarizes the results, within one month. System measures input torque, nut rotation, bolt clamping force, bolt shank twist, and bolt elongation; data are printed in report form. Test apparatus is described.

  15. Solar Concepts: A Background Text.

    ERIC Educational Resources Information Center

    Gorham, Jonathan W.

    This text is designed to provide teachers, students, and the general public with an overview of key solar energy concepts. Various energy terms are defined and explained. Basic thermodynamic laws are discussed. Alternative energy production is described in the context of the present energy situation. Described are the principal contemporary solar…

  16. Predictive Encoding in Text Compression.

    ERIC Educational Resources Information Center

    Raita, Timo; Teuhola, Jukka

    1989-01-01

    Presents three text compression methods of increasing power and evaluates each based on the trade-off between compression gain and processing time. The advantages of using hash coding for speed and optimal arithmetic coding to successor information for compression gain are discussed. (26 references) (Author/CLB)

  18. Dangers of Texting While Driving

    MedlinePLUS

    ... the cause of 18 percent of all fatal crashes – with 3,328 people killed – and crashes resulting in an injury – with 421,000 people ... Transportation Institute found that text messaging creates a crash risk 23 times worse than driving while not ...

  19. Policy Discourses in School Texts

    ERIC Educational Resources Information Center

    Maguire, Meg; Hoskins, Kate; Ball, Stephen; Braun, Annette

    2011-01-01

    In this paper, we focus on some of the ways in which schools are both productive of and constituted by sets of "discursive practices, events and texts" that contribute to the process of policy enactment. As Colebatch (2002: 2) says, "policy involves the creation of order--that is, shared understandings about how the various participants will act…

  20. FTP: Full-Text Publishing?

    ERIC Educational Resources Information Center

    Jul, Erik

    1992-01-01

    Describes the use of file transfer protocol (FTP) on the INTERNET computer network and considers its use as an electronic publishing system. The differing electronic formats of text files are discussed; the preparation and access of documents are described; and problems are addressed, including a lack of consistency. (LRW)

  1. Transformation and Text: Journal Pedagogy.

    ERIC Educational Resources Information Center

    Ellis, Carol

    One intention that an instructor had for her new course called "Writing and Healing: Women's Journal Writing" was to make apparent the power of self-written text to transform the writer. She asked her students--women studying women writing their lives and women writing their own lives--to write three pages a day and to focus on change. The…

  2. Controversial Texts and Public Education.

    ERIC Educational Resources Information Center

    Smith, David L.

    Because public schools are designed to serve the widest range of interests and are committed to the ideal of democracy, teachers cannot afford to avoid teaching works or presenting ideas that offend some members of communities. Students need to learn the value of controversy and of the challenges posed by a text. Richard Wright's "Native Son" and…

  3. Semantic Annotation of Complex Text Structures in Problem Reports

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Throop, David R.; Fleming, Land D.

    2011-01-01

    Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.

  4. Sex and gender differences in autism spectrum disorder: summarizing evidence gaps and identifying emerging areas of priority.

    PubMed

    Halladay, Alycia K; Bishop, Somer; Constantino, John N; Daniels, Amy M; Koenig, Katheen; Palmer, Kate; Messinger, Daniel; Pelphrey, Kevin; Sanders, Stephan J; Singer, Alison Tepper; Taylor, Julie Lounds; Szatmari, Peter

    2015-01-01

    One of the most consistent findings in autism spectrum disorder (ASD) research is a higher rate of ASD diagnosis in males than females. Despite this, remarkably little research has focused on the reasons for this disparity. Better understanding of this sex difference could lead to major advancements in the prevention or treatment of ASD in both males and females. In October of 2014, Autism Speaks and the Autism Science Foundation co-organized a meeting that brought together almost 60 clinicians, researchers, parents, and self-identified autistic individuals. Discussion at the meeting is summarized here with recommendations on directions of future research endeavors. PMID:26075049

  5. [On two antique medical texts].

    PubMed

    Rosa, Maria Carlota

    2005-01-01

    The two texts presented here--Regimento proueytoso contra ha pestenença [literally, "useful regime against pestilence"] and Modus curandi cum balsamo ["curing method using balm"]--represent the extent of Portugal's known medical library until circa 1530, produced in gothic letters by foreign printers: Germany's Valentim Fernandes, perhaps the era's most important printer, who worked in Lisbon between 1495 and 1518, and Germão Galharde, a Frenchman who practiced his trade in Lisbon and Coimbra between 1519 and 1560. Modus curandi, which came to light in 1974 thanks to bibliophile José de Pina Martins, is anonymous. Johannes Jacobi is believed to be the author of Regimento proueytoso, which was translated into Latin (Regimen contra pestilentiam), French, and English. Both texts are presented here in facsimile and in modern Portuguese, while the first has also been reproduced in archaic Portuguese using modern typographical characters. This philological venture into sixteenth-century medicine is supplemented by a scholarly glossary which serves as a valuable tool in interpreting not only Regimento proueytoso but also other texts from the era. Two articles place these documents in historical perspective. PMID:17500134

  6. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  7. Towards Automatic Classification of Wikipedia Content

    NASA Astrophysics Data System (ADS)

    Szymański, Julian

    Wikipedia - the Free Encyclopedia - encounters the problem of properly classifying new articles every day. The assignment of articles to categories is performed manually, and it is a time-consuming task that requires knowledge of the Wikipedia structure beyond typical editor competence; this leads to human error, such as omitted or incorrect assignments of articles to categories. The article presents the application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.
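
    A hedged sketch of the word-content variant follows, using a TF-IDF representation and a linear SVM; the toy corpus and category labels are invented for illustration.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = ["the battle began in 1066", "the enzyme catalyses the reaction",
            "the treaty was signed after the war", "the protein folds into a helix"]
    labels = ["history", "biology", "history", "biology"]

    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(docs, labels)
    print(model.predict(["the king signed the armistice"]))   # expected: ['history']
    ```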

  8. Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users

    PubMed Central

    Shatkay, Hagit; Pan, Fengxia; Rzhetsky, Andrey; Wilbur, W. John

    2008-01-01

    Motivation: Much current research in biomedical text mining is concerned with serving biologists by extracting certain information from scientific text. We note that there is no ‘average biologist’ client; different users have distinct needs. For instance, as noted in past evaluation efforts (BioCreative, TREC, KDD) database curators are often interested in sentences showing experimental evidence and methods. Conversely, lab scientists searching for known information about a protein may seek facts, typically stated with high confidence. Text-mining systems can target specific end-users and become more effective, if the system can first identify text regions rich in the type of scientific content that is of interest to the user, retrieve documents that have many such regions, and focus on fact extraction from these regions. Here, we study the ability to characterize and classify such text automatically. We have recently introduced a multi-dimensional categorization and annotation scheme, developed to be applicable to a wide variety of biomedical documents and scientific statements, while intended to support specific biomedical retrieval and extraction tasks. Results: The annotation scheme was applied to a large corpus in a controlled effort by eight independent annotators, where three individual annotators independently tagged each sentence. We then trained and tested machine learning classifiers to automatically categorize sentence fragments based on the annotation. We discuss here the issues involved in this task, and present an overview of the results. The latter strongly suggest that automatic annotation along most of the dimensions is highly feasible, and that this new framework for scientific sentence categorization is applicable in practice. Contact: shatkay@cs.queensu.ca PMID:18718948

  9. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part of speech tagger. Large image repositories are queried with these terms. Per term representative colors are extracted using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL Divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
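
    The histogram-based variant can be sketched as coarse RGB quantization over the collected images, returning the centre of the most populated bin; the image-collection step and the mean-shift alternative are omitted.

    ```python
    import numpy as np

    def representative_color(images, bins=8):
        """Centre of the most populated coarse RGB bin across all pixels."""
        step = 256 // bins
        hist = np.zeros((bins, bins, bins), dtype=np.int64)
        for img in images:                        # each img: H x W x 3, uint8
            idx = img.reshape(-1, 3) // step
            np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        r, g, b = np.unravel_index(hist.argmax(), hist.shape)
        return ((r + 0.5) * step, (g + 0.5) * step, (b + 0.5) * step)

    reddish = np.full((4, 4, 3), (200, 30, 40), dtype=np.uint8)
    print(representative_color([reddish]))        # bin centre near (208, 16, 48)
    ```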

  10. Guidelines for Effective Usage of Text Highlighting Techniques.

    PubMed

    Strobelt, Hendrik; Oelke, Daniela; Kwon, Bum Chul; Schreck, Tobias; Pfister, Hanspeter

    2016-01-01

    Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowdsourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers. PMID:26529715

  11. Supporting the education evidence portal via text mining

    PubMed Central

    Ananiadou, Sophia; Thompson, Paul; Thomas, James; Mu, Tingting; Oliver, Sandy; Rickinson, Mark; Sasaki, Yutaka; Weissenbacher, Davy; McNaught, John

    2010-01-01

    The UK Education Evidence Portal (eep) provides a single, searchable, point of access to the contents of the websites of 33 organizations relating to education, with the aim of revolutionizing work practices for the education community. Use of the portal alleviates the need to spend time searching multiple resources to find relevant information. However, the combined content of the websites of interest is still very large (over 500,000 documents and growing). This means that searches using the portal can produce very large numbers of hits. As users often have limited time, they would benefit from enhanced methods of performing searches and viewing results, allowing them to drill down to information of interest more efficiently, without having to sift through potentially long lists of irrelevant documents. The Joint Information Systems Committee (JISC)-funded ASSIST project has produced a prototype web interface to demonstrate the applicability of integrating a number of text-mining tools and methods into the eep, to facilitate an enhanced searching, browsing and document-viewing experience. New features include automatic classification of documents according to a taxonomy, automatic clustering of search results according to similar document content, and automatic identification and highlighting of key terms within documents. PMID:20643679

  12. Research on the automatic laser navigation system of the tunnel boring machine

    NASA Astrophysics Data System (ADS)

    Liu, Yake; Li, Yueqiang

    2011-12-01

    By establishing the relevant coordinate frames of the Automatic Laser Navigation System, the basic principle of the system, which obtains the TBM's three-dimensional reference point and yaw angle through mathematical transformations between the TBM, target-prism, and earth coordinate systems, is discussed in detail. Following the rigid-body description of machine posture, methods for measuring the TBM attitude parameters and acquiring the data are proposed, and measures to improve the accuracy of the Laser Navigation System are summarized.
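
    The attitude step reduces to familiar geometry once the targets are expressed in a common frame. A minimal sketch, with invented coordinates and only the yaw computation of the full TBM/prism/earth chain:

    ```python
    import math

    def yaw_from_targets(front, rear):
        """Heading of the rear-to-front machine axis in the horizontal plane (rad)."""
        dx, dy = front[0] - rear[0], front[1] - rear[1]
        return math.atan2(dy, dx)

    # Two TBM-mounted targets, measured in an earth frame (invented numbers):
    front, rear = (105.0, 52.0, 10.3), (100.0, 50.0, 10.1)
    print(round(math.degrees(yaw_from_targets(front, rear)), 1))   # 21.8
    ```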

  13. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  14. Intermediate leak protection/automatic shutdown for B and W helical coil steam generator

    SciTech Connect

    Not Available

    1981-01-01

    The report summarizes a follow-on study to the multi-tiered Intermediate Leak/Automatic Shutdown System report. It makes the automatic shutdown system specific to the Babcock and Wilcox (B and W) helical coil steam generator and to the Large Development LMFBR Plant. Threshold leak criteria specific to this steam generator design are developed, and performance predictions are presented for a multi-tier intermediate leak, automatic shutdown system applied to this unit. Preliminary performance predictions for application to the helical coil steam generator were given in the referenced report; for the most part, these predictions have been confirmed. The importance of including a cover gas hydrogen meter in this unit is demonstrated by calculation of a response time one-fifth that of an in-sodium meter at hot standby and refueling conditions.

  15. Commutated automatic gain control system

    NASA Technical Reports Server (NTRS)

    Yost, S. R.

    1982-01-01

    A commutated automatic gain control (AGC) system was designed and built for a prototype Loran C receiver. The receiver uses a microcomputer to control a memory aided phase-locked loop (MAPLL). The microcomputer also controls the input/output, latitude/longitude conversion, and the recently added AGC system. The circuit designed for the AGC is described, and bench and flight test results are presented. The AGC circuit described actually samples starting at a point 40 microseconds after a zero crossing determined by the software lock pulse ultimately generated by a 30 microsecond delay and add network in the receiver front end envelope detector.

  16. Automatic Test-Case Generation

    NASA Astrophysics Data System (ADS)

    Machado, Patrcia; Sampaio, Augusto

    This chapter is an introduction to the theory, techniques, and tool support for automatic test-case generation. We discuss how test models can be generated, for instance, from requirements specifications, and present different criteria and strategies for generating and selecting test cases from these models. The Target tool is presented and used in this chapter for illustrating test-case generation techniques, along with a case study in the domain of mobile-phone applications. Target generates abstract test cases from use-case specifications presented as templates whose contents are described using a controlled natural language.

  17. Plex: automatically generated microcomputer layouts

    SciTech Connect

    Buric, M.R.; Christensen, C.; Matheson, T.G.

    1983-01-01

    A program has been developed that automatically generates VLSI layouts of microcomputers tailored to user specifications. Most major features of the generated microcomputers, such as the data-word size, the number of registers, and the instruction-memory and data-memory size, can be varied. The resulting microcomputers are small enough to be used as components on a custom VLSI chip. The machines have separate instruction and data spaces and three levels of pipelining, and they execute an instruction every clock cycle. 1 ref.

  18. Text Mining for Protein Docking

    PubMed Central

    Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.

    2015-01-01

    The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to the existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, which still can be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied the text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~ 25% complexes in the dataset. The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466

  19. Text mining in livestock animal science: introducing the potential of text mining to animal sciences.

    PubMed

    Sahadevan, S; Hofmann-Apitius, M; Schellander, K; Tesfaye, D; Fluck, J; Friedrich, C M

    2012-10-01

    In biological research, establishing the prior art by searching and collecting information already present in the domain has equal importance as the experiments done. To obtain a complete overview about the relevant knowledge, researchers mainly rely on 2 major information sources: i) various biological databases and ii) scientific publications in the field. The major difference between the 2 information sources is that information from databases is available, typically well structured and condensed. The information content in scientific literature is vastly unstructured; that is, dispersed among the many different sections of scientific text. The traditional method of information extraction from scientific literature occurs by generating a list of relevant publications in the field of interest and manually scanning these texts for relevant information, which is very time consuming. It is more than likely that in using this "classical" approach the researcher misses some relevant information mentioned in the literature or has to go through biological databases to extract further information. Text mining and named entity recognition methods have already been used in human genomics and related fields as a solution to this problem. These methods can process and extract information from large volumes of scientific text. Text mining is defined as the automatic extraction of previously unknown and potentially useful information from text. Named entity recognition (NER) is defined as the method of identifying named entities (names of real world objects; for example, gene/protein names, drugs, enzymes) in text. In animal sciences, text mining and related methods have been briefly used in murine genomics and associated fields, leaving behind other fields of animal sciences, such as livestock genomics. The aim of this work was to develop an information retrieval platform in the livestock domain focusing on livestock publications and the recognition of relevant data from cattle and pigs. For this purpose, the rather noncomprehensive resources of pig and cattle gene and protein terminologies were enriched with orthologue synonyms, integrated in the NER platform, ProMiner, which is successfully used in human genomics domain. Based on the performance tests done, the present system achieved a fair performance with precision 0.64, recall 0.74, and F1 measure of 0.69 in a test scenario based on cattle literature. PMID:22665627
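
    The reported scores are mutually consistent, since F1 is the harmonic mean of precision and recall:

    ```python
    def f1(precision, recall):
        """Harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    print(round(f1(0.64, 0.74), 2))   # 0.69, matching the reported measure
    ```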

  20. A Howardite-Eucrite-Diogenite (HED) Meteorite Compendium: Summarizing Samples of Asteroid 4 Vesta in Preparation for the Dawn Mission

    NASA Technical Reports Server (NTRS)

    Garber, J. M.; Righter, K.

    2011-01-01

    The Howardite-Eucrite-Diogenite (HED) suite of achondritic meteorites, thought to originate from asteroid 4 Vesta, has recently been summarized into a meteorite compendium. This compendium will serve as a guide for researchers interested in further analysis of HEDs, and we expect that interest in these samples will greatly increase with the planned arrival of the Dawn Mission at Vesta in August 2011. The focus of this abstract/poster is to (1) introduce and describe HED samples from both historical falls and Antarctic finds, and (2) provide information on unique HED samples available for study from the Antarctic Meteorite Collection at JSC, including the vesicular eucrite PCA91007, the olivine diogenite EETA79002, and the paired ALH polymict eucrites.

  1. Preferences of Knowledge Users for Two Formats of Summarizing Results from Systematic Reviews: Infographics and Critical Appraisals

    PubMed Central

    Crick, Katelynn; Hartling, Lisa

    2015-01-01

    Objectives: To examine and compare preferences of knowledge users for two different formats of summarizing results from systematic reviews: infographics and critical appraisals. Design: Cross-sectional. Setting: Annual members meeting of a Network of Centres of Excellence in Knowledge Mobilization called TREKK (Translating Emergency Knowledge for Kids). TREKK is a national network of researchers, clinicians, health consumers, and relevant organizations with the goal of mobilizing knowledge to improve emergency care for children. Participants: Members of the TREKK Network attending the annual meeting in October 2013. Outcome Measures: Overall preference for infographic vs. critical appraisal format; members' ratings of each format on a 10-point Likert scale for clarity, comprehensibility, and aesthetic appeal; members' impressions of the appropriateness of the two formats for their professional role and for other audiences. Results: Among 64 attendees, 58 members provided feedback (91%). Overall, preference was divided, with 24/47 (51%) preferring the infographic to the critical appraisal. Preference varied by professional role, with 15/22 (68%) of physicians preferring the critical appraisal and 8/12 (67%) of nurses preferring the infographic. The critical appraisal was rated higher for clarity (mean 7.8 vs. 7.0; p = 0.03), while the infographic was rated higher for aesthetic appeal (mean 7.2 vs. 5.0; p < 0.001). There was no difference between formats for comprehensibility (mean 7.6 critical appraisal vs. 7.1 infographic; p = 0.09). Respondents indicated the infographic would be most useful for patients and their caregivers, while the critical appraisal would be most useful for their professional roles. Conclusions: Infographics are considered more aesthetically appealing for summarizing evidence; however, critical appraisal formats are considered clearer and more comprehensible. Our findings show differences in audience-specific preferences for the presentation of research results. This study supports other research indicating that tools for knowledge dissemination and translation need to be targeted to specific end users' preferences and needs. PMID:26466099

  2. Offsite radiation doses summarized from Hanford environmental monitoring reports for the years 1957-1984. [Contains glossary]

    SciTech Connect

    Soldat, J.K.; Price, K.R.; McCormack, W.D.

    1986-02-01

    Since 1957, evaluations of offsite impacts from each year of operation have been summarized in publicly available, annual environmental reports. These evaluations included estimates of potential radiation exposure to members of the public, either in terms of percentages of the then permissible limits or in terms of radiation dose. The estimated potential radiation doses to maximally exposed individuals from each year of Hanford operations are summarized in a series of tables and figures. The applicable standard for radiation dose to an individual for whom the maximum exposure was estimated is also shown. Although the estimates address potential radiation doses to the public from each year of operations at Hanford between 1957 and 1984, their sum will not produce an accurate estimate of doses accumulated over this time period. The estimates were the best evaluations available at the time to assess potential dose from the current year of operation as well as from any radionuclides still present in the environment from previous years of operation. There was a constant striving for improved evaluation of the potential radiation doses received by members of the public, and as a result the methods and assumptions used to estimate doses were periodically modified to add new pathways of exposure and to increase the accuracy of the dose calculations. Three conclusions were reached from this review: radiation doses reported for the years 1957 through 1984 for the maximum individual did not exceed the applicable dose standards; radiation doses reported over the past 27 years are not additive because of the changing and inconsistent methods used; and results from environmental monitoring and the associated dose calculations reported over the 27 years from 1957 through 1984 do not suggest a significant dose contribution from the buildup in the environment of radioactive materials associated with Hanford operations.

  3. Automatic temperature controlled retinal photocoagulation.

    PubMed

    Schlott, Kerstin; Koinzer, Stefan; Ptaszynski, Lars; Bever, Marco; Baade, Alex; Roider, Johann; Birngruber, Reginald; Brinkmann, Ralf

    2012-06-01

    Laser coagulation is a treatment method for many retinal diseases. Due to variations in fundus pigmentation and light scattering inside the eye globe, different lesion strengths are often achieved. The aim of this work is to realize an automatic feedback algorithm to generate desired lesion strengths by controlling the retinal temperature increase with the irradiation time. Optoacoustics afford non-invasive retinal temperature monitoring during laser treatment. A 75 ns/523 nm Q-switched Nd:YLF laser was used to excite the temperature-dependent pressure amplitudes, which were detected at the cornea by an ultrasonic transducer embedded in a contact lens. A 532 nm continuous wave Nd:YAG laser served for photocoagulation. The ED50 temperatures, for which the probability of ophthalmoscopically visible lesions after one hour in vivo in rabbits was 50%, varied from 63 °C for 20 ms to 49 °C for 400 ms. Arrhenius parameters were extracted as ΔE = 273 J mol^-1 and A = 3 × 10^44 s^-1. Control algorithms for mild and strong lesions were developed, which led to average lesion diameters of 162 ± 34 µm and 189 ± 34 µm, respectively. It could be demonstrated that the sizes of the automatically controlled lesions were widely independent of the treatment laser power and the retinal pigmentation. PMID:22734753

  4. Automatic Inspection In Industry Today

    NASA Astrophysics Data System (ADS)

    Brook, Richard A.

    1989-02-01

    With increasing competition in the manufacturing industries, product quality is becoming even more important. The shortcomings of human inspectors in many applications are well known; however, the eye/brain combination is very powerful and difficult to replace. At best, any system only simulates a small subset of the human's operations. The economic justification for installing automatic inspection is often difficult without previous applications experience. It therefore calls for confidence and long-term vision by those making the decisions. Over the last ten years the use of such systems has increased as the technology involved has matured and the risks have diminished. There is now a complete spectrum of industrial applications, from simple, low-cost systems using standard sensors and computer hardware to higher-cost, custom-designed systems using novel sensors and processing hardware. The underlying growth in enabling technology has been in many areas; sensors and sensing techniques, signal processing and data processing have all moved forward rapidly. This paper will examine the current state of automatic inspection and look to the future. The use of expert systems is an obvious candidate. Parallel processing, giving massive increases in the speed of data reduction, is also likely to play a major role in future systems.

  6. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film, has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost, and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations, using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but they were greatly reduced by using preprocessing techniques.

  7. Automatic visible watermarking of images

    NASA Astrophysics Data System (ADS)

    Rao, A. Ravishankar; Braudaway, Gordon W.; Mintzer, Frederick C.

    1998-04-01

    Visible image watermarking has become an important and widely used technique to identify ownership and protect copyrights to images. A visible image watermark immediately identifies the owner of an image, and if properly constructed, can deter subsequent unscrupulous use of the image. The insertion of a visible watermark should satisfy two conflicting conditions: the intensity of the watermark should be strong enough to be perceptible, yet it should be light enough to be unobtrusive and not mar the beauty of the original image. Typically such an adjustment is made manually, and human intervention is required to set the intensity of the watermark at the right level. This is fine for a few images, but is unsuitable for a large collection of images. Thus, it is desirable to have a technique to automatically adjust the intensity of the watermark based on some underlying property of each image. This will allow a large number of images to be automatically watermarked, thus increasing the throughput of the watermarking stage. In this paper we show that the measurement of image texture can be successfully used to automate the adjustment of watermark intensity. A linear regression model is used to predict subjective assessments of correct watermark intensity based on image texture measurements.
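
    A sketch of the approach, using the standard deviation of intensity as a stand-in texture measure; the linear-model coefficients below are invented placeholders, not the regression fitted in the paper.

    ```python
    import numpy as np

    A, B = 0.02, 0.15   # hypothetical fitted slope and intercept

    def watermark_alpha(gray_image):
        """Map a global texture measure (intensity std. dev.) to a clipped opacity."""
        texture = float(np.std(gray_image))
        return float(np.clip(A * texture + B, 0.1, 0.6))

    flat = np.full((64, 64), 128.0)                       # smooth image
    noisy = flat + np.random.default_rng(0).normal(0, 40, (64, 64))
    print(watermark_alpha(flat), watermark_alpha(noisy))  # ~ 0.15 and 0.6
    ```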

  8. Automatic temperature controlled retinal photocoagulation

    NASA Astrophysics Data System (ADS)

    Schlott, Kerstin; Koinzer, Stefan; Ptaszynski, Lars; Bever, Marco; Baade, Alex; Roider, Johann; Birngruber, Reginald; Brinkmann, Ralf

    2012-06-01

    Laser coagulation is a treatment method for many retinal diseases. Due to variations in fundus pigmentation and light scattering inside the eye globe, different lesion strengths are often achieved. The aim of this work is to realize an automatic feedback algorithm to generate desired lesion strengths by controlling the retinal temperature increase with the irradiation time. Optoacoustics afford non-invasive retinal temperature monitoring during laser treatment. A 75 ns/523 nm Q-switched Nd:YLF laser was used to excite the temperature-dependent pressure amplitudes, which were detected at the cornea by an ultrasonic transducer embedded in a contact lens. A 532 nm continuous wave Nd:YAG laser served for photocoagulation. The ED50 temperatures, for which the probability of ophthalmoscopically visible lesions after one hour in vivo in rabbits was 50%, varied from 63 °C for 20 ms to 49 °C for 400 ms. Arrhenius parameters were extracted as ΔE = 273 J mol^-1 and A = 3 × 10^44 s^-1. Control algorithms for mild and strong lesions were developed, which led to average lesion diameters of 162 ± 34 µm and 189 ± 34 µm, respectively. It could be demonstrated that the sizes of the automatically controlled lesions were widely independent of the treatment laser power and the retinal pigmentation.

  9. Automatic testing of speech recognition.

    PubMed

    Francart, Tom; Moonen, Marc; Wouters, Jan

    2009-02-01

    Speech reception tests are commonly administered by manually scoring the oral response of the subject. This requires a test supervisor to be continuously present. To avoid this, a subject can type the response, after which it can be scored automatically. However, spelling errors may then be counted as recognition errors, influencing the test results. We demonstrate an autocorrection approach based on two scoring algorithms to cope with spelling errors. The first algorithm deals with sentences and is based on word scores. The second algorithm deals with single words and is based on phoneme scores. Both algorithms were evaluated with a corpus of typed answers based on three different Dutch speech materials. The percentage of differences between automatic and manual scoring was determined, in addition to the mean difference in speech recognition threshold. The sentence correction algorithm performed at a higher accuracy than commonly obtained with these speech materials. The word correction algorithm performed better than the human operator. Both algorithms can be used in practice and allow speech reception tests with open set speech materials over the internet. PMID:19219692
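
    A simplified autocorrection rule conveys the intent: accept a typed word if it lies within a small edit distance of the target, so spelling slips are not counted as recognition errors. The published algorithms score sentences by words and words by phonemes; the single distance threshold below is an assumption for illustration.

    ```python
    def edit_distance(a, b):
        """Levenshtein distance via the standard dynamic-programming recurrence."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def score_response(typed, target, tolerance=1):
        return edit_distance(typed.lower(), target.lower()) <= tolerance

    print(score_response("speach", "speech"))   # True: the spelling slip is forgiven
    ```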

  10. Populating the Semantic Web by Macro-reading Internet Text

    NASA Astrophysics Data System (ADS)

    Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard

    A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.

  11. Sleep automatism: clinical study in forensic nursing.

    PubMed

    Hamer, B A; Payne, A

    1993-01-01

    The authors describe sleep automatism as it pertains to forensic nursing. A plea of sane automatism may result in an acquittal and, as a defense, creates a very interesting medical legal circumstance. A case study is presented to illustrate the necessity for nursing to know how to assess for this dissociative state, to understand the legal implications, and to identify nursing issues relevant to sleep automatism defenses. The clinical and personality characteristics on which evaluations should be based are also outlined. PMID:8516096

  12. Reverse automatic differentiation of modular FORTRAN programs

    SciTech Connect

    Horwedel, J.E.

    1992-03-01

    Several software systems are available for implementing automatic differentiation of computer programs. The forward mode of automatic differentiation is limited by computational intensity and computer memory. The reverse mode, or adjoint approach, is limited by computer memory and disk storage. A modular technique for derivative computation that can significantly reduce memory required to compute derivatives in a complex FORTRAN model using the reverse mode of automatic differentiation is discussed and demonstrated.
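
    The reverse mode itself is compact enough to show in miniature: record operations on a tape during the forward sweep, then propagate adjoints backwards. The toy below captures the core idea only; the report's systems instrument full FORTRAN models.

    ```python
    class Var:
        """A value plus the tape links needed for the backward sweep."""
        def __init__(self, value, parents=()):
            self.value, self.parents, self.grad = value, parents, 0.0

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

    def backward(out):
        """Propagate adjoints from the output; adequate here since only leaf
        variables are shared (a full system orders nodes topologically)."""
        out.grad = 1.0
        stack = [out]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad += local * node.grad
                stack.append(parent)

    x, y = Var(3.0), Var(4.0)
    z = x * y + x            # z = xy + x, so dz/dx = y + 1, dz/dy = x
    backward(z)
    print(x.grad, y.grad)    # 5.0 3.0
    ```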

  13. Differences in Text Structure and Its Implications for Assessment of Struggling Readers

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen M.; Sabatini, John; Futagi, Yoko; Kostin, Irene

    2006-01-01

    One source of potential difficulty for struggling readers is the variability of texts across grade levels. This article explores the use of automatic natural language processing techniques to identify dimensions of variation within a corpus of school-appropriate texts. Specifically, we asked: Are there identifiable dimensions of lexical and…

  14. An NLP Framework for Non-Topical Text Analysis in Urdu--A Resource Poor Language

    ERIC Educational Resources Information Center

    Mukund, Smruthi

    2012-01-01

    Language plays a very important role in understanding the culture and mindset of people. Given the abundance of electronic multilingual data, it is interesting to see what insight can be gained by automatic analysis of text. This in turn calls for text analysis which is focused on non-topical information such as emotions being expressed that is in…

  16. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. Validity of the model is tested in simulation using synthetic data.
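
    Stripped of the priors the paper develops, the observation model is a weighted superposition, and the weights can be estimated by nonnegative least squares, as the sketch below shows on synthetic data.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    D = np.abs(rng.normal(size=(64, 5)))       # spectra of 5 known library sounds
    w_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
    y = D @ w_true                             # observed mixture

    w_est, _residual = nnls(D, y)
    print(np.round(w_est, 3))                  # ~ [0. 2. 0. 1. 0.]
    ```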

  17. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the Nasa Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog is a trademark of Borland International. IBM PC and PS DOS are registered trademarks of International Business Machines Corporation.

  18. Temporal reasoning over clinical text: the state of the art

    PubMed Central

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2013-01-01

    Objectives: To provide an overview of the problem of temporal reasoning over clinical text and to summarize the state of the art in clinical natural language processing for this task. Target audience: This overview targets medical informatics researchers who are unfamiliar with the problems and applications of temporal reasoning over clinical text. Scope: We review the major applications of text-based temporal reasoning, describe the challenges for software systems handling temporal information in clinical text, and give an overview of the state of the art. Finally, we present some perspectives on future research directions that emerged during the recent community-wide challenge on text-based temporal reasoning in the clinical domain. PMID:23676245

  19. Keyword Extraction from Arabic Legal Texts

    ERIC Educational Resources Information Center

    Rammal, Mahmoud; Bahsoun, Zeinab; Al Achkar Jabbour, Mona

    2015-01-01

    Purpose: The purpose of this paper is to apply local grammar (LG) to develop an indexing system which automatically extracts keywords from titles of Lebanese official journals. Design/methodology/approach: To build LG for our system, the first word that plays the determinant role in understanding the meaning of a title is analyzed and grouped as

  20. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  1. Automatic interpretation of oblique ionograms

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Francis, Matthew; Settimi, Alessandro; Cesaroni, Claudio

    2015-03-01

    We present an algorithm for the identification of trace characteristics of oblique ionograms allowing determination of the Maximum Usable Frequency (MUF) for communication between the transmitter and receiver. The algorithm automatically detects and rejects poor quality ionograms. We performed an exploratory test of the algorithm using data from a campaign of oblique soundings between Rome, Italy (41.90 N, 12.48 E) and Chania, Greece (35.51 N, 24.01 E) and also between Kalkarindji, Australia (17.43 S, 130.81 E) and Culgoora, Australia (30.30 S, 149.55 E). The success of these tests demonstrates the applicability of the method to ionograms recorded by different ionosondes under various helio- and geophysical conditions.

  2. Automatic transmission for a vehicle

    SciTech Connect

    Moroto, S.; Sakakibara, S.

    1986-12-09

    An automatic transmission is described for a vehicle, comprising: a coupling means having an input shaft and an output shaft; a belt type continuously-variable speed transmission system having an input pulley mounted coaxially on a first shaft, an output pulley mounted coaxially on a second shaft and a belt extending between the first and second pulleys to transfer power, each of the first and second pulleys having a fixed sheave and a movable sheave. The first shaft is disposed coaxially with and rotatably coupled with the output shaft of the coupling means, the second shaft being disposed side by side and in parallel with the first shaft; a planetary gear mechanism; a forward-reverse changeover mechanism and a low-high speed changeover mechanism.

  3. Automatic Synthesis Of Greedy Programs

    NASA Astrophysics Data System (ADS)

    Bhansali, Sanjay; Miriyala, Kanth; Harandi, Mehdi T.

    1989-03-01

    This paper describes a knowledge-based approach to automatically generating Lisp programs using the Greedy method of algorithm design. The system's knowledge base is composed of heuristics for recognizing problems amenable to the Greedy method and knowledge about the Greedy strategy itself (i.e., rules for local optimization, constraint satisfaction, candidate ordering, and candidate selection). The system has been able to generate programs for a wide variety of problems, including the job-scheduling problem, the 0-1 knapsack problem, the minimal spanning tree problem, and the problem of arranging files on tape to minimize access time. For the special class of problems called matroids, the synthesized program provides optimal solutions, whereas for most other problems the solutions are near-optimal.
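
    The Greedy strategy the system encodes (candidate ordering, constraint satisfaction, and selection) can be sketched as a generic skeleton. The instantiation below for interval job scheduling is a hypothetical Python example, not the system's Lisp output.

        def greedy(candidates, order_key, feasible, select):
            """Generic Greedy skeleton: order the candidates, then admit
            each one that keeps the partial solution feasible."""
            solution = []
            for c in sorted(candidates, key=order_key):
                if feasible(solution, c):
                    select(solution, c)
            return solution

        # Hypothetical instantiation: interval scheduling -- order jobs by
        # finish time and admit those that do not overlap the last one kept.
        jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]
        kept = greedy(
            jobs,
            order_key=lambda j: j[1],                         # candidate ordering
            feasible=lambda s, j: not s or s[-1][1] <= j[0],  # constraint check
            select=lambda s, j: s.append(j),                  # candidate selection
        )
        print(kept)   # [(1, 4), (5, 7)]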

  4. Automatic Mechetronic Wheel Light Device

    DOEpatents

    Khan, Mohammed John Fitzgerald

    2004-09-14

    A wheel lighting device for illuminating a wheel of a vehicle to increase safety and enhance aesthetics. The device produces the appearance of a "ring of light" on a vehicle's wheels as the vehicle moves. The "ring of light" can automatically change in color and/or brightness according to a vehicle's speed, acceleration, jerk, selection of transmission gears, and/or engine speed. The device provides auxiliary indicator lights by producing light in conjunction with a vehicle's turn signals, hazard lights, alarm systems, and etc. The device comprises a combination of mechanical and electronic components and can be placed on the outer or inner surface of a wheel or made integral to a wheel or wheel cover. The device can be configured for all vehicle types, and is electrically powered by a vehicle's electrical system and/or battery.

  5. Automatic Nanodesign Using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many problems associated with the development of nanotechnology require custom designed molecules. We use genetic graph software, a new development, to automatically evolve molecules of interest when only the requirements are known. Genetic graph software designs molecules, and potentially nanoelectronic circuits, given a fitness function that determines which of two molecules is better. A set of molecules, the first generation, is generated at random and then tested with the fitness function. Subsequent generations are created by randomly choosing two parent molecules with a bias towards high scoring molecules, tearing each molecule in two at random, and mating parts from the mother and father to create two children. This procedure is repeated until a satisfactory molecule is found. An atom pair similarity test is currently used as the fitness function to evolve molecules similar to existing pharmaceuticals.
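
    A toy sketch of the evolutionary loop described above, using string genomes and a positional-match fitness as stand-ins for molecular graphs and the atom-pair similarity test; the mutation step is an added robustness measure not mentioned in the abstract.

        import random
        random.seed(1)

        TARGET = "C1=CC=CC=C1"          # stand-in reference "molecule"
        ALPHABET = "C1=()ON"

        def fitness(g):
            # Fraction of positions matching the target (toy similarity).
            return sum(a == b for a, b in zip(g, TARGET)) / len(TARGET)

        def mate(mother, father):
            # Tear each parent in two at a random point and swap the parts.
            cut = random.randrange(1, len(TARGET))
            return mother[:cut] + father[cut:], father[:cut] + mother[cut:]

        def mutate(g, rate=0.05):
            return "".join(random.choice(ALPHABET) if random.random() < rate
                           else ch for ch in g)

        pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(40)]
        for gen in range(300):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == 1.0:
                break                     # satisfactory "molecule" found
            parents = pop[:20]            # bias toward high-scoring genomes
            children = []
            while len(children) < len(pop):
                children.extend(mate(random.choice(parents),
                                     random.choice(parents)))
            pop = [mutate(c) for c in children]
        print(gen, max(pop, key=fitness))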

  6. Automatic home medical product recommendation.

    PubMed

    Luo, Gang; Thomas, Selena B; Tang, Chunqiang

    2012-04-01

    Web-based personal health records (PHRs) are being widely deployed. To improve PHR's capability and usability, we proposed the concept of intelligent PHR (iPHR). In this paper, we use automatic home medical product recommendation as a concrete application to demonstrate the benefits of introducing intelligence into PHRs. In this new application domain, we develop several techniques to address the emerging challenges. Our approach uses treatment knowledge and nursing knowledge, and extends the language modeling method to (1) construct a topic-selection input interface for recommending home medical products, (2) produce a global ranking of Web pages retrieved by multiple queries, and (3) provide diverse search results. We demonstrate the effectiveness of our techniques using USMLE medical exam cases. PMID:20703712

  7. Automatic Sequencing for Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  8. Computerized automatic tip scanning operation

    SciTech Connect

    Nishikawa, K.; Fukushima, T.; Nakai, H.; Yanagisawa, A.

    1984-02-01

    In BWR nuclear power stations the Traversing Incore Probe (TIP) system is one of the most important components in reactor monitoring and control. In previous TIP systems, however, operators have suffered from the complexity of operation and long operation time required. The system presented in this paper realizes the automatic operation of the TIP system by monitoring and driving it with a process computer. This system significantly reduces the burden on customer operators and improves plant efficiency by simplifying the operating procedure, augmenting the accuracy of the measured data, and shortening operating time. The process computer is one of the PODIA (Plant Operation by Displayed Information Automation) systems. This computer transfers control signals to the TIP control panel, which in turn drives equipment by microprocessor control. The process computer contains such components as the CRT/KB unit, the printer plotter, the hard copier, and the message typers required for efficient man-machine communications. Its operation and interface properties are described.

  9. Automatic Image Interpolation Using Homography

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Leh; Tang, Cheng-Yuan; Hor, Maw-Kae; Liu, Chi-Tsung

    2010-12-01

    While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects in photographs and to automatically patch the vacancy after the unwanted objects are removed. When only a single image is given, and too much information is lost when the unwanted objects are removed, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints from geometry to incorporate multiple images taken from different viewpoints. Our experimental results showed that the proposed techniques effectively reduce the effort of searching for potential patches across multiple input images and select the best patches for the missing regions.
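
    A minimal sketch of the homography-based patch borrowing, assuming OpenCV, a second view of the same (roughly planar) scene, and a binary mask of the removed object; a full system would also discard feature matches inside the mask and blend the seams.

        import cv2
        import numpy as np

        def patch_from_second_view(target, source, mask):
            """Fill the masked region of `target` with pixels borrowed from
            `source`, registered through an estimated homography."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(target, None)
            k2, d2 = orb.detectAndCompute(source, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d2, d1)
            src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = target.shape[:2]
            warped = cv2.warpPerspective(source, H, (w, h))
            out = target.copy()
            out[mask > 0] = warped[mask > 0]   # borrow pixels inside the hole
            return out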

  10. Automatic Detection of Terminology Evolution

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Nina

    As archives contain documents that span a long period of time, the language used to create these documents and the language used for querying the archive can differ. This difference is due to evolution in both terminology and semantics and will cause a significant number of relevant documents to be omitted. A static solution is to use query expansion based on explicit knowledge banks such as thesauri or ontologies. However, as we archive resources with ever more varied terminology, it becomes infeasible to rely on explicit knowledge alone: few or no thesauri cover very domain-specific terminologies or the slang used in blogs, etc. In this Ph.D. thesis we focus on automatically detecting terminology evolution in a completely unsupervised manner, as described in this technical paper.

  11. Automatic thermal switch. [spacecraft applications

    NASA Technical Reports Server (NTRS)

    Cunningham, J. W.; Wing, L. D. (inventors)

    1983-01-01

    An automatic thermal switch to control heat flow includes two thermally conductive plates and a thermally conductive switch saddle pivotally mounted to the first plate. A flexible heat carrier is connected between the switch saddle and the second plate. A phase-change power unit, including a piston coupled to the switch saddle, is in thermal contact with the first thermally conductive plate. A biasing element biases the switch saddle in a predetermined position with respect to the first plate. When the phase-change power unit is actuated by an increase in heat transmitted through the first plate, the piston extends and causes the switch saddle to pivot, thereby varying the thermal conduction between the two plates through the switch saddle and flexible heat carrier. The biasing element, switch saddle, and piston can be arranged to provide either a normally closed or normally opened thermally conductive path between the two plates.

  12. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization is considered of nested loops through the use of known program transformations in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
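
    A small illustration of the blocking the paper derives automatically: the loops of a matrix multiply are strip-mined into fixed-size tiles (with one loop interchange) so each tile stays cache-resident. NumPy slicing stands in for the innermost loops.

        import numpy as np

        def blocked_matmul(A, B, bs=64):
            """Blocked matrix multiply: the i, k, j loops strip-mined by bs."""
            n, m = A.shape
            _, p = B.shape
            C = np.zeros((n, p))
            for i0 in range(0, n, bs):              # strip-mined i loop
                for k0 in range(0, m, bs):          # interchanged k loop
                    for j0 in range(0, p, bs):      # strip-mined j loop
                        C[i0:i0+bs, j0:j0+bs] += (
                            A[i0:i0+bs, k0:k0+bs] @ B[k0:k0+bs, j0:j0+bs])
            return C

        A, B = np.random.rand(200, 300), np.random.rand(300, 100)
        assert np.allclose(blocked_matmul(A, B), A @ B)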

  13. Automatic orientation correction for radiographs

    NASA Astrophysics Data System (ADS)

    Luo, Hui; Luo, Jiebo; Wang, Xiaohui

    2006-03-01

    In picture archiving and communications systems (PACS), images need to be displayed in standardized ways for radiologists' interpretations. However, for most radiographs acquired by computed radiography (CR), digital radiography (DR), or digitized films, the image orientation is undetermined because of the variation of examination conditions and patient situations. To address this problem, an automatic orientation correction method is presented. It first detects the most indicative region for orientation in a radiograph, and then extracts a set of low-level visual features sensitive to rotation from the region. Based on these features, a trained classifier based on a support vector machine is employed to recognize the correct orientation of the radiograph and reorient it to a desired position. A large-scale experiment has been conducted on more than 12,000 radiographs covering a large variety of body parts and projections to validate the method. The overall performance is quite promising, with the success rate of orientation correction reaching 95.2%.
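
    A toy sketch of the classification stage, assuming labeled training radiographs are available: rotation-sensitive features (here, crude row/column intensity profiles, far simpler than the paper's feature set) feed a support vector machine that predicts one of four orientations.

        import numpy as np
        from sklearn.svm import SVC

        def rotation_features(img, bins=16):
            # Coarse row and column intensity profiles.
            rows = [s.mean() for s in np.array_split(img.mean(axis=1), bins)]
            cols = [s.mean() for s in np.array_split(img.mean(axis=0), bins)]
            return np.array(rows + cols)

        # Synthetic "radiographs" with a vertical intensity gradient,
        # presented in all four orientations.
        rng = np.random.default_rng(0)
        base = [rng.random((64, 64)) * np.linspace(0, 1, 64)[:, None]
                for _ in range(50)]
        X = [rotation_features(np.rot90(im, k)) for im in base for k in range(4)]
        y = [90 * k for _ in base for k in range(4)]

        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([rotation_features(np.rot90(base[0], 2))]))  # expected: [180]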

  14. Automatic communication signal monitoring system

    NASA Technical Reports Server (NTRS)

    Bernstein, A. J. (inventor)

    1978-01-01

    A system is presented for automatic monitoring of a communication signal in the RF or IF spectrum utilizing a superheterodyne receiver technique with a VCO to select and sweep the frequency band of interest. A first memory is used to store one band sweep as a reference for continual comparison with subsequent band sweeps. Any deviation of a subsequent band sweep by more than a predetermined tolerance level produces an alarm signal which causes the band sweep data temporarily stored in one of two buffer memories to be transferred to long-term store while the other buffer memory is switched to its store mode to assume the task of temporarily storing subsequent band sweeps.
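
    A minimal sketch of the monitoring logic, with an assumed tolerance value: the first sweep becomes the reference, later sweeps are held in one of two buffers, and a deviation beyond tolerance latches the buffer into long-term store and raises the alarm.

        import numpy as np

        TOL = 3.0                        # assumed deviation tolerance
        reference, active = None, 0
        buffers, archive = [None, None], []

        def process_sweep(sweep):
            global reference, active
            if reference is None:
                reference = sweep             # first band sweep = reference
                return False
            buffers[active] = sweep           # temporary store
            if np.max(np.abs(sweep - reference)) > TOL:
                archive.append(buffers[active])   # transfer to long-term store
                active = 1 - active               # other buffer takes over
                return True                       # alarm signal
            return False

        f = np.linspace(0.0, 1.0, 256)
        quiet = 10 * np.exp(-((f - 0.5) ** 2) / 0.01)
        print(process_sweep(quiet), process_sweep(quiet + 0.5),
              process_sweep(quiet + 5.0))        # False False True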

  15. Automatic electronic fish tracking system

    NASA Technical Reports Server (NTRS)

    Osborne, P. W.; Hoffman, E.; Merriner, J. V.; Richards, C. E.; Lovelady, R. W.

    1976-01-01

    A newly developed electronic fish tracking system to automatically monitor the movements and migratory habits of fish is reported. The system is aimed particularly at studies of effects on fish life of industrial facilities which use rivers or lakes to dump their effluents. Location of fish is acquired by means of acoustic links from the fish to underwater Listening Stations, and by radio links which relay tracking information to a shore-based Data Base. Fish over 4 inches long may be tracked over a 5 x 5 mile area. The electronic fish tracking system provides the marine scientist with electronics which permit studies that were not practical in the past and which are cost-effective compared to manual methods.

  16. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
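
    A numerical sketch of how such a calibration matrix can be generated from paired readings, assuming each applied load affects both balances equally: regress the test-balance readings on the reference-balance readings by least squares (the noise levels below are illustrative only).

        import numpy as np

        rng = np.random.default_rng(0)
        C_true = np.array([[1.02, 0.03],         # unknown test-balance response
                           [-0.01, 0.97]])
        loads = rng.uniform(-100, 100, (40, 2))  # applied load cases

        ref = loads + 0.05 * rng.standard_normal((40, 2))   # reference readings
        test = loads @ C_true.T + 0.2 * rng.standard_normal((40, 2))

        # Least-squares fit: ref @ X ~ test, so X.T estimates C_true.
        X, *_ = np.linalg.lstsq(ref, test, rcond=None)
        print(np.round(X.T, 3))                  # close to C_true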

  17. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  18. Automatic toilet seat lowering apparatus

    DOEpatents

    Guerty, Harold G. (Palm Beach Gardens, FL)

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  19. Automatic AVHRR image navigation software

    NASA Technical Reports Server (NTRS)

    Baldwin, Dan; Emery, William

    1992-01-01

    This is the final report describing the work done on the project entitled Automatic AVHRR Image Navigation Software, funded through NASA-Washington, award NAGW-3224, Account 153-7529. At the onset of this project, we had developed image navigation software capable of producing geo-registered images from AVHRR data. The registrations were highly accurate but required a priori knowledge of the spacecraft's axis alignment deviations, commonly known as attitude. The three angles needed to describe the attitude are called roll, pitch, and yaw, and are the components of the deviations in the along-scan, along-track, and about-center directions. The inclusion of the attitude corrections in the navigation software results in highly accurate georegistrations; however, the computation of the angles is very tedious and involves human interpretation at several steps. The technique also requires easily identifiable ground features, which may not be available due to cloud cover or for ocean data. The current project was motivated by the need for a navigation system which was automatic and did not require human intervention or ground control points. The first step in creating such a system must be the ability to parameterize the spacecraft's attitude. The immediate goal of this project was to study the attitude fluctuations and determine if they displayed any systematic behavior which could be modeled or parameterized. We chose a period in 1991-1992 to study the attitude of the NOAA 11 spacecraft using data from the TIROS receiving station at the Colorado Center for Astrodynamic Research (CCAR) at the University of Colorado.

  20. Video summarization based tele-endoscopy: a service to efficiently manage visual data generated during wireless capsule endoscopy procedure.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use. More importantly, WCE combined with mobile computing ensures rapid transmission of diagnostic data to hospitals and enables off-site senior gastroenterologists to offer timely decision-making support. However, during the WCE process, video data are produced in huge amounts, but only a limited amount of the data is actually useful for diagnosis. Sharing and analyzing this video data becomes a challenging task due to constraints such as limited memory, energy, and communication capability. In order to facilitate efficient WCE data collection and browsing tasks, we present a video summarization-based tele-endoscopy service that estimates the semantically relevant video frames from the perspective of gastroenterologists. For this purpose, image moments, curvature, and multi-scale contrast are computed and fused to obtain the saliency map of each frame. This saliency map is used to select keyframes. The proposed tele-endoscopy service selects keyframes based on their relevance to the disease diagnosis. This ensures the sending of diagnostically relevant frames to the gastroenterologist instead of all the data, thus saving transmission costs and bandwidth. The proposed framework also saves storage costs as well as the precious time of doctors browsing patients' information. The qualitative and quantitative results are encouraging and show that the proposed service provides video keyframes to the gastroenterologists without discarding important information. PMID:25037715
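
    A much-reduced sketch of the keyframe selection, with simplified stand-ins for the paper's cues: per-frame saliency fuses multi-scale contrast with an intensity-moment term, and the top-scoring frames are kept subject to a minimum temporal gap.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def frame_saliency(frame):
            contrast = sum(np.abs(frame - gaussian_filter(frame, s)).mean()
                           for s in (1, 4, 16))       # multi-scale contrast
            return contrast + frame.std()             # fused with a moment term

        def select_keyframes(frames, k=3, min_gap=5):
            order = np.argsort([-frame_saliency(f) for f in frames])
            chosen = []
            for idx in order:
                if all(abs(int(idx) - c) >= min_gap for c in chosen):
                    chosen.append(int(idx))
                if len(chosen) == k:
                    break
            return sorted(chosen)

        # Toy "video": every 30th frame is brighter and more contrasty.
        video = [np.random.rand(64, 64) * (2 if t % 30 == 0 else 1)
                 for t in range(90)]
        print(select_keyframes(video))                # [0, 30, 60]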

  1. Neutron and X-Ray Effects on Small Intestine Summarized by Using a Mathematical Model or Paradigm

    NASA Astrophysics Data System (ADS)

    Carr, K. E.; McCullough, J. S.; Nunn, S.; Hume, S. P.; Nelson, A. C.

    1991-03-01

    The responses of intestinal tissues to ionizing radiation can be described by comparing irradiated cell populations qualitatively or quantitatively with corresponding controls. This paper describes quantitative data obtained from resin-embedded sections of neutron-irradiated mouse small intestine at different times after treatment. Information is collected by counting cells or structures present per complete circumference. The data are assessed by using standard statistical tests, which show that early mitotic arrest precedes changes in goblet, absorptive, endocrine and stromal cells and a decrease in crypt numbers. The data can also produce ratios of irradiated: control figures for cells or structural elements. These ratios, along with tissue area measurements, can be used to summarize the structural damage as a composite graph and table, including a total figure, known as the Morphological Index. This is used to quantify the temporal response of the wall as a whole and to compare the effects of different qualities of radiation, here X-ray and cyclotron-produced neutron radiations. It is possible that such analysis can be used predictively along with other reference data to identify the treatment, dose and time required to produce observed tissue damage.

  2. Reading and Writing to Learn in Secondary Education: Online Processing Activity and Written Products in Summarizing and Synthesizing Tasks

    ERIC Educational Resources Information Center

    Mateos, Mar; Martin, Elena; Villalon, Ruth; Luna, Maria

    2008-01-01

    The research reported here employed a multiple-case study methodology to assess the online cognitive and metacognitive activities of 15-year-old secondary students as they read informational texts and wrote a new text in order to learn, and the relation of these activities to the written products they were asked to generate. To investigate the

  3. Automatic Figure Classification in Bioscience Literature

    PubMed Central

    Kim, Daehyun; Ramesh, Balaji Polepalli; Yu, Hong

    2011-01-01

    Millions of figures appear in biomedical articles, and it is important to develop an intelligent figure search engine to return relevant figures based on user entries. In this study we report a figure classifier that automatically classifies biomedical figures into five predefined figure types: Gel-image, Image-of-thing, Graph, Model, and Mix. The classifier explored rich image features and integrated them with text features. We performed feature selection and explored different classification models, including a rule-based figure classifier, a supervised machine-learning classifier, and a multi-model classifier, the latter of which integrated the first two classifiers. Our results show that feature selection improved figure classification and the novel image features we explored were the best among image features that we have examined. Our results also show that integrating text and image features achieved better performance than using either of them individually. The best system is a multi-model classifier which combines the rule-based hierarchical classifier and a support vector machine (SVM) based classifier, achieving a 76.7% F1-score for five-type classification. We demonstrated our system at http://figureclassification.askhermes.org/. PMID:21645638

  4. Automated Extraction of Statistical Expressions from Text for Information Compilation

    NASA Astrophysics Data System (ADS)

    Mori, Tatsunori; Fujioka, Atsushi; Murata, Ichiro

    In order to summarize and visualize trend information in documents, we need a method to automatically extract statistical information from them. In this paper, we investigate the automated extraction of statistical information, especially expressions that name statistical information. First, we classify those expressions into three categories, namely the action type, the attribute type, and the definition type. Second, their internal structures are examined. Based on these internal structures, we define an XML tag set to annotate each part of a name of statistical information. As a feasibility study, we conducted an experiment in which parts of names of statistics were extracted using a standard chunking algorithm. The experimental results show that the parts of names of statistics defined by the tag set can be extracted with good accuracy when a training corpus from a domain similar to the target documents is available. Otherwise, extraction accuracy degrades.

  5. Terminologies for text-mining; an experiment in the lipoprotein metabolism domain

    PubMed Central

    Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael

    2008-01-01

    Background The engineering of ontologies, especially with a view to text-mining use, is still a new research field. There does not yet exist a well-defined theory and technology for ontology construction. Many of the ontology design steps remain manual and are based on personal experience and intuition. However, there exist a few efforts on automatic construction of ontologies in the form of extracted lists of terms and relations between them. Results We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text-mining. We compare the manually created ontology terms with the terminology automatically derived by four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms. For the top 1000 terms the best method still generates 51% relevant terms. A corpus of 3066 documents contains 53% of the LMO terms, and 38% of the terms can be generated with one of the methods. Conclusions Given high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed in a semi-automatic way, taking ATR results as input and following the guidelines we described. Availability The TFIDF term recognition is available as a Web Service. PMID:18460175
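
    A small sketch of TFIDF-style term recognition on a toy corpus, using scikit-learn: candidate n-grams are ranked by their maximum TF-IDF score across documents, a simplification of the ATR methods compared in the paper.

        from sklearn.feature_extraction.text import TfidfVectorizer

        corpus = [
            "low density lipoprotein binds to the ldl receptor",
            "reverse cholesterol transport moves cholesterol to the liver",
            "the ldl receptor mediates lipoprotein uptake",
        ]

        vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
        tfidf = vec.fit_transform(corpus)
        scores = tfidf.max(axis=0).toarray().ravel()   # best score per term
        terms = vec.get_feature_names_out()
        for score, term in sorted(zip(scores, terms), reverse=True)[:5]:
            print(f"{score:.2f}  {term}")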

  6. Annual Report: Automatic Informative Abstracting and Extracting.

    ERIC Educational Resources Information Center

    Earl, L. L.; And Others

    The development of automatic indexing, abstracting, and extracting systems is investigated. Part I describes the development of tools for making syntactic and semantic distinctions of potential use in automatic indexing and extracting. One of these tools is a program for syntactic analysis (i.e., parsing) of English, the other is a dictionary of

  7. ANNUAL REPORT-AUTOMATIC INDEXING AND ABSTRACTING.

    ERIC Educational Resources Information Center

    Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.

    THE INVESTIGATION IS CONCERNED WITH THE DEVELOPMENT OF AUTOMATIC INDEXING, ABSTRACTING, AND EXTRACTING SYSTEMS. BASIC INVESTIGATIONS IN ENGLISH MORPHOLOGY, PHONETICS, AND SYNTAX ARE PURSUED AS NECESSARY MEANS TO THIS END. IN THE FIRST SECTION THE THEORY AND DESIGN OF THE "SENTENCE DICTIONARY" EXPERIMENT IN AUTOMATIC EXTRACTION IS OUTLINED. SOME OF

  8. Automatic star-horizon angle measurement system

    NASA Technical Reports Server (NTRS)

    Koerber, K.; Koso, D. A.; Nardella, P. C.

    1969-01-01

    An automatic star-horizon angle measuring aid for general navigational use incorporates an Apollo-type sextant. The eyepiece of the sextant is replaced with two light detectors and appropriate circuitry. The device automatically determines the angle between a navigational star and a unique point on the earth's horizon as seen from a spacecraft.

  9. Automatic Grading of Spreadsheet and Database Skills

    ERIC Educational Resources Information Center

    Kovacic, Zlatko J.; Green, John Steven

    2012-01-01

    Growing enrollment in distance education has increased student-to-lecturer ratios and, therefore, increased the workload of the lecturer. This growing enrollment has resulted in mounting efforts to develop automatic grading systems in an effort to reduce this workload. While research in the design and development of automatic grading systems has a

  10. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 47 Telecommunication, Volume 5, revised as of 2013-10-01. Section 87.219 Automatic operations. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, AVIATION SERVICES, Aeronautical Advisory Stations (Unicoms). (a) A station operator need not...

  11. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 47 Telecommunication, Volume 5, revised as of 2012-10-01. Section 87.219 Automatic operations. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, AVIATION SERVICES, Aeronautical Advisory Stations (Unicoms). (a) A station operator need not...

  12. 47 CFR 87.219 - Automatic operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Title 47 Telecommunication, Volume 5, revised as of 2014-10-01. Section 87.219 Automatic operations. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, AVIATION SERVICES, Aeronautical Advisory Stations (Unicoms). (a) A station operator need not...

  13. Automaticity and Attentional Processes in Aging.

    ERIC Educational Resources Information Center

    Madden, David J.; Mitchell, David B.

    In recent research, two qualitatively different classes of mental operations have been identified. The performance of one type of cognitive task requires attention, in the sense of mental effort, for its execution, while the second type can be performed automatically, independent of attentional control. Further research has shown that automatic

  14. Automatic Item Generation of Probability Word Problems

    ERIC Educational Resources Information Center

    Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina

    2009-01-01

    Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems

  15. Automatic Contour Tracking in Ultrasound Images

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a new automatic contour tracking system, EdgeTrak, for the ultrasound image sequences of human tongue is presented. The images are produced by a head and transducer support system (HATS). The noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surfaces. In

  17. Hierarchical Concept Indexing of Full-Text Documents in the Unified Medical Language System Information Sources Map.

    ERIC Educational Resources Information Center

    Wright, Lawrence W.; Nardini, Holly K. Grossetta; Aronson, Alan R.; Rindflesch, Thomas C.

    1999-01-01

    Describes methods for applying natural-language processing for automatic concept-based indexing of full text and methods for exploiting the structure and hierarchy of full-text documents to a large collection of full-text documents drawn from the Health Services/Technology Assessment Text database at the National Library of Medicine. Examines how

  19. Summarizing results on the performance of a selective set of atmospheric plasma jets for separation of photons and reactive particles

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Jarzina, Fabian; Lackmann, Jan-Wilm; Golda, Judith; Layes, Vincent; Schulz-von der Gathen, Volker; Bandow, Julia Elisabeth; Benedikt, Jan

    2015-11-01

    A microscale atmospheric-pressure plasma jet is a remote plasma jet in which plasma-generated reactive particles and photons are involved in substrate treatment. Here, we summarize our efforts to develop and characterize a particle- or photon-selective set of otherwise identical jets. In that way, the reactive species or photons can be used separately or in combination to study their isolated or combined effects and to test whether the effects are additive or synergistic. The final version of the set of three jets (particle-jet, photon-jet, and combined jet) is introduced. This final set achieves the highest reproducibility of the photon and particle fluxes and avoids turbulent gas flow; the fluxes of the selected plasma-emitted components are almost identical across the jets, while the other component is effectively blocked, as verified by optical emission spectroscopy and mass spectrometry. Schlieren imaging and a fluid dynamics simulation show the stability of the gas flow. The performance of these selective jets is demonstrated with the example of the treatment of E. coli bacteria with the different components emitted by a He-only, a He/N2, and a He/O2 plasma. Additionally, vacuum UV photon spectra down to a wavelength of 50 nm can be measured with the photon-jet, and a relative comparison of spectral intensities among the different gas mixtures is reported here. The results show that the vacuum UV photons can lead to the inactivation of the E. coli bacteria.

  20. Mobile Text Messaging for Health: A Systematic Review of Reviews

    PubMed Central

    Hall, Amanda K.; Cole-Lewis, Heather; Bernhardt, Jay M.

    2015-01-01

    The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. PMID:25785892

  2. Semi-automatic object geometry estimation for image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision, such as the Hough transform, the bilateral filter, and connected component analysis, are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper. Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.

  3. Semi-automatic development of Payload Operations Control Center software

    NASA Technical Reports Server (NTRS)

    Ballin, Sidney

    1988-01-01

    This report summarizes the current status of CTA's investigation of methods and tools for automating the software development process in NASA Goddard Space Flight Center, Code 500. The emphasis in this effort has been on methods and tools in support of software reuse. The most recent phase of the effort has been a domain analysis of Payload Operations Control Center (POCC) software. This report summarizes the results of the domain analysis, and proposes an approach to semi-automatic development of POCC Application Processor (AP) software based on these results. The domain analysis enabled us to abstract, from specific systems, the typical components of a POCC AP. We were also able to identify patterns in the way one AP might be different from another. These two perspectives--aspects that tend to change from AP to AP, and aspects that tend to remain the same--suggest an overall approach to the reuse of POCC AP software. We found that different parts of an AP require different development technologies. We propose a hybrid approach that combines constructive and generative technologies. Constructive methods emphasize the assembly of pre-defined reusable components. Generative methods provide for automated generation of software from specifications in a very-high-level language (VHLL).

  4. A unified framework for multioriented text detection and recognition.

    PubMed

    Yao, Cong; Bai, Xiang; Liu, Wenyu

    2014-11-01

    High level semantics embodied in scene texts are both rich and clear and thus can serve as important cues for a wide range of vision applications, for instance, image understanding, image indexing, video search, geolocation, and automatic navigation. In this paper, we present a unified framework for text detection and recognition in natural images. The contributions of this paper are threefold: 1) text detection and recognition are accomplished concurrently using exactly the same features and classification scheme; 2) in contrast to methods in the literature, which mainly focus on horizontal or near-horizontal texts, the proposed system is capable of localizing and reading texts of varying orientations; and 3) a new dictionary search method is proposed, to correct the recognition errors usually caused by confusions among similar yet different characters. As an additional contribution, a novel image database with texts of different scales, colors, fonts, and orientations in diverse real-world scenarios, is generated and released. Extensive experiments on standard benchmarks as well as the proposed database demonstrate that the proposed system achieves highly competitive performance, especially on multioriented texts. PMID:25203989
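
    A minimal sketch of the dictionary-search idea, with difflib similarity standing in for the paper's confusion-aware distance (which explicitly models look-alike characters such as 'O' and '0'):

        import difflib

        DICTIONARY = ["coffee", "office", "street", "market", "hotel"]

        def correct(word, cutoff=0.6):
            """Snap a raw recognition result to its closest dictionary entry."""
            hits = difflib.get_close_matches(word.lower(), DICTIONARY,
                                             n=1, cutoff=cutoff)
            return hits[0] if hits else word

        for raw in ["C0FFEE", "0ffice", "stre3t"]:
            print(raw, "->", correct(raw))   # coffee, office, street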

  5. Comprehending Technical Texts: Predicting and Defining Unfamiliar Terms

    PubMed Central

    Elhadad, Noemie

    2006-01-01

    We investigate how to improve access to medical literature for health consumers. Our focus is on medical terminology. We present a method to predict automatically, in a given text, which medical terms are unlikely to be understood by a lay reader. Our method, which is linguistically motivated and fully unsupervised, relies on how common a specific term is in texts that we already know are familiar to a lay reader. Once a term is identified as unfamiliar, an appropriate definition is mined from the Web to be provided to the reader. Our experiments show that the prediction and the addition of definitions significantly improve lay readers' comprehension of sentences containing technical medical terms. PMID:17238339
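
    The familiarity test lends itself to a very small sketch: terms whose frequency in a lay-readable corpus falls below a threshold are flagged for definition mining (toy counts and cutoff; the paper's corpora and decision rule are richer).

        from collections import Counter

        # Word counts from a corpus assumed familiar to lay readers.
        lay_counts = Counter({"blood": 900, "pressure": 850, "heart": 800,
                              "hypertension": 40, "myocardial": 2})
        THRESHOLD = 10          # assumed familiarity cutoff

        def unfamiliar_terms(sentence):
            """Flag terms rare in the lay corpus as candidates for
            an automatically mined definition."""
            return [w for w in sentence.lower().split()
                    if lay_counts[w] < THRESHOLD]

        print(unfamiliar_terms("myocardial stenosis raises blood pressure"))
        # ['myocardial', 'stenosis']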

  6. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
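
    A much-reduced sketch of affect propagation for a single affect category, assuming simple (subject, trigger, object) triples: seed scores attach to entity pairs and then spread one hop with decay. The actual system covers 22 affect categories and far richer extraction.

        from collections import defaultdict

        SEEDS = {"praised": 1.0, "condemned": -1.0, "betrayed": -0.8}
        triples = [("senator", "praised", "treaty"),
                   ("union", "condemned", "senator"),
                   ("ally", "betrayed", "union")]

        def propagate(triples, decay=0.5):
            affect = defaultdict(float)
            for subj, verb, obj in triples:          # hop 1: seed -> entity pair
                affect[(subj, obj)] += SEEDS.get(verb, 0.0)
            for subj, _, obj in triples:             # hop 2: entity -> entity
                for (a, b), s in list(affect.items()):
                    if b == subj:
                        affect[(a, obj)] += decay * s
            return dict(affect)

        for pair, score in propagate(triples).items():
            print(pair, round(score, 2))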

  7. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Title 47 Telecommunication, Volume 2, revised as of 2013-10-01. Section 25.281 Automatic Transmitter Identification System (ATIS). ... CARRIER SERVICES, SATELLITE COMMUNICATIONS, Technical Operations. ... identified through the use of an automatic transmitter identification system as specified below....

  8. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Title 47 Telecommunication, Volume 2, revised as of 2012-10-01. Section 25.281 Automatic Transmitter Identification System (ATIS). ... CARRIER SERVICES, SATELLITE COMMUNICATIONS, Technical Operations. ... identified through the use of an automatic transmitter identification system as specified below....

  9. 47 CFR 25.281 - Automatic Transmitter Identification System (ATIS).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 47 Telecommunication, Volume 2, revised as of 2011-10-01. Section 25.281 Automatic Transmitter Identification System (ATIS). ... CARRIER SERVICES, SATELLITE COMMUNICATIONS, Technical Operations. ... identified through the use of an automatic transmitter identification system as specified below....

  10. Semi Automatic Ontology Instantiation in the domain of Risk Management

    NASA Astrophysics Data System (ADS)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text, in the domain of Risk Management. The method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which makes it portable across the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a Generic Domain Ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  11. Control systems for automatic transmissions

    SciTech Connect

    Yamamoto, K.; Baba, F.

    1988-11-29

    This patent describes a control system for an automatic transmission employed in a vehicle comprising: a torque converter coupled with an output shaft of an engine, a power transmitting gear arrangement disposed at an output end of the torque converter, speed change means for changing over power transmitting paths from one to another in the power transmitting gear arrangement to give rise to speed change, a valve arrangement for controlling an operation fluid to be supplied to and drained from the speed change means, revolving speed sensing means for detecting the revolving speed of one of the input and output portions of the power transmitting gear arrangement when shifting down in the power transmitting gear arrangement from a high speed to a low speed with which an engine brake effect is to be obtained; and control means for directly selecting a first low speed with which the engine brake effect can be obtained as the target low speed of the shifting-down, based on the revolving speed detected by the revolving speed sensing means...

  12. Automatic transmission system for vehicles

    SciTech Connect

    Takefuta, H.

    1987-02-24

    An automatic transmission system is described for vehicles having a friction clutch coupled to an internal combustion engine, a speed-change-gear type transmission coupled to the clutch, a first actuator for operating the clutch in response to an electric signal, and a second actuator for operating the transmission in response to an electric signal. A means is also included for producing at least one condition data indicative of the condition of operation of the vehicle and a control means responsive to at least the condition data for controlling the operation of the first and second actuators in order to carry out the gear change operation of the transmission. The control means includes: (1) a storing means for storing a first data representing a first gear change map showing gear change characteristics for obtaining economical running and a second data representing a second gear change map showing gear change characteristics for obtaining high-power-output running; (2) a signal generating means which has an operation lever movable along a predetermined gear shift pattern used for manual operation and generates a command signal indicative of the position of the operation lever on the gear shift pattern; and (3) means responsive to the command signal and the condition data for controlling the first and second actuators so as to carry out a gear change operation in one mode among a first control mode in which the transmission is shifted to the gear position corresponding to the position of the operation lever.

  13. Automatic segmentation of psoriasis lesions

    NASA Astrophysics Data System (ADS)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI scores for the assessment of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. In order to segment whole lesion areas, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; a feature of image roughness is defined here so that scaling can be easily separated from normal skin. In the end, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the data set offered by Union Hospital, more than 90% of the images can be segmented accurately.
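
    A compact sketch of the classification stage on synthetic data: each pixel is described by its value plus a local roughness measure (local standard deviation), echoing the color/texture combination above, and a random forest separates a rough "scaling" patch from "normal skin".

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.ensemble import RandomForestClassifier

        def pixel_features(img):
            mean = uniform_filter(img, size=5)
            sq_mean = uniform_filter(img ** 2, size=5)
            rough = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))  # roughness
            return np.stack([img.ravel(), rough.ravel()], axis=1)

        rng = np.random.default_rng(0)
        img = rng.normal(0.4, 0.02, (64, 64))                 # "normal skin"
        img[20:40, 20:40] += rng.normal(0.2, 0.15, (20, 20))  # rough "scaling"
        labels = np.zeros((64, 64), dtype=int)
        labels[20:40, 20:40] = 1

        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(pixel_features(img), labels.ravel())
        pred = clf.predict(pixel_features(img)).reshape(64, 64)
        print((pred == labels).mean())    # training accuracy, near 1.0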

  14. Automatic transmission for motor vehicles

    SciTech Connect

    Miura, M.; Sakakibara, S.

    1989-06-27

    An automatic transmission for a motor vehicle is described, comprising: a transmission housing; a hydraulic torque converter having rotational axes, an input shaft, an output shaft and a direct coupling clutch for directly coupling the input shaft to the output shaft; an auxiliary transmission mechanism provided coaxially with the hydraulic torque converter and having an input shaft, an output shaft with an input end and an output end, and an overdrive mechanism of planetary gear type having a reduction ratio smaller than 1, the input shaft and the output shaft of the auxiliary transmission being located close to and on the side of the hydraulic torque converter with respect to the auxiliary transmission, respectively, and being coupled with a planetary gear carrier and a ring gear of the overdrive mechanism, respectively, a one-way clutch being provided between the planetary gear carrier and a sun gear of the overdrive mechanism, a clutch being provided between the planetary gear carrier and a position radially outward of the one-way clutch for engaging and disengaging the planetary gear carrier and the sun gear, a brake being provided between the transmission housing and the sun gear and positioned radially outward of the clutch for controlling engagement of the sun gear with a stationary portion of the transmission housing, and the output end of the output shaft being disposed between the auxiliary transmission mechanism and the hydraulic torque converter.

  15. Actuator for automatic cruising system

    SciTech Connect

    Suzuki, K.

    1989-03-07

    An actuator for an automatic cruising system is described, comprising: a casing; a control shaft provided in the casing for rotational movement; a control motor for driving the control shaft; an input shaft; an electromagnetic clutch and a reduction gear which are provided between the control motor and the control shaft; and an external linkage mechanism operatively connected to the control shaft; wherein the reduction gear is a type of Ferguson's mechanical paradox gear having a pinion mounted on the input shaft always connected to the control motor; a planetary gear meshing with the pinion so as to revolve around the pinion; a static internal gear meshing with the planetary gear and connected with the electromagnetic clutch for movement to a position restricting rotation of the static internal gear; and a rotary internal gear fixed on the control shaft and meshed with the planetary gear, the rotary internal gear having a number of teeth slightly different from a number of teeth of the static internal gear; and the electromagnetic clutch has a tubular electromagnetic coil coaxially provided around the input shaft and an engaging means for engaging and disengaging with the static internal gear in accordance with on-off operation of the electromagnetic coil.
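    The usefulness of a Ferguson's mechanical paradox gear in such an actuator is the very large reduction obtained from a small tooth-count difference. As a first-order sketch with hypothetical tooth counts $N_s$ (static internal gear) and $N_r$ (rotary internal gear): when the planetary gear revolves once around the pinion with the static gear held, the rotary gear advances by only the tooth difference, so

        \omega_{\mathrm{out}} = \omega_{\mathrm{carrier}} \left( 1 - \frac{N_s}{N_r} \right) = \omega_{\mathrm{carrier}} \, \frac{N_r - N_s}{N_r},

    and a one-tooth difference such as $N_s = 100$, $N_r = 101$ yields a 101:1 reduction from the planet's revolution to the control shaft, before counting the motor-to-pinion stage.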

  16. Automatic locking orthotic knee device

    NASA Technical Reports Server (NTRS)

    Weddendorf, Bruce C. (Inventor)

    1993-01-01

    An articulated tang in clevis joint for incorporation in newly manufactured conventional strap-on orthotic knee devices or for replacing such joints in conventional strap-on orthotic knee devices is discussed. The instant tang in clevis joint allows the user the freedom to extend and bend the knee normally when no load (weight) is applied to the knee and to automatically lock the knee when the user transfers weight to the knee, thus preventing a damaged knee from bending uncontrollably when weight is applied to the knee. The tang in clevis joint of the present invention includes first and second clevis plates, a tang assembly and a spacer plate secured between the clevis plates. Each clevis plate includes a bevelled serrated upper section. A bevelled shoe is secured to the tang in close proximity to the bevelled serrated upper section of the clevis plates. A coiled spring mounted within an oblong bore of the tang normally urges the shoes secured to the tang out of engagement with the serrated upper section of each clevis plate to allow rotation of the tang relative to the clevis plates. When weight is applied to the joint, the load compresses the coiled spring, and the serrations on each clevis plate dig into the bevelled shoes secured to the tang to prevent relative movement between the tang and clevis plates. A shoulder is provided on the tang and the spacer plate to prevent overextension of the joint.

  17. Automatic Systems For Spectroradiometric Measurements

    NASA Astrophysics Data System (ADS)

    Goebel, David G.; Schneider, William E.

    1982-02-01

    Traditionally, determining the spectroradiometric output of a light source has been a long, tedious, rather time-consuming process. Many hours could be spent in calibrating the spectroradiometer for spectral response, collecting data on the source under investigation, and properly reducing the data. Due to rapid advances in electronic technology, it is now possible to obtain an automated spectroradiometer system at a modest cost which can be interfaced to a relatively inexpensive programmable desktop calculator or microcomputer to provide a system which can automatically conduct a spectral scan and provide a real-time printout or display of spectral output under program control. This paper will cover the basic parameters to consider when selecting a spectroradiometer and also the features a spectroradiometer should have in order to be completely automated and capable of operating under programmed control. In addition, requirements for measuring a wide range of light sources will be briefly described, from continuous or steady-state sources ranging from sunlight to starlight, to pulsed sources such as xenon flash lamps and pulsed LEDs. Finally, some of the present commercially available automated spectroradiometer systems will be described.
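    The paper predates any standard instrument-control API, so the following Python loop is purely illustrative: the mono/detector interface is hypothetical, and only the flow (step the wavelength, read, correct by the spectral-response calibration, print in real time) follows the description above.

        def spectral_scan(mono, detector, response_cal, start_nm=300, stop_nm=800, step_nm=5):
            """Step the monochromator, read the detector, apply the response correction."""
            spectrum = {}
            for wl in range(start_nm, stop_nm + 1, step_nm):
                mono.set_wavelength(wl)                  # hypothetical instrument call
                raw = detector.read()                    # hypothetical instrument call
                spectrum[wl] = raw / response_cal[wl]    # counts -> spectral irradiance
                print(f"{wl} nm: {spectrum[wl]:.4g}")    # real-time printout
            return spectrum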

  18. Automatic Weather Station (AWS) Lidar

    NASA Technical Reports Server (NTRS)

    Rall, Jonathan A.R.; Abshire, James B.; Spinhirne, James D.; Smith, David E. (Technical Monitor)

    2000-01-01

    An autonomous, low-power atmospheric lidar instrument is being developed at NASA Goddard Space Flight Center. This compact, portable lidar will operate continuously in a temperature-controlled enclosure, charge its own batteries through a combination of a small rugged wind generator and solar panels, and transmit its data from remote locations to ground stations via satellite. A network of these instruments will be established by co-locating them at remote Automatic Weather Station (AWS) sites in Antarctica under the auspices of the National Science Foundation (NSF). The NSF Office of Polar Programs provides support to place the weather stations in remote areas of Antarctica in support of meteorological research and operations. The AWS meteorological data will directly benefit the analysis of the lidar data, while a network of ground-based atmospheric lidars will provide knowledge regarding the temporal evolution and spatial extent of Type Ia polar stratospheric clouds (PSC). These clouds play a crucial role in the annual austral springtime destruction of stratospheric ozone over Antarctica, i.e. the ozone hole. In addition, the lidar will monitor and record the general atmospheric conditions (transmission and backscatter) of the overlying atmosphere, which will benefit the Geoscience Laser Altimeter System (GLAS). Prototype lidar instruments have been deployed to the Amundsen-Scott South Pole Station (1995-96, 2000) and to an Automated Geophysical Observatory site (AGO 1) in January 1999. We report on data acquired with these instruments, instrument performance, and anticipated performance of the AWS Lidar.

  19. Ekofisk automatic GPS subsidence measurements

    SciTech Connect

    Mes, M.J.; Landau, H.; Luttenberger, C.

    1996-10-01

    A fully automatic GPS satellite-based procedure for the reliable measurement of the subsidence of several platforms in almost real time is described. Measurements are made continuously on platforms in the North Sea Ekofisk Field area. The procedure also yields rate measurements, which are essential for confirming platform safety, planning remedial work, and verifying subsidence models. GPS measurements are more attractive than seabed pressure-gauge-based platform subsidence measurements: they are much cheaper to install and maintain and are not subject to gauge drift. The GPS measurements were coupled to oceanographic quantities such as the platform deck clearance, which leads to less complex offshore survey procedures. Ekofisk is an oil and gas field in the southern portion of the Norwegian North Sea. Late in 1984, it was noticed that the Ekofisk platform decks were closer to the sea surface than when the platforms were installed; subsidence was the only logical explanation. After the subsidence phenomenon was recognized, an accurate measurement method was needed to track the progression of subsidence and the associated subsidence rate. One available system for which no further development was needed was the NAVSTAR GPS; measurements started in March 1985.
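    In the simplest view, the subsidence rate the record mentions is the slope of the continuous GPS height series; a minimal sketch, with invented numbers:

        import numpy as np

        def subsidence_rate(t_years, height_m):
            """Linear least-squares fit; the slope is the subsidence rate in m/yr."""
            slope, _intercept = np.polyfit(t_years, height_m, 1)
            return slope  # negative slope = subsidence

        t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # years since first epoch
        h = np.array([10.00, 9.95, 9.91, 9.86, 9.81])   # deck height above datum (m), invented
        print(f"rate: {subsidence_rate(t, h) * 100:.0f} cm/yr")  # about -19 cm/yr here

    An operational system would of course filter outliers and correct for tides and antenna offsets before fitting, but the rate estimate itself reduces to a regression of this kind.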

  20. Automatic segmentation of the colon

    NASA Astrophysics Data System (ADS)

    Wyatt, Christopher L.; Ge, Yaorong; Vining, David J.

    1999-05-01

    Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach to improve lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
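    The unsupervised first stage, isolating gas-filled regions before the colon itself is picked out, can be sketched in a few lines; the HU threshold and size filter below are common choices for CT air segmentation, not the authors' exact parameters.

        import numpy as np
        from scipy import ndimage

        def candidate_lumen_labels(ct_hu, air_threshold=-800, min_voxels=10000):
            """Label connected air-like regions as candidate colon lumen segments."""
            air = ct_hu < air_threshold                      # gas is strongly negative in HU
            labels, n = ndimage.label(air)
            sizes = ndimage.sum(air, labels, range(1, n + 1))
            keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
            # Further rules (body-interior check, shape, location) would reject
            # the lungs, stomach, small bowel, and the air outside the patient.
            return labels, keep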