Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath
2009-08-01
An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) it was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted into brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
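The selection criterion above, the gap statistic, compares within-cluster dispersion against a null reference distribution. Below is a minimal sketch of that criterion for choosing a cluster count, assuming scikit-learn's KMeans is available; the paper's full method also searches feature subsets, which this sketch omits, and the data are synthetic illustration values.

```python
# Minimal sketch of the gap statistic (Tibshirani et al. style) for picking k.
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k, n_ref=20, seed=0):
    """Gap(k) = E[log W_ref(k)] - log W(k), W = within-cluster dispersion."""
    rng = np.random.default_rng(seed)

    def log_dispersion(data):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
        return np.log(km.inertia_ + 1e-12)

    # Reference sets: uniform samples over the bounding box of X.
    lo, hi = X.min(axis=0), X.max(axis=0)
    ref = [log_dispersion(rng.uniform(lo, hi, size=X.shape)) for _ in range(n_ref)]
    return np.mean(ref) - log_dispersion(X)

# Two well-separated synthetic populations; the gap should peak at k = 2.
X = np.vstack([np.random.randn(50, 3), np.random.randn(50, 3) + 4.0])
best_k = max(range(2, 6), key=lambda k: gap_statistic(X, k))
print(best_k)
```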
Generalized minimum dominating set and application in automatic text summarization
NASA Astrophysics Data System (ADS)
Xu, Yi-Zhi; Zhou, Hai-Jun
2016-03-01
For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all edge weights being equal to the threshold value. In the present paper we treat the generalized MDS problem with replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application, we consider the problem of extracting a set of sentences that best summarizes a given input text document. We carry out a preliminary test of this statistical physics-inspired method on the automatic text summarization problem.
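As a concrete reading of the definition above, the following sketch checks whether a candidate vertex set satisfies the generalized domination condition; the dict-of-dicts graph encoding is an assumption for illustration, not part of the paper.

```python
# Minimal sketch of the generalized-MDS feasibility condition: every vertex
# outside D must have total edge weight into D of at least the threshold.
def is_generalized_dominating_set(weights, D, threshold):
    """weights[u][v] is the weight of edge (u, v); D is a candidate vertex set."""
    D = set(D)
    for u in weights:
        if u in D:
            continue
        covered = sum(w for v, w in weights[u].items() if v in D)
        if covered < threshold:
            return False
    return True

# With all edge weights equal to the threshold, the condition reduces to the
# conventional MDS requirement of at least one neighbor in D.
g = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
assert is_generalized_dominating_set(g, {1}, threshold=1.0)
```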
On the Application of Syntactic Methodologies in Automatic Text Analysis.
ERIC Educational Resources Information Center
Salton, Gerard; And Others
1990-01-01
Summarizes various linguistic approaches proposed for document analysis in information retrieval environments. Topics discussed include syntactic analysis; use of machine-readable dictionary information; knowledge base construction; the PLNLP English Grammar (PEG) system; phrase normalization; and statistical and syntactic phrase evaluation used…
Automatic Text Structuring and Summarization.
ERIC Educational Resources Information Center
Salton, Gerard; And Others
1997-01-01
Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)
NASA Astrophysics Data System (ADS)
Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.
2017-01-01
Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but many problems remain. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words in the sentences of a text document. The expected outcome of this final project is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to improve the quality of the summary automatically.
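For illustration, here is a minimal sketch of TextRank-style sentence scoring of the kind this entry describes, with a simple word-overlap similarity standing in for the authors' document index graph; the tokenization and normalization details are assumptions.

```python
# Minimal TextRank sketch: PageRank over a sentence-similarity graph.
import numpy as np

def textrank(sentences, d=0.85, iters=50):
    toks = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Edge weight: word overlap, length-normalized (Mihalcea & Tarau flavor).
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                norm = np.log(len(toks[i]) + 1) + np.log(len(toks[j]) + 1) + 1e-9
                W[i, j] = len(toks[i] & toks[j]) / norm
    row_sums = W.sum(axis=1, keepdims=True)
    P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    r = np.ones(n) / n
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)   # power iteration with damping
    return r  # higher score = more central sentence

scores = textrank(["The cat sat.", "A cat sat on the mat.", "Dogs bark loudly."])
print(scores)
```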
Lacson, Ronilda C; Barzilay, Regina; Long, William J
2006-10-01
Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
Automatic and user-centric approaches to video summary evaluation
NASA Astrophysics Data System (ADS)
Taskiran, Cuneyt M.; Bentley, Frank
2007-01-01
Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.
Automatic Keyframe Summarization of User-Generated Video
2014-06-01
using the framework presented in this paper. Technology has been developed that classifies the genre of a video. Here, video genres are...types of videos that share similarities in content and structure. Many genres of video footage exist. Some examples include news, sports, movies...cartoons, and commercials. Rasheed et al. [42] classify video genres (comedy, action, drama, and horror) with low-level video statistics, such as average
Evaluation of automatic video summarization systems
NASA Astrophysics Data System (ADS)
Taskiran, Cuneyt M.
2006-01-01
Compact representations of video data, or video summaries, greatly enhance efficient video browsing. However, rigorous evaluation of video summaries generated by automatic summarization systems is a complicated process. In this paper we examine the summary evaluation problem. Text summarization is the oldest and most successful summarization domain. We show some parallels between these two domains and introduce methods and terminology. Finally, we present results from a comprehensive summary evaluation that we have performed.
Fiszman, Marcelo; Demner-Fushman, Dina; Kilicoglu, Halil; Rindflesch, Thomas C.
2009-01-01
As the number of electronic biomedical textual resources increases, it becomes harder for physicians to find useful answers at the point of care. Information retrieval applications provide access to databases; however, little research has been done on using automatic summarization to help navigate the documents returned by these systems. After presenting a semantic abstraction automatic summarization system for MEDLINE citations, we concentrate on evaluating its ability to identify useful drug interventions for fifty-three diseases. The evaluation methodology uses existing sources of evidence-based medicine as surrogates for a physician-annotated reference standard. Mean average precision (MAP) and a clinical usefulness score developed for this study were computed as performance metrics. The automatic summarization system significantly outperformed the baseline in both metrics. The MAP gain was 0.17 (p < 0.01) and the increase in the overall score of clinical usefulness was 0.39 (p < 0.05). PMID:19022398
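Mean average precision, the abstract's primary metric, is the mean over queries (here, diseases) of the average of the precision values at each relevant item retrieved. A minimal sketch with hypothetical intervention rankings:

```python
# Minimal MAP sketch; ranked lists and relevance sets are fabricated examples.
def average_precision(ranked, relevant):
    """ranked: retrieved items in order; relevant: set of correct items."""
    hits, precision_sum = 0, 0.0
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / i   # precision at this relevant hit
    return precision_sum / max(len(relevant), 1)

def mean_average_precision(runs):
    """runs: list of (ranked_list, relevant_set) pairs, one per disease."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [(["metformin", "aspirin", "insulin"], {"metformin", "insulin"}),
        (["statin", "niacin"], {"niacin"})]
print(mean_average_precision(runs))
```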
Kansas environmental and resource study: A Great Plains model, tasks 1-6
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Kanemasu, E. T.; Morain, S. A.; Yarger, H. L. (Principal Investigator); Ulaby, F. T.; Shanmugam, K. S.; Williams, D. L.; Mccauley, J. R.; Mcnaughton, J. L.
1972-01-01
There are no author identified significant results in this report. Environmental and resources investigations in Kansas utilizing ERTS-1 imagery are summarized for the following areas: (1) use of feature extraction techniques for texture context information in ERTS imagery; (2) interpretation and automatic image enhancement; (3) water use, production, and disease detection and predictions for wheat; (4) ERTS-1 agricultural statistics; (5) monitoring fresh water resources; and (6) ground pattern analysis in the Great Plains.
Model Considerations for Memory-based Automatic Music Transcription
NASA Astrophysics Data System (ADS)
Albrecht, Štěpán; Šmídl, Václav
2009-12-01
The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.
Autoclass: An automatic classification system
NASA Technical Reports Server (NTRS)
Stutz, John; Cheeseman, Peter; Hanson, Robin
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
An Overview of Biomolecular Event Extraction from Scientific Documents
Vanegas, Jorge A.; Matos, Sérgio; González, Fabio; Oliveira, José L.
2015-01-01
This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed. PMID:26587051
Automatic Summarization as a Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. Analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper formulates summarization as a combinatorial optimization problem: selecting the set of sentences that maximizes the sum of sentence scores from a pool consisting of the sentences at various compression rates, subject to length constraints. The score of a sentence is defined by its compression rate, content words, and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. Results obtained from the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
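The selection step described above is a multiple-choice knapsack: at most one compressed variant per sentence, maximizing total score under a length budget. A minimal dynamic-programming sketch with hypothetical scores and lengths (not the authors' scoring function):

```python
# Minimal multiple-choice knapsack sketch for compressed-sentence selection.
def select_summary(groups, budget):
    """groups[i] = list of (length, score) variants of sentence i, one per
    compression rate; at most one variant may be chosen per group."""
    NEG = float("-inf")
    # best[l] = max total score using total length exactly l (NEG if unreachable).
    best = [0.0] + [NEG] * budget
    for variants in groups:
        new = best[:]  # option: skip this sentence entirely
        for l in range(budget + 1):
            for length, score in variants:
                if length <= l and best[l - length] != NEG:
                    new[l] = max(new[l], best[l - length] + score)
        best = new
    return max(best)

groups = [[(10, 2.0), (6, 1.4)], [(8, 1.1), (5, 0.9)], [(12, 2.5)]]
print(select_summary(groups, budget=20))
```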
In vivo Comet assay--statistical analysis and power calculations of mice testicular cells.
Hansen, Merete Kjær; Sharma, Anoop Kumar; Dybdahl, Marianne; Boberg, Julie; Kulahci, Murat
2014-11-01
The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells. Copyright © 2014 Elsevier B.V. All rights reserved.
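The recommended summary statistic, the median of the log-transformed % tail DNA per gel, is straightforward to compute. The sketch below uses fabricated illustration values, and the +1 offset for zero values is an assumption rather than the paper's stated choice.

```python
# Minimal sketch of the per-gel summary statistic recommended above.
import numpy as np

def gel_summary(tail_dna_percent):
    """Median of log-transformed % tail DNA for one gel's scored cells."""
    x = np.asarray(tail_dna_percent, dtype=float)
    return np.median(np.log(x + 1.0))  # +1 offset assumed to handle zeros

gel = [0.0, 1.2, 3.4, 0.8, 12.5]  # hypothetical % tail DNA of scored cells
print(gel_summary(gel))
```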
GenePublisher: Automated analysis of DNA microarray data.
Knudsen, Steen; Workman, Christopher; Sicheritz-Ponten, Thomas; Friis, Carsten
2003-07-01
GenePublisher, a system for automatic analysis of data from DNA microarray experiments, has been implemented with a web interface at http://www.cbs.dtu.dk/services/GenePublisher. Raw data are uploaded to the server together with a specification of the data. The server performs normalization, statistical analysis and visualization of the data. The results are run against databases of signal transduction pathways, metabolic pathways and promoter sequences in order to extract more information. The results of the entire analysis are summarized in report form and returned to the user.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, K.A.; Neuman, M.C.; Simmonds, D.D.
An effective method for detecting computer misuse is the automatic monitoring and analysis of on-line user activity. This activity is reflected in the system audit record, in the system vulnerability posture, and in other evidence found through active testing of the system. During the last several years we have implemented an automatic misuse detection system at Los Alamos. This is the Network Anomaly Detection and Intrusion Reporter (NADIR). We are currently expanding NADIR to include processing of the Cray UNICOS operating system. This new component is called the UNICOS Realtime NADIR, or UNICORN. UNICORN summarizes user activity and system configuration in statistical profiles. It compares these profiles to expert rules that define security policy and improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. The first phase of UNICORN development is nearing completion, and will be operational in late 1994.
A State-Of-The-Art Survey on Automatic Indexing.
ERIC Educational Resources Information Center
Liebesny, Felix
This survey covers the literature relating to automatic indexing techniques, services, and applications published during 1969-1973. Works are summarized and described in the areas of: (1) general papers on automatic indexing; (2) KWIC indexes; (3) KWIC variants listed alphabetically by acronym with descriptions; (4) other KWIC variants arranged by…
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summary generation of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization of soccer videos using audio-visual descriptors is presented. The approach is based on segmenting the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then adequately combined to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting the shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
Testing & Evaluation of Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions
DOT National Transportation Integrated Search
2012-01-01
This report summarizes activities in support of the DOT contract on Testing & Evaluating Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions & Improve Visual Inspection Procedures. The work of this project was performed by Dr...
Bayesian classification theory
NASA Technical Reports Server (NTRS)
Hanson, Robin; Stutz, John; Cheeseman, Peter
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
ERIC Educational Resources Information Center
Salton, G.
1980-01-01
Summarizes studies of pseudoclassification, a process of utilizing user relevance assessments of certain documents with respect to certain queries to build term classes designed to retrieve relevant documents. Conclusions are reached concerning the effectiveness and feasibility of constructing term classifications based on human relevance…
Application of nonlinear transformations to automatic flight control
NASA Technical Reports Server (NTRS)
Meyer, G.; Su, R.; Hunt, L. R.
1984-01-01
The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.
Pattern-Based Extraction of Argumentation from the Scientific Literature
ERIC Educational Resources Information Center
White, Elizabeth K.
2010-01-01
As the number of publications in the biomedical field continues its exponential increase, techniques for automatically summarizing information from this body of literature have become more diverse. In addition, the targets of summarization have become more subtle; initial work focused on extracting the factual assertions from full-text papers,…
Automatic video summarization driven by a spatio-temporal attention model
NASA Astrophysics Data System (ADS)
Barland, R.; Saadane, A.
2008-02-01
According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
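A minimal sketch of the key-frame rule described above follows: score each frame by the change in salient information from its predecessor and keep the frames with the largest changes. The crude intensity-contrast saliency used here is a stand-in for the authors' three-channel HVS model.

```python
# Minimal saliency-variation key-frame sketch with stand-in saliency maps.
import numpy as np

def saliency(frame):
    return np.abs(frame - frame.mean())        # crude intensity-contrast map

def select_keyframes(frames, n_keys=3):
    sal = [saliency(f) for f in frames]
    # Mean absolute change of the saliency map between consecutive frames.
    variation = [np.abs(sal[i] - sal[i - 1]).mean() for i in range(1, len(sal))]
    order = np.argsort(variation)[::-1][:n_keys]
    return sorted(int(i) + 1 for i in order)   # frames after the largest changes

frames = [np.random.rand(48, 64) for _ in range(30)]   # grayscale stand-ins
print(select_keyframes(frames))
```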
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Feiveson, A. H.; Hall, F. G.; Bauer, M. E.; Davis, B. J.; Malila, W. A.; Rice, D. P.
1975-01-01
The CITARS was an experiment designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to data acquired to recognize and estimate proportions of corn and soybeans. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historic data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.
NASA Technical Reports Server (NTRS)
1983-01-01
This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.
Graph-based biomedical text summarization: An itemset mining and sentence clustering approach.
Nasr Azadani, Mozhgan; Ghadiri, Nasser; Davoodijam, Ensieh
2018-06-12
Automatic text summarization offers an efficient solution to access the ever-growing amounts of both scientific and clinical literature in the biomedical domain by summarizing the source documents while maintaining their most informative contents. In this paper, we propose a novel graph-based summarization method that takes advantage of domain-specific knowledge and a well-established data mining technique called frequent itemset mining. Our summarizer exploits the Unified Medical Language System (UMLS) to construct a concept-based model of the source document by mapping the document to the concepts. It then discovers frequent itemsets to take the correlations among multiple concepts into account. The method uses these correlations to propose a similarity function, based on which a representative graph is constructed. The summarizer then employs a minimum spanning tree based clustering algorithm to discover various subthemes of the document. Eventually, it generates the final summary by selecting the most informative and relevant sentences from all subthemes within the text. We perform an automatic evaluation over a large number of summaries using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. The results demonstrate that the proposed summarization system outperforms various baselines and benchmark approaches. This research suggests that the incorporation of domain-specific knowledge and frequent itemset mining better equips the summarization system to address the informativeness measurement of the sentences. Moreover, clustering the graph nodes (sentences) can enable the summarizer to target different main subthemes of a source document efficiently. The evaluation results show that the proposed approach can significantly improve the performance of summarization systems in the biomedical domain. Copyright © 2018. Published by Elsevier Inc.
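To make the itemset idea concrete, the sketch below mines frequent concept itemsets from sentences (represented as sets of hypothetical UMLS concept IDs) and scores sentence similarity by shared frequent itemsets; the brute-force mining and the similarity normalization are simplifications of the paper's method.

```python
# Minimal frequent-itemset sketch over sentence concept sets.
from itertools import combinations

def frequent_itemsets(sentence_concepts, min_support=2, max_size=2):
    counts = {}
    for concepts in sentence_concepts:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(concepts), size):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {s for s, c in counts.items() if c >= min_support}

def similarity(a, b, freq):
    """Fraction of frequent itemsets contained in both sentences."""
    shared = {s for s in freq if set(s) <= a and set(s) <= b}
    return len(shared) / (len(freq) or 1)

sents = [{"C001", "C002"}, {"C001", "C002", "C003"}, {"C004"}]
freq = frequent_itemsets(sents)
print(similarity(sents[0], sents[1], freq))
```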
Summarization as the base for text assessment
NASA Astrophysics Data System (ADS)
Karanikolas, Nikitas N.
2015-02-01
We present a model that applies shallow text summarization as a cheap (in resources needed) process for Automatic (machine-based) free-text answer Assessment (AA). Evaluation of the proposed method leads to the inference that Conventional Assessment (CA, human assessment of free-text answers) has no obvious mechanical replacement. However, this remains a research challenge.
Review of automatic detection of pig behaviours by using image analysis
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao
2017-06-01
Automatic detection of lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation effort required of staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes the progress of pig behaviour detection based on image analysis and advances in image segmentation of the pig body, segmentation of adhering pigs, and extraction of pig behaviour characteristic parameters. Challenges for achieving automatic detection of pig behaviours are summarized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christoph, G.G; Jackson, K.A.; Neuman, M.C.
An effective method for detecting computer misuse is the automatic auditing and analysis of on-line user activity. This activity is reflected in the system audit record, by changes in the vulnerability posture of the system configuration, and in other evidence found through active testing of the system. In 1989 we started developing an automatic misuse detection system for the Integrated Computing Network (ICN) at Los Alamos National Laboratory. Since 1990 this system has been operational, monitoring a variety of network systems and services. We call it the Network Anomaly Detection and Intrusion Reporter, or NADIR. During the last year and a half, we expanded NADIR to include processing of audit and activity records for the Cray UNICOS operating system. This new component is called the UNICOS Real-time NADIR, or UNICORN. UNICORN summarizes user activity and system configuration information in statistical profiles. In near real-time, it can compare current activity to historical profiles and test activity against expert rules that express our security policy and define improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. UNICORN is currently operational on four Crays in Los Alamos' main computing network, the ICN.
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from the input documents, but they still have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach combined with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered appearance, which yields a coherent-looking summary, and extracts the more relevant information from the multiple documents. A scoring strategy is also used to score words according to their frequency. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are compared with those of the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
E.W. Fobes; R.W. Rowe
1968-01-01
A system for classifying wood-using industries and recording pertinent statistics for automatic data processing is described. Forms and coding instructions for recording data of primary processing plants are included.
NASA Technical Reports Server (NTRS)
White, W. F.; Clark, L.
1980-01-01
The flight performance of the Terminal Configured Vehicle airplane is summarized. Demonstration automatic approaches and landings utilizing time reference scanning beam microwave landing system (TRSB/MLS) guidance are presented. The TRSB/MLS was shown to provide the terminal area guidance necessary for flying curved automatic approaches with final legs as short as 2 km.
Automatic Classification of Medical Text: The Influence of Publication Form1
Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.
1988-01-01
Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.
Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas
2016-03-01
Information technology is widely applied in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols was created as a tool to support researchers by offering clinical data standardization. Until then, SINPE(c) lacked statistical tests obtained by automatic analysis. The aim was to add to SINPE(c) features for automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their stricto sensu master's and doctoral theses in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those produced manually by a professional statistician experienced with this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The automatic SINPE(c) statistical analysis was shown to be reliable and equivalent to the manual analysis, validating its use as a tool for medical research.
The report summarizes the progress in the design and construction of automatic equipment for synchronizing cell division in culture by periodic... Concurrent experiments in hypothermic synchronization of algal cell division are reported.
A Discriminative Sentence Compression Method as Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
In the study of automatic summarization, the main research topic has long been `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to get a shorter sentence. However, this approach is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features, and their weights were tuned by hand. We introduce a large number of features, such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained better scores than other methods, with statistical significance.
Computerized adaptive control weld skate with CCTV weld guidance project
NASA Technical Reports Server (NTRS)
Wall, W. A.
1976-01-01
This report summarizes progress of the automatic computerized weld skate development portion of the Computerized Weld Skate with Closed Circuit Television (CCTV) Arc Guidance Project. The main goal of the project is to develop an automatic welding skate demonstration model equipped with CCTV weld guidance. The three main goals of the overall project are to: (1) develop a demonstration model computerized weld skate system, (2) develop a demonstration model automatic CCTV guidance system, and (3) integrate the two systems into a demonstration model of computerized weld skate with CCTV weld guidance for welding contoured parts.
Cognitive Factors in Hypnotic Susceptibility
ERIC Educational Resources Information Center
Palmer, Robert D.; Field, Peter B.
1971-01-01
This research explored the influence of cognitive variables on susceptibility to hypnosis. The three variables of concern in the present study are automatization, attention, and body experience. The results are summarized. (Author)
Software engineering and simulation
NASA Technical Reports Server (NTRS)
Zhang, Shou X.; Schroer, Bernard J.; Messimer, Sherri L.; Tseng, Fan T.
1990-01-01
This paper summarizes the development of several automatic programming systems for discrete event simulation. Emphasis is given to the model development, or problem definition, and model writing phases of the modeling life cycle.
Static Methods in the Design of Nonlinear Automatic Control Systems,
1984-06-27
Chapter VI. Ways of Decreasing the Number of Statistical Nodes in the Study of Nonlinear Systems... at present occupies the central place. This region of research has been called the statistical dynamics of nonlinear automatic control systems... and receives further development in the numerous studies of Soviet and foreign scientists. A special role in the development of the statistical dynamics of
Automatic Text Summarization for Indonesian Language Using TextTeaser
NASA Astrophysics Data System (ADS)
Gunawan, D.; Pasaribu, A.; Rahmat, R. F.; Budiarto, R.
2017-04-01
Text summarization is one of the solutions to information overload. Reducing text without losing its meaning not only saves reading time but also maintains the reader's understanding. One of many algorithms to summarize text is TextTeaser. Originally, this algorithm was intended for text in English; however, because the TextTeaser algorithm does not consider the meaning of the text, we implement it for text in the Indonesian language. The algorithm calculates four features: title feature, sentence length, sentence position and keyword frequency. We utilize TextRank, an unsupervised and language-independent text summarization algorithm, to evaluate the summarized text yielded by TextTeaser. The results show that the TextTeaser algorithm needs more improvement to obtain better accuracy.
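A minimal sketch of the four named features follows; the equal feature weights, the ideal sentence length, and the linear position decay are assumptions, as the published TextTeaser implementation weights and buckets these features differently.

```python
# Minimal TextTeaser-style sentence scoring sketch (assumed weighting).
from collections import Counter

def textteaser_score(sentence, idx, n_sents, title, keywords, ideal_len=20):
    words = sentence.lower().split()
    title_words = set(title.lower().split())
    f_title = len(title_words & set(words)) / (len(title_words) or 1)
    f_length = max(1.0 - abs(len(words) - ideal_len) / ideal_len, 0.0)
    f_position = 1.0 - idx / n_sents            # earlier sentences score higher
    f_keyword = sum(keywords.get(w, 0) for w in words) / (len(words) or 1)
    return (f_title + f_length + f_position + f_keyword) / 4

# Hypothetical two-sentence Indonesian document and its keyword frequencies.
doc = ["Ringkasan teks otomatis membantu pembaca.", "Cuaca hari ini cerah."]
kw = Counter(w for s in doc for w in s.lower().split())
scores = [textteaser_score(s, i, len(doc), "ringkasan teks", kw)
          for i, s in enumerate(doc)]
print(scores)
```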
Automatic Neural Processing of Disorder-Related Stimuli in Social Anxiety Disorder: Faces and More
Schulz, Claudia; Mothes-Lasch, Martin; Straube, Thomas
2013-01-01
It has been proposed that social anxiety disorder (SAD) is associated with automatic information processing biases resulting in hypersensitivity to signals of social threat such as negative facial expressions. However, the nature and extent of automatic processes in SAD on the behavioral and neural level is not entirely clear yet. The present review summarizes neuroscientific findings on automatic processing of facial threat but also other disorder-related stimuli such as emotional prosody or negative words in SAD. We review initial evidence for automatic activation of the amygdala, insula, and sensory cortices as well as for automatic early electrophysiological components. However, findings vary depending on tasks, stimuli, and neuroscientific methods. Only few studies set out to examine automatic neural processes directly and systematic attempts are as yet lacking. We suggest that future studies should: (1) use different stimulus modalities, (2) examine different emotional expressions, (3) compare findings in SAD with other anxiety disorders, (4) use more sophisticated experimental designs to investigate features of automaticity systematically, and (5) combine different neuroscientific methods (such as functional neuroimaging and electrophysiology). Finally, the understanding of neural automatic processes could also provide hints for therapeutic approaches. PMID:23745116
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
Sen Sarma, Moushumi; Arcoleo, David; Khetani, Radhika S; Chee, Brant; Ling, Xu; He, Xin; Jiang, Jing; Mei, Qiaozhu; Zhai, ChengXiang; Schatz, Bruce
2011-07-01
With the rapid decrease in cost of genome sequencing, the classification of gene function is becoming a primary problem. Such classification has been performed by human curators who read biological literature to extract evidence. BeeSpace Navigator is a prototype software for exploratory analysis of gene function using biological literature. The software supports an automatic analogue of the curator process to extract functions, with a simple interface intended for all biologists. Since extraction is done on selected collections that are semantically indexed into conceptual spaces, the curation can be task specific. Biological literature containing references to gene lists from expression experiments can be analyzed to extract concepts that are computational equivalents of a classification such as Gene Ontology, yielding discriminating concepts that differentiate gene mentions from other mentions. The functions of individual genes can be summarized from sentences in biological literature, to produce results resembling a model organism database entry that is automatically computed. Statistical frequency analysis based on literature phrase extraction generates offline semantic indexes to support these gene function services. The website with BeeSpace Navigator is free and open to all; there is no login requirement at www.beespace.illinois.edu for version 4. Materials from the 2010 BeeSpace Software Training Workshop are available at www.beespace.illinois.edu/bstwmaterials.php.
Automatic Brake Slack Adjusters on Transit Buses
DOT National Transportation Integrated Search
1985-04-01
The purpose of this report is to provide transit bus agencies with current information on automatic brake slack adjusters. Rather than a comprehensive statistical survey, this report is intended to provide an overview of the state of automatic slack...
Realistic radio communications in pilot simulator training
DOT National Transportation Integrated Search
2000-12-01
This report summarizes the first-year efforts of assessing the requirement and feasibility of simulating radio communication automatically. A review of the training and crew resource/task management literature showed both practical and theoretical su...
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
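A small example of the descriptive summaries discussed in the chapter, using Python's standard library on fabricated sample values:

```python
# Minimal descriptive-statistics sketch: location, spread, and a log transform
# for a skewed quantitative variable (values are fabricated illustrations).
import math
import statistics

x = [2.1, 2.4, 2.2, 9.8, 2.3, 2.5]          # skewed by a single large value
location = {"mean": statistics.mean(x), "median": statistics.median(x)}
spread = {"sd": statistics.stdev(x), "range": max(x) - min(x)}
log_x = [math.log(v) for v in x]            # transformation to reduce skew
print(location, spread, statistics.mean(log_x))
```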
Learning to rank-based gene summary extraction.
Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei
2014-01-01
In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. As a result, it is important for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we regard automatic summarization as a ranking problem and apply learning to rank to solve it automatically. This paper uses three features as a basis for sentence selection: gene ontology relevance, topic relevance and TextRank. We obtain the feature weight vector using a learning to rank algorithm, predict the scores of candidate summary sentences, and select the top sentences to generate the summary. ROUGE (a toolkit for automatic evaluation of summaries) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features can improve the performance of the summary. The application of learning to rank can facilitate the further expansion of features for measuring the significance of sentences.
MeSH indexing based on automatically generated summaries.
Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto
2013-06-26
MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
Travtek Evaluation Task C3: Camera Car Study
DOT National Transportation Integrated Search
1998-11-01
A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...
The research of full automatic oil filtering control technology of high voltage insulating oil
NASA Astrophysics Data System (ADS)
Gong, Gangjun; Zhang, Tong; Yan, Guozeng; Zhang, Han; Chen, Zhimin; Su, Chang
2017-09-01
In this paper, the design scheme of an automatic oil filtering control system for transformer oil in UHV substations is summarized. The scheme covers the typical double-tank oil filtering connection control method for transformer oil in UHV substations, distinguishing between single-port and double-port tank connection structures. Finally, the design of the temperature sensor and respirator is given in detail, and detailed evaluations and application scenarios are provided for reference.
Automatic Extraction of Highway Traffic Data From Aerial Photographs
DOT National Transportation Integrated Search
1997-01-01
This is the fifth and final report provided to fulfill the statutory requirement to periodically summarize the progress of the Intelligent Transportation Systems (ITS) program administered by the U.S. Department of Transportation (DOT). In the Transp...
Computer automation for feedback system design
NASA Technical Reports Server (NTRS)
1975-01-01
Mathematical techniques and explanations of various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation of suboptimal loop transmission and the translation of time-domain to frequency-domain specifications.
Modis, SeaWIFS, and Pathfinder funded activities
NASA Technical Reports Server (NTRS)
Evans, Robert H.
1995-01-01
MODIS (Moderate Resolution Imaging Spectrometer), SeaWIFS (Sea-viewing Wide Field Sensor), Pathfinder, and DSP (Digital Signal Processor) objectives are summarized. An overview of current progress is given for the automatic processing database, client/server status, matchup database, and DSP support.
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Continuous sampling control of product completeness, based on statistical methods, therefore remains an important element in managing the assembly quality of multi-element mass products on automatic rotor lines. A distinctive feature of continuous sampling control of product completeness during assembly is its destructive nature, which excludes returning inspected component parts to the process stream and reduces the actual productivity of the assembly equipment. Consequently, statistical procedures for continuous sampling control of product completeness on automatic rotor lines require sampling plans that ensure a minimum control sample size. Comparison of the limit values of the average outgoing defect level for the continuous sampling plan (CSP) and the automated continuous sampling plan (ACSP) shows that lower limit values of the average outgoing defect level can be achieved using ACSP-1. Also, the average sample size under the ACSP-1 plan is smaller than under the CSP-1 plan. Thus, the application of statistical methods in assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, will allow automating sampling control procedures while ensuring the required quality level of assembled products and minimizing sample size.
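For context, a CSP-1 continuous sampling plan of the kind compared above alternates between 100% inspection and fractional sampling. The simulation below is a generic sketch with illustrative parameters i and f, not the paper's ACSP-1 procedure.

```python
# Minimal CSP-1 simulation sketch: screen until i consecutive conforming
# units, then inspect a fraction f, reverting to screening on any defect.
import random

def csp1(stream, i=10, f=0.1, seed=1):
    rng = random.Random(seed)
    screening, run = True, 0
    inspected = defects_passed = 0
    for defective in stream:
        inspect = screening or rng.random() < f
        if inspect:
            inspected += 1
            if defective:
                screening, run = True, 0     # defect found: back to 100% inspection
            else:
                run += 1
                if screening and run >= i:
                    screening, run = False, 0  # switch to fractional sampling
        elif defective:
            defects_passed += 1              # uninspected defective slips through
    return inspected, defects_passed

rng = random.Random(2)
stream = [rng.random() < 0.02 for _ in range(5000)]  # 2% incomplete products
print(csp1(stream))
```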
Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W
2001-07-01
To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft-based database management system. The system offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced by repeat lab procedures. With the system, raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period of time in large projects. Second, almost all of the procedures are automatic, including autocomparison of the raw data read by different technicians from the same gel, autoadjustment among the allele fragment-size data from cross-runs or cross-platforms, autobinning of alleles, and autocompilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, it provides functions to track electrophoresis gel files to locate gel or sample sources for any resultant genotype data, which is extremely helpful for double-checking the consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders processing of large amounts of data much less labor-intensive. Furthermore, the system has built-in mechanisms to detect some genotyping errors and to assess the quality of genotype data, which are then summarized in automatically generated statistical reports. The system can easily handle >500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.
Automatic Fringe Detection for Oil Film Interferometry Measurement of Skin Friction
NASA Technical Reports Server (NTRS)
Naughton, Jonathan W.; Decker, Robert K.; Jafari, Farhad
2001-01-01
This report summarizes two years of work on investigating algorithms for automatically detecting fringe patterns in images acquired using oil-drop interferometry for the determination of skin friction. Several different analysis methods were tested, and a combination of a windowed Fourier transform followed by a correlation was found to be most effective. The implementation of this method is discussed and details of the process are described. The results indicate that this method shows promise for automating the fringe detection process, but further testing is required.
[Advances in automatic detection technology for images of thin blood film of malaria parasite].
Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang
2017-05-05
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification. The principles and implementation methods of each step are then given in detail. In addition, extending automatic detection technology to thick blood film smears is put forward as a question worthy of study, and a perspective on future work toward automated microscopy diagnosis of malaria is provided.
NASA Technical Reports Server (NTRS)
Coker, A. E.; Higer, A. L.; Rogers, R. H.; Shah, N. J.; Reed, L. E.; Walker, S.
1975-01-01
The techniques used and the results achieved in the successful application of Skylab Multispectral Scanner (EREP S-192) high-density digital tape data for the automatic categorizing and mapping of land-water cover types in the Green Swamp of Florida were summarized. Data was provided from Skylab pass number 10 on 13 June 1973. Significant results achieved included the automatic mapping of a nine-category and a three-category land-water cover map of the Green Swamp. The land-water cover map was used to make interpretations of a hydrologic condition in the Green Swamp. This type of use marks a significant breakthrough in the processing and utilization of EREP S-192 data.
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Summarizing an Ontology: A "Big Knowledge" Coverage Approach.
Zheng, Ling; Perl, Yehoshua; Elhanan, Gai; Ochs, Christopher; Geller, James; Halper, Michael
2017-01-01
Maintenance and use of a large ontology, consisting of thousands of knowledge assertions, are hampered by its scope and complexity. It is important to provide tools for summarization of ontology content in order to facilitate user "big picture" comprehension. We present a parameterized methodology for the semi-automatic summarization of major topics in an ontology, based on a compact summary of the ontology, called an "aggregate partial-area taxonomy", followed by manual enhancement. An experiment is presented to test the effectiveness of such summarization measured by coverage of a given list of major topics of the corresponding application domain. SNOMED CT's Specimen hierarchy is the test-bed. A domain-expert provided a list of topics that serves as a gold standard. The enhanced results show that the aggregate taxonomy covers most of the domain's main topics.
MeSH indexing based on automatically generated summaries
2013-01-01
Background: MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text as input to MTI in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results: We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions: Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936
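For context, the per-citation recall/precision comparisons reported above reduce to set overlap between recommended and gold-standard headings; a minimal sketch (the heading strings are hypothetical):

    def precision_recall(recommended, gold):
        """Per-citation precision and recall over MeSH heading sets."""
        recommended, gold = set(recommended), set(gold)
        tp = len(recommended & gold)
        p = tp / len(recommended) if recommended else 0.0
        r = tp / len(gold) if gold else 0.0
        return p, r

    print(precision_recall({"Humans", "Influenza, Human"},
                           {"Influenza, Human", "Child"}))   # (0.5, 0.5)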
DiffNet: automatic differential functional summarization of dE-MAP networks.
Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes
2014-10-01
The study of genetic interaction networks that respond to changing conditions is an emerging research problem. Recently, Bandyopadhyay et al. (2010) proposed a technique to construct a differential network (dE-MAP network) from two static gene interaction networks in order to map the interaction differences between them under environment or condition change (e.g., a DNA-damaging agent). This differential network is then manually analyzed to conclude that DNA repair is differentially affected by the condition change. Unfortunately, manual construction of a differential functional summary from a dE-MAP network that summarizes all pertinent functional responses is time-consuming, laborious and error-prone, impeding large-scale analysis. To this end, we propose DiffNet, a novel data-driven algorithm that leverages Gene Ontology (GO) annotations to automatically summarize a dE-MAP network to obtain a high-level map of functional responses due to condition change. We tested DiffNet on the dynamic interaction networks following MMS treatment and demonstrated the superiority of our approach in generating differential functional summaries compared to state-of-the-art graph clustering methods. We studied the effects of parameters in DiffNet in controlling the quality of the summary. We also performed a case study that illustrates its utility. Copyright © 2014 Elsevier Inc. All rights reserved.
Indexing, Browsing, and Searching of Digital Video.
ERIC Educational Resources Information Center
Smeaton, Alan F.
2004-01-01
Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…
2013-01-01
Background: Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results: We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions: We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
Term-Weighting Approaches in Automatic Text Retrieval.
ERIC Educational Resources Information Center
Salton, Gerard; Buckley, Christopher
1988-01-01
Summarizes the experimental evidence that indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results superior to those obtained with more elaborate text representations, and provides baseline single term indexing models with which more elaborate content analysis procedures can be…
Finding Meaning: Sense Inventories for Improved Word Sense Disambiguation
ERIC Educational Resources Information Center
Brown, Susan Windisch
2010-01-01
The deep semantic understanding necessary for complex natural language processing tasks, such as automatic question-answering or text summarization, would benefit from highly accurate word sense disambiguation (WSD). This dissertation investigates what makes an appropriate and effective sense inventory for WSD. Drawing on theories and…
Automatic, computerized testing of bolts
NASA Technical Reports Server (NTRS)
Carlucci, J., Jr.; Lobb, V. B.; Stoller, F. W.
1970-01-01
System for testing bolts with various platings, lubricants, nuts, and tightening procedures tests 200 fasteners, and processes and summarizes the results, within one month. The system measures input torque, nut rotation, bolt clamping force, bolt shank twist, and bolt elongation; data are printed in report form. The test apparatus is described.
NASA Astrophysics Data System (ADS)
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical property of natural images, so its detection performance is not satisfying. PCA has been extended into kernel PCA in order to capture the higher-order statistics. However, thus far there have been no researchers who have definitely proposed kernel FKT (KFKT) and researched its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Results of experiments show that KFKT outperforms FKT and the proposed framework is competent to automatically detect and track infrared point targets.
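A compact linear FKT, the transform this paper extends: whiten the summed class covariances, then eigendecompose one whitened class scatter; eigenvalues near 1 favor the target class and near 0 the background, since the two classes share this eigenbasis with eigenvalues summing to one. The kernel version replaces these inner products with kernel evaluations. The data layout is an assumption:

    import numpy as np

    def fkt_basis(X1, X2):
        """X1, X2: (n_samples, n_features) zero-mean class samples.
        Returns the FKT basis and the class-1 eigenvalues in [0, 1]."""
        S1 = X1.T @ X1 / len(X1)
        S2 = X2.T @ X2 / len(X2)
        vals, vecs = np.linalg.eigh(S1 + S2)
        P = vecs / np.sqrt(np.maximum(vals, 1e-12))   # whitening transform
        lam, V = np.linalg.eigh(P.T @ S1 @ P)         # shared eigenbasis
        return P @ V, lam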
Challenges Facing Crop Production And (Some) Potential Solutions
NASA Astrophysics Data System (ADS)
Schnable, P. S.
2017-12-01
To overcome some of the myriad challenges facing sustainable crop production we are seeking to develop statistical models that will predict crop performance in diverse agronomic environments. Crop phenotypes such as yield and drought tolerance are controlled by genotype, environment (considered broadly) and their interaction (GxE). As a consequence of the next generation sequencing revolution genotyping data are now available for a wide diversity of accessions in each of the major crops. The necessary volumes of phenotypic data, however, remain limiting and our understanding of molecular basis of GxE is minimal. To address this limitation, we are collaborating with engineers to construct new sensors and robots to automatically collect large volumes of phenotypic data. Two types of high-throughput, high-resolution, field-based phenotyping systems and new sensors will be described. Some of these technologies will be introduced within the context of the Genomes to Fields Initiative. Progress towards developing predictive models will be briefly summarized. An administrative structure that fosters transdisciplinary collaborations will be briefly described.
Smith, Joseph M.; Mather, Martha E.
2012-01-01
Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, and then interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
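The convergence screen described above can be made concrete with a pairwise set-similarity check; the species lists here are hypothetical placeholders for the Log/DCA/CL/NMDS groupings:

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    groupings = {
        "Log": {"creek chub", "blacknose dace", "white sucker"},
        "DCA": {"creek chub", "blacknose dace", "longnose dace"},
        "PCA": {"bluegill", "largemouth bass"},
    }
    # retain groupings that are >= 60% similar to at least one other method
    for m1 in groupings:
        for m2 in groupings:
            if m1 < m2:
                print(m1, m2, round(jaccard(groupings[m1], groupings[m2]), 2))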
New auto-segment method of cerebral hemorrhage
NASA Astrophysics Data System (ADS)
Wang, Weijiang; Shen, Tingzhi; Dang, Hua
2007-12-01
A novel method for automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented in this paper, using an expert system that models human knowledge about the CH segmentation problem. The algorithm adopts a series of special steps and extracts easily overlooked CH features identified from statistics over a large set of real CH images, such as region area, region CT number, region smoothness, and statistical relationships among CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these features, a decision tree modeling human knowledge about the CH segmentation problem has been built, which ensures the rationality and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; its high accuracy and fast speed make it practical for wide application.
46 CFR 62.50-30 - Additional requirements for periodically unattended machinery plants.
Code of Federal Regulations, 2013 CFR
2013-10-01
... automatically and continuously charged. (e) Assistance-needed alarm. The engineer's assistance-needed alarm (see... period of time necessary for an engineer to respond at the ECC from the machinery spaces or engineers... engineers' accommodations. Other than fire or flooding alarms, this may be accomplished by summarized visual...
46 CFR 62.50-30 - Additional requirements for periodically unattended machinery plants.
Code of Federal Regulations, 2011 CFR
2011-10-01
... automatically and continuously charged. (e) Assistance-needed alarm. The engineer's assistance-needed alarm (see... period of time necessary for an engineer to respond at the ECC from the machinery spaces or engineers... engineers' accommodations. Other than fire or flooding alarms, this may be accomplished by summarized visual...
46 CFR 62.50-30 - Additional requirements for periodically unattended machinery plants.
Code of Federal Regulations, 2012 CFR
2012-10-01
... automatically and continuously charged. (e) Assistance-needed alarm. The engineer's assistance-needed alarm (see... period of time necessary for an engineer to respond at the ECC from the machinery spaces or engineers... engineers' accommodations. Other than fire or flooding alarms, this may be accomplished by summarized visual...
46 CFR 62.50-30 - Additional requirements for periodically unattended machinery plants.
Code of Federal Regulations, 2010 CFR
2010-10-01
... automatically and continuously charged. (e) Assistance-needed alarm. The engineer's assistance-needed alarm (see... period of time necessary for an engineer to respond at the ECC from the machinery spaces or engineers... engineers' accommodations. Other than fire or flooding alarms, this may be accomplished by summarized visual...
ERIC Educational Resources Information Center
Nehm, Ross H.; Ha, Minsu; Mayfield, Elijah
2012-01-01
This study explored the use of machine learning to automatically evaluate the accuracy of students' written explanations of evolutionary change. Performance of the Summarization Integrated Development Environment (SIDE) program was compared to human expert scoring using a corpus of 2,260 evolutionary explanations written by 565 undergraduate…
Multi-scale radiomic analysis of sub-cortical regions in MRI related to autism, gender and age
NASA Astrophysics Data System (ADS)
Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew
2017-03-01
We propose using multi-scale image textures to investigate links between neuroanatomical regions and clinical variables in MRI. Texture features are derived at multiple scales of resolution based on the Laplacian-of-Gaussian (LoG) filter. Three quantifier functions (Average, Standard Deviation and Entropy) are used to summarize texture statistics within standard, automatically segmented neuroanatomical regions. Significance tests are performed to identify regional texture differences between ASD vs. TDC and male vs. female groups, as well as correlations with age (corrected p < 0.05). The open-access brain imaging data exchange (ABIDE) brain MRI dataset is used to evaluate texture features derived from 31 brain regions from 1112 subjects including 573 typically developing control (TDC, 99 females, 474 males) and 539 Autism spectrum disorder (ASD, 65 female and 474 male) subjects. Statistically significant texture differences between ASD vs. TDC groups are identified asymmetrically in the right hippocampus, left choroid-plexus and corpus callosum (CC), and symmetrically in the cerebellar white matter. Sex-related texture differences in TDC subjects are found primarily in the left amygdala, left cerebellar white matter, and brain stem. Correlations between age and texture in TDC subjects are found in the thalamus-proper, caudate and pallidum, most exhibiting bilateral symmetry.
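A minimal sketch of the texture computation described above: LoG filtering at several scales, then the three quantifiers (Average, SD, Entropy) over an already-segmented region; the scale values and histogram bin count are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_texture(img, mask, scales=(0.5, 1.0, 1.5, 2.0)):
        """Return {scale: (mean, std, entropy)} of LoG responses inside
        `mask`, a boolean region mask with the same shape as `img`."""
        feats = {}
        for s in scales:
            resp = gaussian_laplace(img.astype(float), sigma=s)[mask]
            hist, _ = np.histogram(resp, bins=32)
            p = hist[hist > 0] / hist.sum()
            feats[s] = (resp.mean(), resp.std(),
                        float(-(p * np.log2(p)).sum()))
        return feats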
Automatic identification of bacterial types using statistical imaging methods
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon
2003-05-01
The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology enables an increase in the number of phages used for typing. The statistical methodology presented in this work provides an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.
Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.
Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V
2013-09-10
The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model. Published by Elsevier Ireland Ltd.
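A toy stand-in for the automatic CMS count, assuming each bullet-land profile has already been reduced to a sorted list of striae positions; the matching tolerance is an assumption, not the paper's criterion:

    def max_cms(marks_a, marks_b, tol=2.0):
        """Longest run of consecutive striae in profile A that find a
        counterpart in profile B within `tol` (same length units)."""
        matched = [any(abs(a - b) <= tol for b in marks_b) for a in marks_a]
        best = run = 0
        for m in matched:
            run = run + 1 if m else 0
            best = max(best, run)
        return best

    print(max_cms([1.0, 3.1, 5.2, 9.9], [1.1, 3.0, 5.0, 14.0]))  # 3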
Highlight summarization in golf videos using audio signals
NASA Astrophysics Data System (ADS)
Kim, Hyoung-Gook; Kim, Jin Young
2008-01-01
In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system is based on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Sounds like swing and applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Experimental results show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough to be applied in embedded consumer electronic devices.
The personal aircraft: Status and issues
NASA Technical Reports Server (NTRS)
Anders, Scott G.; Asbury, Scott C.; Brentner, Kenneth S.; Bushnell, Dennis M.; Glass, Christopher E.; Hodges, William T.; Morris, Shelby J., Jr.; Scott, Michael A.
1994-01-01
This paper summarizes the status of personal air transportation with emphasis upon VTOL and converticar capability. The former obviates the need for airport operations for personal aircraft, whereas the latter provides both ground and air capability in the same vehicle. Fully automatic operation, ATC, and navigation are stressed, along with consideration of acoustic, environmental, and cost issues.
Development of analysis techniques for remote sensing of vegetation resources
NASA Technical Reports Server (NTRS)
Draeger, W. C.
1972-01-01
Various data handling and analysis techniques are summarized for evaluation of ERTS-A and supporting high flight imagery. These evaluations are concerned with remote sensors applied to wildland and agricultural vegetation resource inventory problems. Monitoring California's annual grassland, automatic texture analysis, agricultural ground data collection techniques, and spectral measurements are included.
Visual Search by Children with and without ADHD
ERIC Educational Resources Information Center
Mullane, Jennifer C.; Klein, Raymond M.
2008-01-01
Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…
The Development of Reading for Comprehension: An Information Processing Analysis. Final Report.
ERIC Educational Resources Information Center
Schadler, Margaret; Juola, James F.
This report summarizes research performed at the University of Kansas that involved several topics related to reading and learning to read, including the development of automatic word recognition processes, reading for comprehension, and the development of new computer technologies designed to facilitate the reading process. The first section…
NASA Technical Reports Server (NTRS)
Coggeshall, M. E.; Hoffer, R. M.
1973-01-01
Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.
The transition to increased automaticity during finger sequence learning in adult males who stutter.
Smits-Bandstra, Sarah; De Nil, Luc; Rochon, Elizabeth
2006-01-01
The present study compared the automaticity levels of persons who stutter (PWS) and persons who do not stutter (PNS) on a practiced finger sequencing task under dual task conditions. Automaticity was defined as the amount of attention required for task performance. Twelve PWS and 12 control subjects practiced finger tapping sequences under single and then dual task conditions. Control subjects performed the sequencing task significantly faster and less variably under single versus dual task conditions while PWS' performance was consistently slow and variable (comparable to the dual task performance of control subjects) under both conditions. Control subjects were significantly more accurate on a colour recognition distracter task than PWS under dual task conditions. These results suggested that control subjects transitioned to quick, accurate and increasingly automatic performance on the sequencing task after practice, while PWS did not. Because most stuttering treatment programs for adults include practice and automatization of new motor speech skills, findings of this finger sequencing study and future studies of speech sequence learning may have important implications for how to maximize stuttering treatment effectiveness. As a result of this activity, the participant will be able to: (1) Define automaticity and explain the importance of dual task paradigms to investigate automaticity; (2) Relate the proposed relationship between motor learning and automaticity as stated by the authors; (3) Summarize the reviewed literature concerning the performance of PWS on dual tasks; and (4) Explain why the ability to transition to automaticity during motor learning may have important clinical implications for stuttering treatment effectiveness.
Moradi, Milad; Ghadiri, Nasser
2018-01-01
Automatic text summarization tools help users in the biomedical domain to acquire their intended information from various textual resources more efficiently. Some biomedical text summarization systems base their sentence selection on the frequency of concepts extracted from the input text. However, exploring measures other than raw frequency for identifying valuable content within an input document, or considering correlations between concepts, may be more useful for this type of summarization. In this paper, we describe a Bayesian summarization method for biomedical text documents. The Bayesian summarizer initially maps the input text to Unified Medical Language System (UMLS) concepts; it then selects the important ones to be used as classification features. We introduce six different feature selection approaches to identify the most important concepts of the text and select the most informative contents according to the distribution of these concepts. We show that with the use of an appropriate feature selection approach, the Bayesian summarizer can improve the performance of biomedical summarization. Using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit, we perform extensive evaluations on a corpus of scientific papers in the biomedical domain. The results show that when the Bayesian summarizer utilizes feature selection methods that do not use raw frequency, it can outperform biomedical summarizers that rely on the frequency of concepts, as well as domain-independent and baseline methods. Copyright © 2017 Elsevier B.V. All rights reserved.
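A deliberately simplified, non-Bayesian stand-in for the concept-based sentence selection described above: sentences are bags of UMLS concept IDs (the mapping step is assumed already done) and are ranked by how many selected feature concepts they contain, weighted by document frequency. The paper's actual classifier and feature selectors are richer than this:

    from collections import Counter

    def rank_sentences(sent_concepts, features, k=3):
        """sent_concepts: list of concept-ID lists, one per sentence.
        Returns the indices of the top-k sentences in original order."""
        df = Counter(c for s in sent_concepts for c in set(s))
        scores = [sum(df[c] for c in s if c in features)
                  for s in sent_concepts]
        order = sorted(range(len(scores)), key=scores.__getitem__,
                       reverse=True)
        return sorted(order[:k])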
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
Automatic control of a primary electric thrust subsystem
NASA Technical Reports Server (NTRS)
Macie, T. W.; Macmedan, M. L.
1975-01-01
A concept for automatic control of the thrust subsystem has been developed by JPL and participating NASA Centers. This paper reports on progress in implementing the concept at JPL. Control of the Thrust Subsystem (TSS) is performed by the spacecraft computer command subsystem, and telemetry data is extracted by the spacecraft flight data subsystem. The Data and Control Interface Unit, an element of the TSS, provides the interface with the individual elements of the TSS. The control philosophy and implementation guidelines are presented. Control requirements are listed, and the control mechanism, including the serial digital data intercommunication system, is outlined. The paper summarizes progress to Fall 1974.
NASA Astrophysics Data System (ADS)
Granzer, T.; Reegen, P.; Strassmeier, K. G.
2001-12-01
We present a summary of five years of continuous operation of the University of Vienna twin Automatic Photoelectric Telescopes (APTs), Wolfgang and Amadeus. These two telescopes are part of the Fairborn Observatory facility located in the Sonoran desert close to Washington Camp in southern Arizona. The detection of, and distinction between, weather-induced and systematic data-quality loss turned out to be a crucial task. Therefore, special emphasis is laid on the data-quality monitoring tools developed throughout the years. Furthermore, we summarize the scientific highlights from the first five years of operation.
Research at Yale in Natural Language Processing. Research Report #84.
ERIC Educational Resources Information Center
Schank, Roger C.
This report summarizes the capabilities of five computer programs at Yale that do automatic natural language processing as of the end of 1976. For each program an introduction to its overall intent is given, followed by the input/output, a short discussion of the research underlying the program, and a prognosis for future development. The programs…
Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions
ERIC Educational Resources Information Center
Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.
2003-01-01
Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…
Monolithic ceramic analysis using the SCARE program
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.
1988-01-01
The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
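The element-wise reliability combination underlying such codes can be written in a few lines; the sketch below uses the simplest two-parameter Weibull volume-flaw form with illustrative numbers, not SCARE's multiaxial Batdorf criteria:

    import math

    def survival(stress, volume, m, sigma_0):
        """P_s = exp(-V * (sigma / sigma_0)**m) for a uniformly
        stressed element (m: Weibull modulus, sigma_0: scale)."""
        return math.exp(-volume * (stress / sigma_0) ** m)

    elements = [(250.0, 1e-3), (180.0, 2e-3)]   # (stress MPa, volume m^3)
    p_s = 1.0
    for s, v in elements:                        # component = series system
        p_s *= survival(s, v, m=10.0, sigma_0=300.0)
    print("component fast-fracture reliability:", p_s)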
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
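A toy two-note demonstration of the PCA-versus-ICA contrast drawn above; the frequencies and mixing matrix are invented, and scikit-learn's FastICA stands in for whatever ICA variant the authors used:

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    t = np.linspace(0, 1, 4000)
    s1 = np.sin(2 * np.pi * 523 * t)               # "C5" component
    s2 = np.sign(np.sin(2 * np.pi * 587 * t))      # "D5", non-Gaussian
    X = np.c_[s1, s2] @ np.array([[1.0, 0.6],
                                  [0.4, 1.0]]).T   # two-channel mixture

    pca_est = PCA(n_components=2).fit_transform(X)   # decorrelates only
    ica_est = FastICA(n_components=2, random_state=0).fit_transform(X)
    # ica_est recovers the notes up to order/scale; pca_est generally mixes them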
B-737 Linear Autoland Simulink Model
NASA Technical Reports Server (NTRS)
Belcastro, Celeste (Technical Monitor); Hogge, Edward F.
2004-01-01
The Linear Autoland Simulink model was created to be a modular test environment for testing of control system components in commercial aircraft. The input variables, physical laws, and referenced frames used are summarized. The state space theory underlying the model is surveyed and the location of the control actuators described. The equations used to realize the Dryden gust model to simulate winds and gusts are derived. A description of the pseudo-random number generation method used in the wind gust model is included. The longitudinal autopilot, lateral autopilot, automatic throttle autopilot, engine model and automatic trim devices are considered as subsystems. The experience in converting the Airlabs FORTRAN aircraft control system simulation to a graphical simulation tool (Matlab/Simulink) is described.
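A minimal discrete-time sketch of the longitudinal Dryden gust filter such a model realizes: white noise shaped by H(s) = sigma_u * sqrt(2*L_u/(pi*V)) / (1 + (L_u/V)s), discretized with a forward-Euler step. The parameter values are illustrative, not those of the B-737 model:

    import numpy as np

    def dryden_u(n, dt, V=200.0, L_u=533.0, sigma_u=1.5, seed=0):
        """Longitudinal gust velocity series (illustrative units)."""
        rng = np.random.default_rng(seed)
        tau = L_u / V                                   # time constant
        gain = sigma_u * np.sqrt(2.0 * L_u / (np.pi * V))
        u = np.zeros(n)
        for k in range(1, n):
            w = rng.standard_normal() / np.sqrt(dt)     # unit-intensity noise
            u[k] = u[k - 1] + dt * (gain * w - u[k - 1]) / tau
        return u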
Functional Differences between Statistical Learning with and without Explicit Training
ERIC Educational Resources Information Center
Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor objects among them. In level set methods, the calculation of parameters is a challenging task; here, the parameters for different types of images are calculated automatically. The basic thresholding value is updated and adjusted automatically for different MR images and is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
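A sketch of a signed pressure function of the kind this paper builds on (the classic region-mean form; the paper's new SPF and its automatic parameter updates are not reproduced here):

    import numpy as np

    def spf(img, phi):
        """Positive inside bright objects, negative outside, small near
        edges, so the evolving contour slows at weak boundaries.
        phi: current level set; both regions assumed non-empty."""
        c1 = img[phi > 0].mean()       # mean intensity inside the contour
        c2 = img[phi <= 0].mean()      # mean intensity outside
        f = img - (c1 + c2) / 2.0
        return f / (np.abs(f).max() + 1e-12)

    # one explicit evolution step would be:
    #   phi += dt * spf(img, phi) * |grad(phi)|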
Management of natural resources through automatic cartographic inventory
NASA Technical Reports Server (NTRS)
Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Significant results of the ARNICA program from August 1972 - January 1973 have been: (1) establishment of image to object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneous with imagery.
MPEG content summarization based on compressed domain feature analysis
NASA Astrophysics Data System (ADS)
Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa
2003-11-01
This paper addresses automatic summarization of MPEG audiovisual content on compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes the MPEG-1/-2 contents in the form of digest or highlight. The former is a shortened version of an original, while the latter is an aggregation of important or interesting events. In our proposal, first, the incoming MPEG stream is segmented into shots and the above features are derived from each shot. Then the features are adaptively evaluated in an integrated manner, and finally the qualified shots are aggregated into a summary. Since all the processes are performed completely on compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation proves that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description such as summary and audiovisual segments in the course of summarization.
Keefe, Donald J.
1980-01-01
An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.
Different Manhattan project: automatic statistical model generation
NASA Astrophysics Data System (ADS)
Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore
2002-03-01
We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
Automatic Evaluations and Exercising: Systematic Review and Implications for Future Research.
Schinkoeth, Michaela; Antoniewicz, Franziska
2017-01-01
The general purpose of this systematic review was to summarize, structure and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (out of a total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study evaluated. First, results revealed a large heterogeneity in the applied measures to assess automatic evaluations of exercising and the exercise variables. Generally, small to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts readers to differentiate more carefully between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparent reported reflections on the differing theoretical basis leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages or disadvantages of the measures. Third, 12 studies were rated as providing first-grade evidence (lowest grade of evidence), five represent second-grade and three were rated as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise and investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore not be drawn too hastily.
micromap: A Package for Linked Micromaps
The R package micromap is used to create linked micromaps, which display statistical summaries associated with areal units, or polygons. Linked micromaps provide a means to simultaneously summarize and display both statistical and geographic distributions by linking statistical ...
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some of the applications such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs since the number of correctly localized abnormal images is random, while in the latter the violation is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.
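For reference, the fixed-sample statistic being generalized above is just a rank sum with midranks for ties; in the paper's random-sum setting, the size of the first sample is itself random (e.g., the number of correctly localized images in LROC analysis):

    import numpy as np

    def rank_sum(x, y):
        """Wilcoxon rank-sum of x against y, with midranks for ties."""
        z = np.concatenate([x, y])
        ranks = np.empty(len(z))
        ranks[z.argsort(kind="mergesort")] = np.arange(1, len(z) + 1)
        for v in np.unique(z):                 # average ranks over ties
            ranks[z == v] = ranks[z == v].mean()
        return ranks[: len(x)].sum()

    print(rank_sum(np.array([1.2, 2.0, 2.0]), np.array([0.5, 2.0])))  # 10.0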
Information retrieval and terminology extraction in online resources for patients with diabetes.
Seljan, Sanja; Baretić, Maja; Kucis, Vlasta
2014-06-01
Terminology use, as a means for information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e. patients with diabetes, need access to various online resources (in a foreign and/or native language) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods in English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall, precision and f-measure.
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies in both experiments, namely, motor imagery and emotion recognition.
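The identification step above can be caricatured with plain ICA and template correlation; the wavelet pre-decomposition is omitted for brevity, and the templates and threshold stand in for the paper's a priori artifact information:

    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_artifacts(eeg, templates, r_thresh=0.8):
        """eeg: (channels, samples); templates: artifact traces recorded
        in advance, each of length `samples`. Components correlating
        strongly with any template are zeroed before reconstruction."""
        ica = FastICA(n_components=eeg.shape[0], random_state=0)
        S = ica.fit_transform(eeg.T)              # samples x components
        for i in range(S.shape[1]):
            r = max(abs(np.corrcoef(S[:, i], t)[0, 1]) for t in templates)
            if r > r_thresh:
                S[:, i] = 0.0                     # drop artifact component
        return ica.inverse_transform(S).T         # cleaned channels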
ERIC Educational Resources Information Center
Jones, Daniel; Alexa, Melina
As part of the development of a completely sub-symbolic machine translation system, a method for automatically identifying German compounds was developed. Given a parallel bilingual corpus, German compounds are identified along with their English word groupings by statistical processing alone. The underlying principles and the design process are…
Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel
2017-01-01
Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM, derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient's medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts at automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example-based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long-range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
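For clarity, a small sketch of the example-based (per-visit) averaging of recall and precision reported above; the predicted and gold code sets are placeholders, not data from the study:

```python
def example_based_scores(predicted, gold):
    """predicted, gold: lists of sets of ICD-9 codes, one pair per patient visit."""
    precisions, recalls = [], []
    for pred, ref in zip(predicted, gold):
        tp = len(pred & ref)
        precisions.append(tp / len(pred) if pred else 0.0)
        recalls.append(tp / len(ref) if ref else 0.0)
    return sum(precisions) / len(precisions), sum(recalls) / len(recalls)

# e.g. avg_p, avg_r = example_based_scores([{"250.00", "401.9"}], [{"250.00"}])
```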
Keyphrase based Evaluation of Automatic Text Summarization
NASA Astrophysics Data System (ADS)
Elghannam, Fatma; El-Shishtawy, Tarek
2015-05-01
The development of methods to deal with the informative contents of the text units in the matching process is a major challenge in automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new keyphrase-based summary evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of an Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems: Rouge1, Rouge2, RougeSU4, and AutoSummENG MeMoG. KpEval has the strongest correlation with AutoSummENG MeMoG; the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
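A toy sketch of the core matching idea, comparing normalized keyphrases of a peer summary against those of a reference summary; keyphrase extraction and lemmatization are stubbed out (here just lowercasing and whitespace normalization), so this is only an illustration of the matching unit, not the KpEval implementation:

```python
def keyphrase_score(peer_phrases, reference_phrases):
    """Fraction of reference keyphrases covered by the peer summary (lemma-level match)."""
    norm = lambda phrases: {" ".join(p.lower().split()) for p in phrases}  # stand-in for lemmatization
    peer, ref = norm(peer_phrases), norm(reference_phrases)
    return len(peer & ref) / len(ref) if ref else 0.0
```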
Efficient reordering of PROLOG programs
NASA Technical Reports Server (NTRS)
Gooley, Markian M.; Wah, Benjamin W.
1989-01-01
PROLOG programs are often inefficient: execution corresponds to a depth-first traversal of an AND/OR graph; traversing subgraphs in another order can be less expensive. It is shown how the reordering of clauses within PROLOG predicates, and especially of goals within clauses, can prevent unnecessary search. The characterization and detection of restrictions on reordering is discussed. A system of calling modes for PROLOG, geared to reordering, is proposed, and ways to infer them automatically are discussed. The information needed for safe reordering is summarized, and which types can be inferred automatically and which must be provided by the user are considered. An improved method for determining a good order for the goals of PROLOG clauses is presented and used as the basis for a reordering system.
Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal
Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal
2013-01-01
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433
Attitudes as Object-Evaluation Associations of Varying Strength
Fazio, Russell H.
2009-01-01
Historical developments regarding the attitude concept are reviewed, and set the stage for consideration of a theoretical perspective that views attitude, not as a hypothetical construct, but as evaluative knowledge. A model of attitudes as object-evaluation associations of varying strength is summarized, along with research supporting the model’s contention that at least some attitudes are represented in memory and activated automatically upon the individual’s encountering the attitude object. The implications of the theoretical perspective for a number of recent discussions related to the attitude concept are elaborated. Among these issues are the notion of attitudes as “constructions,” the presumed malleability of automatically-activated attitudes, correspondence between implicit and explicit measures of attitude, and postulated dual or multiple attitudes. PMID:19424447
NASA Technical Reports Server (NTRS)
Shiva, S. G.
1978-01-01
Several high level languages which evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.
NASA Technical Reports Server (NTRS)
Desoer, C. A.; Polak, E.; Zadeh, L. A.
1974-01-01
A series of research projects is briefly summarized which includes investigations in the following areas: (1) mathematical programming problems for large system and infinite-dimensional spaces, (2) bounded-input bounded-output stability, (3) non-parametric approximations, and (4) differential games. A list of reports and papers which were published over the ten year period of research is included.
Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations
NASA Technical Reports Server (NTRS)
Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine
2006-01-01
Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
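For reference, a sketch of the two contour-agreement measures used above, computed on binary masks and contour point sets; this assumes NumPy/SciPy data structures and is not the authors' code:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_index(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def point_to_curve_error(points_a, points_b):
    """Mean distance from each point on contour A to the closest point on contour B."""
    d = cdist(points_a, points_b)          # pairwise distances, shape (len(a), len(b))
    return d.min(axis=1).mean()
```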
FAMA: Fast Automatic MOOG Analysis
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2014-02-01
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria, excitation equilibrium, ionization balance, and the relationship between logn(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiments framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied are able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
Creating a medical dictionary using word alignment: the influence of sources and resources.
Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Ahlfeldt, Hans
2007-11-23
Automatic word alignment of parallel texts with the same content in different languages is among other things used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. We automatically word aligned the terminology systems using static resources, like dictionaries, statistical resources, like statistically derived dictionaries, and training resources, which were generated from manual word alignment. We varied which part of the terminology systems that we used to generate the resources, which parts that we word aligned and which types of resources we used in the alignment process to explore the influence the different terminology systems and resources have on the recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. The results indicate that more resources and resource types give better results but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10 and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static resources, statistical resources and training resources and noticeably better results than a union of static resources and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. More resources give better results in the automatic word alignment, but some resources only give small improvements. The most important type of resource is training and the most general resources were generated from ICD-10.
Creating a medical dictionary using word alignment: The influence of sources and resources
Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Åhlfeldt, Hans
2007-01-01
Background Automatic word alignment of parallel texts with the same content in different languages is among other things used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. Methods We automatically word aligned the terminology systems using static resources, like dictionaries, statistical resources, like statistically derived dictionaries, and training resources, which were generated from manual word alignment. We varied which part of the terminology systems that we used to generate the resources, which parts that we word aligned and which types of resources we used in the alignment process to explore the influence the different terminology systems and resources have on the recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. Results The results indicate that more resources and resource types give better results but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10 and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static resources, statistical resources and training resources and noticeably better results than a union of static resources and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. Conclusion More resources give better results in the automatic word alignment, but some resources only give small improvements. The most important type of resource is training and the most general resources were generated from ICD-10. PMID:18036221
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S; Wu, Y; Chang, X
Purpose: A novel computer software system, namely APDV (Automatic Pre-Delivery Verification), has been developed for verifying patient treatment plan parameters right before treatment deliveries in order to automatically detect and prevent catastrophic errors. Methods: APDV is designed to continuously monitor new DICOM plan files on the TMS computer at the treatment console. When new plans to be delivered are detected, APDV checks the consistency of plan parameters and high-level plan statistics using underlying rules and statistical properties based on the given treatment site, technique and modality. These rules were quantitatively derived by retrospectively analyzing all the EBRT treatment plans of the past 8 years at the authors' institution. Therapists and physicists will be notified with a warning message displayed on the TMS computer if any critical errors are detected, and check results, confirmations, and dismissal actions will be saved into a database for further review. Results: APDV was implemented as a stand-alone program using C# to ensure the required real-time performance. Mean values and standard deviations were quantitatively derived for various plan parameters including MLC usage, MU/cGy ratio, beam SSD, beam weighting, and beam gantry angles (only for lateral targets) per treatment site, technique and modality. 2D-based rules combining the MU/cGy ratio and averaged SSD values were also derived using joint probabilities of confidence error ellipses. The statistics of these major treatment plan parameters quantitatively evaluate the consistency of any treatment plan, which facilitates the automatic APDV checking procedure. Conclusion: APDV could be useful in detecting and preventing catastrophic errors immediately before treatment deliveries. Future plans include automatic patient identity and patient setup checks after daily patient images are acquired by the machine and become available on the TMS computer. This project is supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical System.
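A sketch of the kind of 2D rule described above: flag a plan whose (MU/cGy ratio, mean SSD) pair falls outside a confidence error ellipse derived from historical plans of the same site, technique and modality. The confidence level, data layout and variable names are illustrative assumptions, not the APDV implementation:

```python
import numpy as np
from scipy.stats import chi2

def fit_ellipse_rule(history, confidence=0.95):
    """history: (n_plans, 2) array of historical (MU/cGy ratio, mean SSD) values for one site/technique."""
    mean = history.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(history, rowvar=False))
    threshold = chi2.ppf(confidence, df=2)            # squared Mahalanobis cutoff of the ellipse
    return mean, cov_inv, threshold

def plan_is_consistent(plan_point, mean, cov_inv, threshold):
    """True if the new plan's (MU/cGy, mean SSD) pair lies inside the confidence error ellipse."""
    d = np.asarray(plan_point, dtype=float) - mean
    return float(d @ cov_inv @ d) <= threshold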
Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong
2017-01-01
Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor-intensive and time-consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by the previous software. We developed software to track the hyoid bone motion semi-automatically. It works even in the situation where the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added the function of semi-automatic smoothing and segmentation. A total of 30 patients' data were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between the manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value<0.0001). Relative errors between automatic tracking and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the automatic indication of when to switch from automatic to manual mode in extreme cases, and calibration without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which is beneficial to further statistical analysis such as functional classification and prognostication for dysphagia. Therefore, this software could provide researchers in the field of dysphagia with a convenient, useful, all-in-one platform for analyzing the hyoid bone motion. Further development of our method to track other swallowing-related structures or objects, such as the epiglottis and bolus, and to carry out 2D curve registration may be needed for a more comprehensive functional data analysis for dysphagia with big data.
A review of approaches to identifying patient phenotype cohorts using electronic health records
Shivade, Chaitanya; Raghavan, Preethi; Fosler-Lussier, Eric; Embi, Peter J; Elhadad, Noemie; Johnson, Stephen B; Lai, Albert M
2014-01-01
Objective To summarize literature describing approaches aimed at automatically identifying patients with a common phenotype. Materials and methods We performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association, (2) Journal of Biomedical Informatics, (3) Proceedings of the Annual American Medical Informatics Association Symposium, and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included. Results Ninety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients. Discussion We observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems. Conclusions There are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses. PMID:24201027
International Community Development Statistical Bulletin. Spring 1968 General Edition.
ERIC Educational Resources Information Center
Community Development Foundation, New York, NY.
The Spring 1968 general edition of the International Community Development Statistical Bulletin describes its reporting system based on the International Standard Classification of Community Development Activities and a special project registration and progress form; briefly summarizes overall international data; and presents statistics on…
Summary of data reported to CDC's national automated biosurveillance system, 2008
2010-01-01
Background BioSense is the US national automated biosurveillance system. Data regarding chief complaints and diagnoses are automatically pre-processed into 11 broader syndromes (e.g., respiratory) and 78 narrower sub-syndromes (e.g., asthma). The objectives of this report are to present the types of illness and injury that can be studied using these data and the frequency of visits for the syndromes and sub-syndromes in the various data types; this information will facilitate use of the system and comparison with other systems. Methods For each major data source, we summarized information on the facilities, timeliness, patient demographics, and rates of visits for each syndrome and sub-syndrome. Results In 2008, the primary data sources were the 333 US Department of Defense, 770 US Veterans Affairs, and 532 civilian hospital emergency department facilities. Median times from patient visit to record receipt at CDC were 2.2 days, 2.0 days, and 4 hours for these sources respectively. Among sub-syndromes, we summarize mean 2008 visit rates in 45 infectious disease categories, 11 injury categories, 7 chronic disease categories, and 15 other categories. Conclusions We present a systematic summary of data that is automatically available to public health departments for monitoring and responding to emergencies. PMID:20500863
How to Create Automatically Graded Spreadsheets for Statistics Courses
ERIC Educational Resources Information Center
LoSchiavo, Frank M.
2016-01-01
Instructors often use spreadsheet software (e.g., Microsoft Excel) in their statistics courses so that students can gain experience conducting computerized analyses. Unfortunately, students tend to make several predictable errors when programming spreadsheets. Without immediate feedback, programming errors are likely to go undetected, and as a…
An unsupervised method for summarizing egocentric sport videos
NASA Astrophysics Data System (ADS)
Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec
2015-12-01
People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be described in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.
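A rough sketch of one way to pick key-frames without supervision: cluster per-frame colour histograms and keep the frame nearest each cluster centre. The paper's actual method also uses motion information and determines the number of key-frames automatically, which this sketch does not; the fixed cluster count and histogram features are placeholder assumptions:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def keyframes(video_path, n_keyframes=10):
    cap = cv2.VideoCapture(video_path)
    feats, frames = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        feats.append(cv2.normalize(h, h).flatten())     # colour histogram as frame descriptor
        frames.append(frame)
    cap.release()
    feats = np.array(feats)
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(feats)
    picks = sorted({int(np.argmin(np.linalg.norm(feats - c, axis=1)))
                    for c in km.cluster_centers_})      # frame closest to each cluster centre
    return picks, [frames[i] for i in picks]
```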
Automatic Bone Drilling - More Precise, Reliable and Safe Manipulation in the Orthopaedic Surgery
NASA Astrophysics Data System (ADS)
Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Delchev, Kamen; Zagurski, Kazimir
2016-06-01
Bone drilling manipulation often occurs in orthopaedic surgery. Statistics show that about one million people in Europe alone need such an operation every year, in which bone implants are inserted. Almost always, the drilling is performed by hand, which cannot avoid the influence of the subjective factor. The question of subjective factor reduction has its answer - automatic bone drilling. The specific features and problems of the orthopaedic drilling manipulation are considered in this work. The automatic drilling is presented in terms of the capabilities of the robotized system Orthopaedic Drilling Robot (ODRO) for assuring manipulation accuracy, precision, reliability and safety.
ERIC Educational Resources Information Center
Hoachlander, Gary; And Others
In the fall of 1995 the National Center for Education Statistics (NCES) held a conference to stimulate dialogue about future developments in the fields of education, statistical methodology, and technology and the implications of these developments for the nation's education statistics program. This paper summarizes and synthesizes the results of…
Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription
NASA Astrophysics Data System (ADS)
Kabir, A.; Barker, J.; Giurgiu, M.
2010-09-01
An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is especially useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard Hidden Markov Model (HMM) approach and was successfully tested on a large audiovisual speech corpus, namely the GRID corpus. One of the most powerful features of the toolbox is the increased flexibility in speech processing: the speech community is able to import the automatic transcription generated by the HMM Toolkit (HTK) into a popular transcription software, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates by an average of 20 ms with respect to manual transcription.
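A minimal sketch of the HTK-to-PRAAT direction of the import/export described above, assuming a plain HTK .lab file ("start end label" per line, times in 100 ns units) and writing a single-tier Praat TextGrid; a full toolbox would also handle MLF files and multiple tiers, so this is an illustration rather than the authors' converter:

```python
def htk_lab_to_textgrid(lab_path, textgrid_path, tier_name="phones"):
    # HTK label times are integers in units of 100 ns; convert to seconds.
    intervals = []
    with open(lab_path) as f:
        for line in f:
            start, end, label = line.split()[:3]
            intervals.append((int(start) / 1e7, int(end) / 1e7, label))
    xmax = intervals[-1][1] if intervals else 0.0
    lines = ['File type = "ooTextFile"', 'Object class = "TextGrid"', "",
             "xmin = 0", f"xmax = {xmax}", "tiers? <exists>", "size = 1", "item []:",
             "    item [1]:", '        class = "IntervalTier"', f'        name = "{tier_name}"',
             "        xmin = 0", f"        xmax = {xmax}",
             f"        intervals: size = {len(intervals)}"]
    for i, (t0, t1, label) in enumerate(intervals, 1):
        lines += [f"        intervals [{i}]:", f"            xmin = {t0}",
                  f"            xmax = {t1}", f'            text = "{label}"']
    with open(textgrid_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```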
DeTEXT: A Database for Evaluating Text Extraction from Biomedical Literature Figures
Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong
2015-01-01
Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/. PMID:25951377
A probabilistic model of emphysema based on granulometry analysis
NASA Astrophysics Data System (ADS)
Marcos, J. V.; Nava, R.; Cristobal, G.; Munoz-Barrutia, A.; Escalante-Ramírez, B.; Ortiz-de-Solórzano, C.
2013-11-01
Emphysema is associated with the destruction of lung parenchyma, resulting in abnormal enlargement of airspaces. Accurate quantification of emphysema is required for a better understanding of the disease as well as for the assessment of drugs and treatments. In the present study, a novel method for emphysema characterization from histological lung images is proposed. Elastase-induced mice were used to simulate the effect of emphysema on the lungs. A database composed of 50 normal and 50 emphysematous lung patches of size 512 x 512 pixels was used in our experiments. The purpose is to automatically identify those patches containing emphysematous tissue. The proposed approach is based on the use of granulometry analysis, which provides the pattern spectrum describing the distribution of airspaces in the lung region under evaluation. The profile of the spectrum was summarized by a set of statistical features. A logistic regression model was then used to estimate the probability for a patch to be emphysematous from this feature set. An accuracy of 87% was achieved by our method in the classification between normal and emphysematous samples. This result shows the utility of our granulometry-based method to quantify the lesions due to emphysema.
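A sketch of the granulometry-based pipeline described above: the pattern spectrum is the normalized loss of image volume after morphological openings with discs of increasing radius, its profile is summarized by a few statistics, and a logistic regression model maps the features to an emphysema probability. The radii, the specific summary features, and the patch data are placeholder assumptions, not the authors' exact configuration:

```python
import numpy as np
from skimage.morphology import opening, disk
from sklearn.linear_model import LogisticRegression

def pattern_spectrum(patch, radii=range(1, 16)):
    """patch: 2-D grayscale array; returns the volume removed at each opening scale."""
    volumes = [patch.sum()] + [opening(patch, disk(r)).sum() for r in radii]
    spectrum = -np.diff(volumes).astype(float)
    return spectrum / volumes[0]                       # normalized pattern spectrum

def features(patch):
    s = pattern_spectrum(patch)
    bins = np.arange(1, len(s) + 1)
    mean_size = (bins * s).sum() / s.sum()             # average airspace scale
    return np.array([s.sum(), mean_size, s.std(), s.argmax() + 1])

# X = np.array([features(p) for p in patches]); y = labels (0 = normal, 1 = emphysema)
# clf = LogisticRegression().fit(X, y); probabilities = clf.predict_proba(X)[:, 1]
```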
Primal/dual linear programming and statistical atlases for cartilage segmentation.
Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. Such a task is reformulated using a discrete set of deformations and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 with a sensitivity and specificity of 94.06% and 99.92%, respectively.
Teaching Statistics with Minitab II.
ERIC Educational Resources Information Center
Ryan, T. A., Jr.; And Others
Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…
Theoretical Frameworks for Math Fact Fluency
ERIC Educational Resources Information Center
Arnold, Katherine
2012-01-01
Recent education statistics indicate persistent low math scores for our nation's students. This drop in math proficiency includes deficits in basic number sense and automaticity of math facts. The decrease has been recorded across all grade levels with the elementary levels showing the greatest loss (National Center for Education Statistics,…
North American Library Education; Directory and Statistics 1971-1973.
ERIC Educational Resources Information Center
Weintraub, D. Kathryn, Ed.; Reed, Sarah R., Ed.
Five separate articles summarize library education at the graduate, undergraduate, and technical assistant levels in the United States and library education in Canada and other parts of North America. Statistical tables are included within the explanatory essays. Over 30 pages of statistical tables give information on specific institutions. The…
Improving Statistical Skills through Students' Participation in the Development of Resources
ERIC Educational Resources Information Center
Biza, Irene; Vande Hey, Eugénie
2015-01-01
This paper summarizes the evaluation of a project that involved undergraduate mathematics students in the development of teaching and learning resources for statistics modules taught in various departments of a university. This evaluation regards students' participation in the project and its impact on their learning of statistics, as…
Instinctive analytics for coalition operations (Conference Presentation)
NASA Astrophysics Data System (ADS)
de Mel, Geeth R.; La Porta, Thomas; Pham, Tien; Pearson, Gavin
2017-05-01
The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system's ability to share data and processing services (e.g. aggregation, summarization, fusion), and automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex for the coalition environment due to its dynamism both in terms of user requirements and service availability. In order to address the above challenge, in this paper, we highlight our research vision and sketch some initial solutions into the problem domain. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of operating environment and service fit. We envision that the research will result in a framework which enables self-description, discover, and assemble capabilities to both data and services in support of coalition mission goals.
A review of NASA international programs
NASA Technical Reports Server (NTRS)
1979-01-01
A synoptic overview of NASA's international activities to January 1979 is presented. The cooperating countries and international organizations are identified. Topics covered include (1) cooperative arrangements for ground-based, spaceborne, airborne, rocket-borne, and balloon-borne ventures, joint development, and aeronautical R & D; (2) reimbursable launchings; (3) tracking and data acquisition; and (4) personnel exchanges. International participation in NASA's Earth resources investigations is summarized in the appendix. A list of automatic picture transmission stations is included.
Vasconcelos, Maria J M; Ventura, Sandra M R; Freitas, Diamantino R S; Tavares, João Manuel R S
2012-03-01
The morphological and dynamic characterisation of the vocal tract during speech production has been gaining greater attention due to the motivation of the latest improvements in magnetic resonance (MR) imaging; namely, with the use of higher magnetic fields, such as 3.0 Tesla. In this work, the automatic study of the vocal tract from 3.0 Tesla MR images was assessed through the application of statistical deformable models. Therefore, the primary goal focused on the analysis of the shape of the vocal tract during the articulation of European Portuguese sounds, followed by the evaluation of the results concerning the automatic segmentation, i.e. identification of the vocal tract in new MR images. In what concerns speech production, this is the first attempt to automatically characterise and reconstruct the vocal tract shape of 3.0 Tesla MR images by using deformable models; particularly, by using active and appearance shape models. The achieved results clearly evidence the adequacy and advantage of the automatic analysis of the 3.0 Tesla MR images of these deformable models in order to extract the vocal tract shape and assess the involved articulatory movements. These achievements are mostly required, for example, for a better knowledge of speech production, mainly of patients suffering from articulatory disorders, and to build enhanced speech synthesizer models.
Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad
2016-04-27
Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which is the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results have demonstrated that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model has little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
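A sketch of the chrominance-histogram likelihood idea described above: drop the Y channel, build Cb/Cr histograms (128 bins, as chosen in the paper) from labelled training pixels, and classify each pixel by maximum likelihood. The training pixel arrays, smoothing and value ranges are illustrative assumptions rather than the ImageJ plugin's implementation:

```python
import numpy as np
from skimage.color import rgb2ycbcr

BINS = 128  # number of histogram bins per chrominance channel, as chosen in the paper

def cbcr(rgb_pixels):
    ycbcr = rgb2ycbcr(rgb_pixels.reshape(-1, 1, 3)).reshape(-1, 3)
    return ycbcr[:, 1:]                               # discard luminance (Y), keep Cb and Cr

def train_histogram(rgb_pixels):
    h, _, _ = np.histogram2d(*cbcr(rgb_pixels).T, bins=BINS, range=[[16, 240], [16, 240]])
    return (h + 1) / (h.sum() + BINS * BINS)          # Laplace-smoothed pixel likelihoods

def classify(image, pos_hist, neg_hist):
    cc = cbcr(image.reshape(-1, 3))
    idx = np.clip(((cc - 16) / (240 - 16) * BINS).astype(int), 0, BINS - 1)
    pos = pos_hist[idx[:, 0], idx[:, 1]]
    neg = neg_hist[idx[:, 0], idx[:, 1]]
    return (pos > neg).reshape(image.shape[:2])       # True = positively stained pixel
```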
Eichmann, Mischa; Kugel, Harald; Suslow, Thomas
2008-12-01
Difficulties in identifying and differentiating one's emotions are a central characteristic of alexithymia. In the present study, automatic activation of the fusiform gyrus to facial emotion was investigated as a function of alexithymia as assessed by the 20-item Toronto Alexithymia Scale. During 3 Tesla fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 22 healthy adults who also responded to the Toronto Alexithymia Scale. The fusiform gyrus was selected as the region of interest, and voxel values of this region were extracted, summarized as means, and tested among the different conditions (sad, happy, and neutral faces). Masked sad facial emotions were associated with greater bilateral activation of the fusiform gyrus than masked neutral faces. The subscale, Difficulty Identifying Feelings, was negatively correlated with the neural response of the fusiform gyrus to masked sad faces. The correlation results suggest that automatic hyporesponsiveness of the fusiform gyrus to negative emotion stimuli may reflect problems in recognizing one's emotions in everyday life.
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians’ patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results Both precision and recall for identifying comparative studies was 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point of care decision-making. PMID:23920677
Traffic crash statistics report, 2009
DOT National Transportation Integrated Search
2010-06-23
This report is compiled from long form traffic crash reports submitted by state and local law enforcement agencies. The Department summarizes all the submitted information for this report. in general, the 2009 crash statistics show a positive trend i...
1982-06-01
usefulness to the United States Antarctic mission as managed by the National Science Foundation. Various statistical measures were applied to the reported ... statistical procedures that would evolve a general meteorological picture of each of these remote sites. Primary texts used as a basis for ... processed by station for monthly, seasonal and annual statistics, as appropriate. The following outlines the evaluations completed for both
AISLE: an automatic volumetric segmentation method for the study of lung allometry.
Ren, Hongliang; Kazanzides, Peter
2011-01-01
We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.
Use of Management Statistics in ARL Libraries. SPEC Kit #153.
ERIC Educational Resources Information Center
Vasi, John
A Systems and Procedures Exchange Center (SPEC) survey conducted in 1986 investigated the collection and use of management statistics in Association of Research Libraries (ARL) member libraries, and SPEC Kit #134 (May 1987) summarized the kinds of statistics collected and the reasons given by the 91 respondents for collecting them. This more…
Statistics and Style. Mathematical Linguistics and Automatic Language Processing No. 6.
ERIC Educational Resources Information Center
Dolezel, Lubomir, Ed.; Bailey, Richard W., Ed.
This collection of 17 articles concerning the application of mathematical models and techniques to the study of literary style is an attempt to overcome the communication barriers that exist between scholars in the various fields that find their meeting ground in statistical stylistics. The articles selected were chosen to represent the best…
Statistical Issues in Social Allocation Models of Intelligence: A Review and a Response
ERIC Educational Resources Information Center
Light, Richard J.; Smith, Paul V.
1971-01-01
This is a response to Shockley (1971) which summarizes the original Light and Smith work; outlines Shockley's criticisms; responds to the statistical issues; and concludes with the methodological implications of the disagreement. (VW)
Guam's forest resources, 2002.
Joseph A. Donnegan; Sarah L. Butler; Walter Grabowiecki; Bruce A. Hiserote; David. Limtiaco
2004-01-01
The Forest Inventory and Analysis Program collected, analyzed, and summarized field data on 46 forested plots on the island of Guam. Estimates of forest area, tree stem volume and biomass, the numbers of trees, tree damages, and the distribution of tree sizes were summarized for this statistical sample. Detailed tables and graphical highlights provide a summary of Guam...
Palau's forest resources, 2003.
Joseph A. Donnegan; Sarah L. Butler; Olaf Kuegler; Brent J. Stroud; Bruce A. Hiserote; Kashgar. Rengulbai
2007-01-01
The Forest Inventory and Analysis Program collected, analyzed, and summarized field data on 54 forested plots on the islands in the Republic of Palau. Estimates of forest area, tree stem volume and biomass, the numbers of trees, tree damages, and the distribution of tree sizes were summarized for this statistical sample. Detailed tables and graphical highlights provide...
Six-Inch Shock Tube Characterization
2016-12-09
Figure 2 summarizes the peak levels for shots using 92A Mylar® as a membrane, with a linear trend line overlaid on the data, which produced the ... peak levels for shots using 500A Mylar® as a membrane with a 6th-order polynomial trend line overlaid on the data, which produced the highest R2 value
NASA Technical Reports Server (NTRS)
Nikravesh, Parviz E.; Gim, Gwanghum; Arabyan, Ara; Rein, Udo
1989-01-01
The formulation of a method known as the joint coordinate method for automatic generation of the equations of motion for multibody systems is summarized. For systems containing open or closed kinematic loops, the equations of motion can be reduced systematically to a minimum number of second order differential equations. The application of recursive and nonrecursive algorithms to this formulation, computational considerations and the feasibility of implementing this formulation on multiprocessor computers are discussed.
NASA Technical Reports Server (NTRS)
Moncrief, V.; Teitelboim, C.
1972-01-01
It is shown that if the Hamiltonian constraint of general relativity is imposed as a restriction on the Hamilton principal functional in the classical theory, or on the state functional in the quantum theory, then the momentum constraints are automatically satisfied. This result holds both for closed and open spaces and it means that the full content of the theory is summarized by a single functional equation of the Tomonaga-Schwinger type.
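In schematic quantum-theory notation (an illustration of the stated result, not the paper's own equations), the claim is that imposing only the Hamiltonian constraint on the state functional suffices:

```latex
\hat{\mathcal{H}}_{\perp}(x)\,\Psi[g_{ij}] = 0 \ \ \text{for all } x
\quad\Longrightarrow\quad
\hat{\mathcal{H}}_{i}(x)\,\Psi[g_{ij}] = 0 ,
```

so the momentum (diffeomorphism) constraints hold automatically and the theory is carried by the single Tomonaga-Schwinger-type functional equation on $\Psi$.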
Using phrases and document metadata to improve topic modeling of clinical reports.
Speier, William; Ong, Michael K; Arnold, Corey W
2016-06-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and they have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data and discovers multiword concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model, and the quality of the representations is evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.
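The proposed model itself is custom, but a rough approximation of the "phrases plus topics" idea can be sketched with off-the-shelf tools: detect frequent multiword concepts with gensim's Phrases, then fit a standard LDA model. The patient- and document-level metadata weighting of the paper is not reproduced here, and the corpus, counts and topic number are placeholders:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Phrases

def fit_topics(tokenized_reports, num_topics=20):
    """tokenized_reports: list of token lists, one per clinical report."""
    bigrams = Phrases(tokenized_reports, min_count=5, threshold=10.0)
    docs = [bigrams[doc] for doc in tokenized_reports]   # e.g. 'chest_pain' becomes one token
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics,
                   passes=5, random_state=0)
    return lda, dictionary

# lda.show_topics() then lists the highest-probability (possibly multiword) terms per topic.
```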
Niu, Renjie; Fu, Chenyu; Xu, Zhiyong; Huang, Jianyuan
2016-04-29
Doctors who practice Traditional Chinese Medicine (TCM) diagnose using four methods - inspection, auscultation and olfaction, interrogation, and pulse feeling/palpation. The shape and shape changes of the moon marks on the nails are an important indication when judging the patient's health. There are a series of classical and experimental theories about moon marks in TCM which do not have support from statistical data. The objective of this work is to verify some experiential theories on moon marks in TCM using automatic data-processing equipment. This paper proposes equipment that utilizes image processing technology to collect moon mark data from different target groups conveniently and quickly, building a database that combines this information with that gathered from the health and mental status questionnaire in each test. This equipment has a simple design, a low cost, and an optimized algorithm. In practice, it has been proven to quickly complete automatic acquisition and preservation of key data about moon marks. In the future, some conclusions will likely be obtained from these data; some changes of moon marks related to specific pathological changes will be established with statistical methods.
Statistical Evaluation of Biometric Evidence in Forensic Automatic Speaker Recognition
NASA Astrophysics Data System (ADS)
Drygajlo, Andrzej
Forensic speaker recognition is the process of determining if a specific individual (suspected speaker) is the source of a questioned voice recording (trace). This paper aims at presenting forensic automatic speaker recognition (FASR) methods that provide a coherent way of quantifying and presenting recorded voice as biometric evidence. In such methods, the biometric evidence consists of the quantified degree of similarity between speaker-dependent features extracted from the trace and speaker-dependent features extracted from recorded speech of a suspect. The interpretation of recorded voice as evidence in the forensic context presents particular challenges, including within-speaker (within-source) variability and between-speakers (between-sources) variability. Consequently, FASR methods must provide a statistical evaluation which gives the court an indication of the strength of the evidence given the estimated within-source and between-sources variabilities. This paper reports on the first ENFSI evaluation campaign through a fake case, organized by the Netherlands Forensic Institute (NFI), as an example, where an automatic method using the Gaussian mixture models (GMMs) and the Bayesian interpretation (BI) framework were implemented for the forensic speaker recognition task.
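A minimal sketch of the GMM likelihood-ratio idea underlying such methods: score the trace features under a suspect model and under a background (between-sources) model and report the ratio. Feature extraction (e.g., MFCCs), score calibration and the Bayesian interpretation framework are assumed to happen elsewhere, so this is a simplification of an FASR system:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=16):
    """features: (n_frames, n_dims) array of speaker features, e.g. MFCCs."""
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=0).fit(features)

def log_likelihood_ratio(trace_features, suspect_gmm, background_gmm):
    """Average per-frame log likelihood ratio of the trace: > 0 favours the suspect model."""
    return (suspect_gmm.score_samples(trace_features).mean()
            - background_gmm.score_samples(trace_features).mean())
```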
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours would be needed. This paper describes such a framework as applied to a single object. Constrained by the additional information enabled by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.
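Schematically (hedged notation, since the abstract does not spell out the functional), the edited segmentation is obtained as the shape-change field $T$ minimizing

```latex
E(T) \;=\; E_{\mathrm{match}}(T;\ \text{user-drawn contours}) \;+\; \lambda\, E_{\mathrm{reg}}(T;\ \text{shape-change statistics}),
```

where the first term penalizes disagreement with the sparse manual contours, the second penalizes shape changes that are unlikely under the learned statistics, and $\lambda$ balances the two.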
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO has also been serving as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images for the NSPO Image Processing System generally consists of two major steps. Firstly, an unsupervised K-means method is used for automatically estimating the cloud statistic of a Formosat-2 image. Secondly, the cloud coverage of the Formosat-2 image is estimated by manual examination. Apparently, a more accurate Automatic Cloud Coverage Assessment (ACCA) method would increase the efficiency of step 2 by providing a good prediction of the cloud statistic. In this paper, based mainly on the research results from Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method which includes pre-processing and post-processing analysis. For the pre-processing analysis, the cloud statistic is determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and to increase the efficiency of manual examination.
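A toy sketch of the first two pre-processing ingredients named above (unsupervised K-means clustering of pixel intensities and Otsu thresholding) combined into a crude cloud mask; it omits the Sobel, cross-band filter, pixel re-examination and box-counting steps of the actual system, and the single-band input and cluster count are placeholder assumptions:

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

def cloud_fraction(band, n_clusters=3):
    """band: 2-D array of top-of-atmosphere reflectance for one spectral band."""
    x = band.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(x)
    bright = np.argmax([x[labels == k].mean() for k in range(n_clusters)])
    kmeans_mask = (labels == bright).reshape(band.shape)   # brightest cluster ~ cloud
    otsu_mask = band > threshold_otsu(band)                # global bright/dark split
    cloud_mask = kmeans_mask & otsu_mask                   # both methods agree on cloud
    return cloud_mask.mean(), cloud_mask
```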
ERIC Educational Resources Information Center
Bailey, Thomas; Jenkins, Davis; Leinbach, Timothy
2005-01-01
This report summarizes the latest available national statistics on access and attainment by low income and minority community college students. The data come from the National Center for Education Statistics' (NCES) Integrated Postsecondary Education Data System (IPEDS) annual surveys of all postsecondary educational institutions and the NCES…
Timber resource statistics for timberland outside National Forests in eastern Oregon.
Neil McKay; Gary J. Lettman; Mary A. Mei
1994-01-01
This report summarizes a 1992 timber resource inventory of timberland outside National Forests in eastern Oregon. The report presents statistical tables of timberland area, timber volume, growth, mortality, and harvest. It also displays tables of revised 1986-87 timber resource statistics for timberland outside National Forests; the 1992 and 1986-87 tables may be...
School Fire Protection: Contents Count
ERIC Educational Resources Information Center
American School and University, 1976
1976-01-01
The heart of a fire protection system is the sprinkler system. National Fire Protection Association (NFPA) statistics show that automatic sprinklers dramatically reduce fire damage and loss of life. (Author)
Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation
2013-09-01
are automatically searched and used to suggest possible translations; (2) spell-checkers; (3) glossaries; (4) dictionaries; (5) alignment and...matching against TMs to propose translations; spell-checking, glossary, and dictionary look-up; support for multiple file formats; regular expressions...on Telecommunications. Tehran, 2012, 822–826. Bertoldi, N.; Federico, M. Domain Adaptation for Statistical Machine Translation with Monolingual
AutoBayes Program Synthesis System Users Manual
NASA Technical Reports Server (NTRS)
Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd
2008-01-01
Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
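For a sense of what "solving subproblems symbolically" buys, the following hand-written Python analogue (an illustration only; AutoBayes itself emits documented C/C++) shows the closed-form maximum-likelihood estimates for a simple Gaussian model, the kind of solution a synthesis system can derive instead of invoking a numeric optimizer.

```python
# Hand-written Python analogue (not AutoBayes output): for a Gaussian model
# x_i ~ N(mu, sigma^2), the maximum-likelihood estimates have a closed form that
# a synthesis system can derive symbolically rather than approximate numerically.
import numpy as np

def gaussian_mle(x):
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()                        # argmax over mu of the log-likelihood
    sigma2_hat = ((x - mu_hat) ** 2).mean()  # ML variance (biased, divides by n)
    return mu_hat, sigma2_hat

data = np.random.default_rng(2).normal(loc=3.0, scale=2.0, size=1000)
print(gaussian_mle(data))
```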
Opinion Summarization of Customer Comments
NASA Astrophysics Data System (ADS)
Fan, Miao; Wu, Guoshi
Web 2.0 technologies have enabled more and more customers to freely comment on different kinds of entities, such as sellers, products and services. The large scale of information poses the need and challenge of automatic summarization. In many cases, each of the user-generated short comments implies the opinions which rate the target entity. In this paper, we aim to mine and to summarize all the customer comments of a product. The algorithm proposed in this research is more reliable for opinion identification because it is unsupervised, and the accuracy of the result improves as the number of comments increases. Our research is performed in four steps: (1) mining the frequent aspects of a product that have been commented on by customers; (2) mining the infrequent aspects of a product that have been commented on by customers; (3) identifying opinion words in each comment and deciding whether each opinion word is positive, negative or neutral; (4) summarizing the comments. This paper proposes several novel techniques to perform these tasks. Our experimental results using comments of a number of products sold online demonstrate the effectiveness of the techniques.
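Step (1), frequent-aspect mining, can be sketched as counting candidate aspect terms across comments and keeping those above a support threshold; the toy tokenizer, comment strings, and threshold below are illustrative assumptions rather than the paper's technique.

```python
# Toy sketch of frequent-aspect mining: count candidate aspect terms (here, simple
# content tokens standing in for noun phrases) and keep those above a support threshold.
from collections import Counter
import re

comments = [
    "The battery life is great but the screen is dim",
    "Battery drains fast, screen quality is poor",
    "Love the camera, battery could be better",
]

def candidate_terms(text):
    # crude stand-in for noun-phrase extraction
    stop = {"the", "is", "but", "and", "a", "could", "be", "fast"}
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop]

support = Counter(t for c in comments for t in set(candidate_terms(c)))
min_support = 2
frequent_aspects = [term for term, n in support.items() if n >= min_support]
print(frequent_aspects)   # e.g. ['battery', 'screen']
```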
Using clustering and a modified classification algorithm for automatic text summarization
NASA Astrophysics Data System (ADS)
Aries, Abdelkrime; Oufaida, Houda; Nouali, Omar
2013-01-01
In this paper we describe a modified classification method intended for extractive summarization. The classification in this method does not need a learning corpus; it uses the input text itself. First, we cluster the document sentences to exploit the diversity of topics; then we apply a learning algorithm (here, Naive Bayes) to each cluster, treating it as a class. After obtaining the classification model, we calculate the score of each sentence in each class using a scoring model derived from the classification algorithm. These scores are then used to reorder the sentences and extract the top-ranked ones as the output summary. We conducted experiments using a corpus of scientific papers and compared our results to another summarization system called UNIS. We also examined the impact of tuning the clustering threshold on the resulting summary, as well as the impact of adding more features to the classifier. We found that this method is promising and gives good performance, and that the addition of new features (which is simple using this method) can improve the summary's accuracy.
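A rough sketch of this pipeline, under simplifying assumptions (TF-IDF features, K-means clusters treated as classes, and a plain multinomial Naive Bayes score), is given below; it is not the authors' implementation.

```python
# Rough sketch: cluster sentences by topic, treat each cluster as a class, fit
# Naive Bayes on the clusters, then rank sentences by their class score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

def extractive_summary(sentences, n_clusters=2, n_keep=2):
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(sentences)
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    nb = MultinomialNB().fit(X, clusters)                # each cluster acts as a class
    # score of a sentence = log-probability of its own cluster under the model
    scores = nb.predict_log_proba(X)[np.arange(len(sentences)), clusters]
    ranked = np.argsort(scores)[::-1][:n_keep]
    return [sentences[i] for i in sorted(ranked)]        # keep original order

doc = [
    "Statistical learning extracts regularities from input.",
    "Naive Bayes assigns class probabilities to documents.",
    "Clustering groups sentences that share a topic.",
    "Extractive summarization selects the highest scoring sentences.",
]
print(extractive_summary(doc))
```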
Automatic centring and bonding of lenses
NASA Astrophysics Data System (ADS)
Krey, Stefan; Heinisch, J.; Dumitrescu, E.
2007-05-01
We present an automatic bonding station which is able to center and bond individual lenses or doublets to a barrel with sub-micron centring accuracy. The complete manufacturing cycle includes glue dispensing and UV curing. During the process, the state of centring is continuously controlled by the vision software, and the final result is recorded to a file for process statistics. Simple pass or fail results are displayed to the operator at the end of the process.
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid
2015-01-01
Hippocampus segmentation is a key step in the evaluation of mesial temporal lobe epilepsy (mTLE) from MR images. Several automated methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method that best matches the manual benchmark for the hippocampus. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of automatic segmentation, the Dice coefficient, Hausdorff distance, precision, and root mean square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is closest to expert manual outlining according to statistical validation. At p<0.05, the performance measurements for ABSS show that Dice is 4%, 13%, and 17% higher, Hausdorff distance is 23%, 87%, and 70% lower, precision is 5%, -5%, and 12% higher, and RMS distance is 19%, 62%, and 65% lower compared to LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043
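Two of the reported similarity metrics are straightforward to compute; the generic sketch below (toy 2-D masks, not the study's data or code) shows the Dice coefficient on binary masks and the symmetric Hausdorff distance on surface points.

```python
# Generic sketch of two evaluation metrics: Dice coefficient on binary masks and
# Hausdorff distance between the point sets of two segmentations.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# toy 2-D masks standing in for automatic vs. manual segmentations
auto = np.zeros((64, 64), bool); auto[20:40, 20:42] = True
manual = np.zeros((64, 64), bool); manual[22:41, 21:40] = True
print(f"Dice = {dice(auto, manual):.3f}")

pts_auto = np.argwhere(auto)
pts_manual = np.argwhere(manual)
hd = max(directed_hausdorff(pts_auto, pts_manual)[0],
         directed_hausdorff(pts_manual, pts_auto)[0])
print(f"Hausdorff distance = {hd:.2f} voxels")
```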
ERIC Educational Resources Information Center
Smarte, Lynn
This 1999 annual report, summarizing the accomplishments of the Educational Resources Information Center (ERIC) system in 1998, begins with a section that highlights progress towards meeting goals, as well as selected statistics. The second section, comprising the bulk of the report, provides an overview of ERIC, including the ERIC database, user…
ERIC Educational Resources Information Center
Smarte, Lynn
This 2000 annual report, summarizing the accomplishments of the Educational Resources Information Center (ERIC) system in 1999, begins with a section that highlights progress towards meeting goals, as well as selected statistics. The second section, comprising the bulk of the report, provides an overview of ERIC, including the ERIC database, user…
Kids Count Data Book on Louisiana's Children, 1999.
ERIC Educational Resources Information Center
Agenda for Children, New Orleans, LA.
This Kids Count data book provides statistics on the well-being of children in Louisiana. Data are provided for the state as a whole and for each of Louisiana's 64 parishes. An introduction summarizes findings for 1999, and a final section summarizes trends in the data over the past several years. Indicators used in the report to measure child…
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual review by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control (SPC) methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques, tabular CUSUM, standardized CUSUM, and EWMA, known for their ability to detect small shifts in the data, are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
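A minimal EWMA control chart of the kind evaluated in the paper can be sketched as follows; the smoothing constant, control-limit width, baseline window, and simulated transfer times are illustrative assumptions, not the paper's parameter choices.

```python
# Minimal EWMA control-chart sketch: flag a shift in daily transfer times when the
# EWMA statistic leaves its time-varying control limits.
import numpy as np

def ewma_alarms(x, lam=0.2, L=3.0):
    x = np.asarray(x, float)
    mu0, sigma = x[:20].mean(), x[:20].std(ddof=1)   # baseline from an in-control period
    z, alarms = mu0, []
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z - mu0) > half_width:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(3)
transfer_times = np.concatenate([rng.normal(10, 1, 40), rng.normal(12, 1, 20)])  # shift at day 40
print(ewma_alarms(transfer_times))   # days on which a change is detected
```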
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images, which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on the statistic(s) for each data type.
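A hedged sketch of the event-detection idea is shown below. It assumes the per-frame statistics have already been parsed from the encoder log into an array (the log format itself is not reproduced here) and simply flags frames that deviate strongly from a rolling baseline.

```python
# Sketch: flag event frames whose per-frame compression statistic deviates strongly
# from a rolling baseline. The statistic array stands in for values parsed from the log.
import numpy as np

def flag_events(frame_stat, window=25, n_sigmas=4.0):
    frame_stat = np.asarray(frame_stat, float)
    events = []
    for t in range(window, len(frame_stat)):
        baseline = frame_stat[t - window:t]
        mu, sd = baseline.mean(), baseline.std() + 1e-9
        if abs(frame_stat[t] - mu) > n_sigmas * sd:
            events.append(t)
    return events

# synthetic "intra-texture bits per frame" with a burst around frame 300
rng = np.random.default_rng(4)
stat = rng.normal(1e5, 5e3, 600)
stat[300:310] += 8e4
print(flag_events(stat)[:5])
```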
The Content of Statistical Requirements for Authors in Biomedical Research Journals
Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang
2016-01-01
Background: Robust statistical design, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above so that researchers are able to give statistical issues serious consideration not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines as well as particular requirements were summarized by frequency and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including “address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation,” and “statistical methods and the reasons.” Conclusions: Statistical requirements for authors are becoming increasingly comprehensive. Statistical requirements for authors remind researchers that they should give sufficient consideration not only to statistical methods during the research design, but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making a greater critical appraisal of evidence more accessible. PMID:27748343
J-Adaptive estimation with estimated noise statistics. [for orbit determination
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1975-01-01
The J-Adaptive estimator described by Jazwinski and Hipkins (1972) is extended to include the simultaneous estimation of the statistics of the unmodeled system accelerations. With the aid of simulations it is demonstrated that the J-Adaptive estimator with estimated noise statistics can automatically estimate satellite orbits to an accuracy comparable with the data noise levels, when excellent, continuous tracking coverage is available. Such tracking coverage will be available from satellite-to-satellite tracking.
Spatial Light Rebroadcaster Architecture Study
1992-12-01
specifications on the lenslet arrays described in [Borelli] and summarized here: Lenslet diameter: 70 μm < D < 1000 μm; Lenslet spacing: 15 μm < Δ; Focal...which leads to k2 < 1/3. We will use as our baseline a lenslet array with D = 300 μm and Δ = 45 μm, which is within the specifications of [Borelli...Target Recognizer Working Group), "Automatic Target Recognizer Component Definitions," ATRWG Report No. 87-002, April 1987. Borelli, N., et al
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y; Liao, Z; Jiang, W
Purpose: To evaluate the feasibility of using an automatic segmentation tool to delineate cardiac substructures from computed tomography (CT) images for cardiac toxicity analysis for non-small cell lung cancer (NSCLC) patients after radiotherapy. Methods: A multi-atlas segmentation tool developed in-house was used to automatically delineate eleven cardiac substructures, including the whole heart, four heart chambers, and six greater vessels, from the averaged 4DCT planning images for 49 NSCLC patients. The automatically segmented contours were edited appropriately by two experienced radiation oncologists. The modified contours were compared with the auto-segmented contours using the Dice similarity coefficient (DSC) and mean surface distance (MSD) to evaluate how much modification was needed. In addition, the dose volume histograms (DVHs) of the modified contours were compared with those of the auto-segmented contours to evaluate the dosimetric difference between modified and auto-segmented contours. Results: Of the eleven structures, the averaged DSC values ranged from 0.73 ± 0.08 to 0.95 ± 0.04 and the averaged MSD values ranged from 1.3 ± 0.6 mm to 2.9 ± 5.1 mm for the 49 patients. Overall, the required modification was small. The pulmonary vein (PV) and the inferior vena cava required the most modifications. The V30 (volume receiving 30 Gy or above) for the whole heart and the mean dose to the whole heart and four heart chambers did not show statistically significant differences between modified and auto-segmented contours. The maximum dose to the greater vessels did not show statistically significant differences except for the PV. Conclusion: The automatic segmentation of the cardiac substructures did not require substantial modification. The dosimetric evaluation showed no statistically significant difference between auto-segmented and modified contours except for the PV, which suggests that auto-segmented contours for the cardiac dose response study are feasible in clinical practice with a minor modification to the PV vessel.
Timber resource statistics for the Tanana inventory unit, Alaska, 1971-75.
Willem W.S. Van Hees
1984-01-01
Statistics on forest area, total gross and net timber volumes, and annual net growth and mortality are presented for the 1971-75 timber inventory of the Tanana unit, Alaska. This report summarizes statistics previously published for the four inventory blocks of the unit: Fairbanks, Kantishna, Upper Tanana, and Wood-Salcha. Timberland area is estimated at 2.19 million...
Simple Statistics: - Summarized!
ERIC Educational Resources Information Center
Blai, Boris, Jr.
Statistics is an essential tool for making sound judgements and decisions. It is concerned with probability distribution models, testing of hypotheses, significance tests, and other means of determining the correctness of deductions and the most likely outcome of decisions. Measures of central tendency include the mean, median, and mode. A second…
USDA-ARS?s Scientific Manuscript database
Characterizing population genetic structure across geographic space is a fundamental challenge in population genetics. Multivariate statistical analyses are powerful tools for summarizing genetic variability, but geographic information and accompanying metadata is not always easily integrated into t...
NASA Astrophysics Data System (ADS)
Sun, Feng-Rong; Wang, Xiao-Jing; Wu, Qiang; Yao, Gui-Hua; Zhang, Yun
2013-01-01
Left ventricular (LV) torsion is a sensitive and global index of LV systolic and diastolic function, but measuring it noninvasively is challenging. Two-dimensional echocardiography and a block-matching based speckle tracking method were used to measure LV torsion. The main advantages of the proposed method over previous ones are summarized as follows: (1) The method is automatic, except for manually selecting some endocardium points on the end-diastolic frame in the initialization step. (2) The diamond search strategy is applied, with a spatial smoothness constraint introduced into the sum of absolute differences matching criterion, and the reference frame during the search is determined adaptively. (3) The method is capable of removing abnormal measurement data automatically. The proposed method was validated against measurements obtained using Doppler tissue imaging, and some preliminary clinical experimental studies are presented to illustrate the clinical value of the proposed method.
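The block-matching core can be sketched with a sum-of-absolute-differences (SAD) criterion; for brevity the example below uses an exhaustive search in a small window rather than the paper's diamond search, and it omits the smoothness constraint and adaptive reference frame.

```python
# Simplified SAD block matching: find the displacement of a block between two frames.
import numpy as np

def track_point(prev, curr, pt, block=7, search=5):
    """Return the displacement of the block centred at pt (row, col) from prev to curr."""
    h = block // 2
    r, c = pt
    ref = prev[r - h:r + h + 1, c - h:c + h + 1]
    best, best_dr, best_dc = np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = curr[r + dr - h:r + dr + h + 1, c + dc - h:c + dc + h + 1]
            sad = np.abs(ref.astype(float) - cand.astype(float)).sum()
            if sad < best:
                best, best_dr, best_dc = sad, dr, dc
    return best_dr, best_dc

rng = np.random.default_rng(5)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))   # known motion of (+2, -1)
print(track_point(frame0, frame1, (32, 32)))           # expected: (2, -1)
```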
NASA Technical Reports Server (NTRS)
Mysoor, Narayan R.; Perret, Jonathan D.; Kermode, Arthur W.
1992-01-01
The design concepts and measured performance characteristics of an X-band (7162 MHz/8415 MHz) breadboard deep space transponder (DST) for future spacecraft applications are summarized, with the first use scheduled for the Comet Rendezvous Asteroid Flyby (CRAF) and Cassini missions in 1995 and 1996, respectively. The DST consists of a double-conversion, superheterodyne, automatic phase tracking receiver and an X-band (8415 MHz) exciter to drive redundant downlink power amplifiers. The receiver acquires and coherently phase tracks the modulated or unmodulated X-band (7162 MHz) uplink carrier signal. The exciter phase modulates the X-band (8415 MHz) downlink signal with composite telemetry and ranging signals. The measured tracking threshold, automatic gain control, static phase error, and phase jitter characteristics of the breadboard DST receiver are in good agreement with the expected performance. The measured results show a receiver tracking threshold of -158 dBm and a dynamic signal range of 88 dB.
NASA Astrophysics Data System (ADS)
Lasaponara, Rosa; Masini, Nicola
2018-06-01
The identification and quantification of disturbance of archaeological sites has generally been approached by visual inspection of optical aerial or satellite pictures. In this paper, we briefly summarize the state of the art of traditional satellite-based approaches for looting identification and propose a new automatic archaeological looting feature extraction approach (ALFEA). It is based on three steps: enhancement using spatial autocorrelation, unsupervised classification, and segmentation. ALFEA has been applied to Google Earth images of two test areas selected in desert environs in Syria (Dura Europos) and in Peru (Cahuachi-Nasca). The reliability of ALFEA was assessed through field surveys in Peru and visual inspection for the Syrian case study. Results from the evaluation procedure showed satisfactory performance for both analysed test cases, with a success rate higher than 90%.
Functional differences between statistical learning with and without explicit training
Reber, Paul J.; Paller, Ken A.
2015-01-01
Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and prepare for incoming input. In this study, we ask whether the function of statistical learning may be enhanced through supplementary explicit training, in which underlying regularities are explicitly taught rather than simply abstracted through exposure. Learners were randomly assigned either to an explicit group or an implicit group. All learners were exposed to a continuous stream of repeating nonsense words. Prior to this implicit training, learners in the explicit group received supplementary explicit training on the nonsense words. Statistical learning was assessed through a speeded reaction-time (RT) task, which measured the extent to which learners used acquired statistical knowledge to optimize online processing. Both RTs and brain potentials revealed significant differences in online processing as a function of training condition. RTs showed a crossover interaction; responses in the explicit group were faster to predictable targets and marginally slower to less predictable targets relative to responses in the implicit group. P300 potentials to predictable targets were larger in the explicit group than in the implicit group, suggesting greater recruitment of controlled, effortful processes. Taken together, these results suggest that information abstracted through passive exposure during statistical learning may be processed more automatically and with less effort than information that is acquired explicitly. PMID:26472644
Data summarization method for chronic disease tracking.
Aleksić, Dejan; Rajković, Petar; Vučković, Dušan; Janković, Dragan; Milenković, Aleksandar
2017-05-01
Bearing in mind the rising prevalence of chronic medical conditions, chronic disease management is one of the key features required by medical information systems used in primary healthcare. Our research group paid particular attention to this specific area by offering a set of custom data collection forms and reports in order to improve medical professionals' daily routine. The main idea was to provide an overview of the history for chronic diseases, which apparently had not been properly supported in previous administrative workflows. After five years of active use of medical information systems in more than 25 primary healthcare institutions, we were able to identify several scenarios that were often end-user-action dependent and could result in the data related to chronic diagnoses being loosely connected. An additional benefit would be a more effective identification of potentially new patients suffering from chronic diseases. For this particular reason, we introduced an extension of the existing data structures and a summarizing method, along with a specific tool that should help in connecting all the data related to a patient and a diagnosis. The summarization method was based on the principle of connecting all of the records pertaining to a specific diagnosis for the selected patient, and it was envisaged to work in both automatic and on-demand modes. The expected results were a more effective identification of new potential patients and a completion of the existing histories of diseases associated with chronic diagnoses. The current system usage analysis shows that a small number of doctors used functionalities specially designed for chronic diseases, affecting less than 6% of the total population (around 11,500 out of more than 200,000 patients). In initial tests, the on-demand data summarization mode was applied in general practice, and 89 out of 155 users identified more than 3000 new patients with a chronic disease over a three-month test period. During the tests, more than 100,000 medical documents were paired up with the existing histories of diseases. Furthermore, a significant number of physicians that accepted the standard history of disease helped with the identification of an additional 22% of the population. Applying automatic summarization would help identify all patients with at least one record related to a diagnosis usually marked as chronic, but ultimately this data has to be filtered and medical professionals should have the final say. Depending on the data filter definition, the total percentage of newly discovered patients with a chronic disease is between 35% and 53%, as expected. Although the medical practitioner should have the final say about any medical record changes, new, innovative methods which can help in the data summarization are welcome. In addition to being focused on summarization in relation to the patient or to the diagnosis, the proposed method and tool can be effectively used when the patient-diagnosis relation is not one-to-one but many-to-many. The proposed summarization principles were tested on a single type of medical information system, but can easily be applied to other medical software packages, too. Depending on the existing data structure of the target system, as well as identified use cases, it is possible to extend the data and customize the proposed summarization method. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?
Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel
2018-01-01
According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.
Loktionov, A S; Prianishnikov, V A
1981-05-01
A system is proposed for the automatic analysis of data from: a) point cytophotometry, b) two-wave cytophotometry, and c) cytofluorimetry. The system provides input of the data from a photomultiplier to a specialized computer ("Electronica-T3-16M") together with simultaneous statistical analysis of these data. Information on the programs used is presented. The advantages of the system, compared with some commercially available cytophotometers, are indicated.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.
Automated detection of diabetic retinopathy in retinal images.
Valverde, Carmen; Garcia, Maria; Hornero, Roberto; Lopez-Galvez, Maria I
2016-01-01
Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Selected relevant studies in the last 10 years were scrutinized and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.
Effects of Urbanization on Stream Water Quality in the City of Atlanta, Georgia, USA
NASA Astrophysics Data System (ADS)
Peters, N. E.
2009-05-01
A long-term stream water-quality monitoring network was established in the City of Atlanta (COA) during 2003 to assess baseline water-quality conditions and the effects of urbanization on stream water quality. Routine hydrologically-based manual stream sampling, including several concurrent manual point and equal width increment sampling, was conducted approximately 12 times per year at 21 stations, with drainage areas ranging from 3.7 to 232 km2. Eleven of the stations are real-time (RT) water-quality stations having continuous measures of stream stage/discharge, pH, dissolved oxygen, specific conductance, water temperature, and turbidity, and automatic samplers for stormwater collection. Samples were analyzed for field parameters, and a broad suite of water-quality and sediment-related constituents. This paper summarizes an evaluation of field parameters and concentrations of major ions, minor and trace metals, nutrient species (nitrogen and phosphorus), and coliform bacteria among stations and with respect to watershed characteristics and plausible sources from 2003 through September 2007. The concentrations of most constituents in the COA streams are statistically higher than those of two nearby reference streams. Concentrations are statistically different among stations for several constituents, despite high variability both within and among stations. The combination of routine manual sampling, automatic sampling during stormflows, and real-time water-quality monitoring provided sufficient information about the variability of urban stream water quality to develop hypotheses for causes of water-quality differences among COA streams. Fecal coliform bacteria concentrations of most individual samples at each station exceeded Georgia's water-quality standard for any water-usage class. High chloride concentrations occur at three stations and are hypothesized to be associated with discharges of chlorinated combined sewer overflows, drainage of swimming pool(s), and dissolution and transport during rainstorms of CaCl2, a deicing salt applied to roads during winter storms. Water quality of one stream was highly affected by the dissolution and transport of ammonium alum [NH4Al(SO4)2] from an alum manufacturing plant in the watershed; streamwater has low pH (<5), low alkalinity and high concentrations of minor and trace metals. Several trace metals (Cu, Pb and Zn) exceed acute and chronic water-quality standards and the high concentrations are attributed to washoff from impervious surfaces.
Knowledge representation and management: transforming textual information into useful knowledge.
Rassinoux, A-M
2010-01-01
To summarize current outstanding research in the field of knowledge representation and management. Synopsis of the articles selected for the IMIA Yearbook 2010. Four interesting papers, dealing with structured knowledge, have been selected for the section knowledge representation and management. Combining the newest techniques in computational linguistics and natural language processing with the latest methods in statistical data analysis, machine learning and text mining has proved to be efficient for turning unstructured textual information into meaningful knowledge. Three of the four selected papers for the section knowledge representation and management corroborate this approach and depict various experiments conducted to extract meaningful knowledge from unstructured free texts, such as extracting cancer disease characteristics from pathology reports, or extracting protein-protein interactions from biomedical papers, as well as extracting knowledge for the support of hypothesis generation in molecular biology from the Medline literature. Finally, the last paper addresses the level of formally representing and structuring information within clinical terminologies in order to render such information easily available and shareable among the health informatics community. Delivering common powerful tools able to automatically extract meaningful information from the huge amount of electronically unstructured free texts is an essential step towards promoting sharing and reusability across applications, domains, and institutions, thus contributing to building capacities worldwide.
Multiscale analysis of river networks using the R package linbin
Welty, Ethan Z.; Torgersen, Christian E.; Brenkman, Samuel J.; Duda, Jeffrey J.; Armstrong, Jonathan B.
2015-01-01
Analytical tools are needed in riverine science and management to bridge the gap between GIS and statistical packages that were not designed for the directional and dendritic structure of streams. We introduce linbin, an R package developed for the analysis of riverscapes at multiple scales. With this software, riverine data on aquatic habitat and species distribution can be scaled and plotted automatically with respect to their position in the stream network or—in the case of temporal data—their position in time. The linbin package aggregates data into bins of different sizes as specified by the user. We provide case studies illustrating the use of the software for (1) exploring patterns at different scales by aggregating variables at a range of bin sizes, (2) comparing repeat observations by aggregating surveys into bins of common coverage, and (3) tailoring analysis to data with custom bin designs. Furthermore, we demonstrate the utility of linbin for summarizing patterns throughout an entire stream network, and we analyze the diel and seasonal movements of tagged fish past a stationary receiver to illustrate how linbin can be used with temporal data. In short, linbin enables more rapid analysis of complex data sets by fisheries managers and stream ecologists and can reveal underlying spatial and temporal patterns of fish distribution and habitat throughout a riverscape.
JRC Copernicus Climate Change Service (C3S) F4P platform.
NASA Astrophysics Data System (ADS)
Mota, Bernardo; Cappucci, Fabrizio; Gobron, Nadine
2016-04-01
With the increasing number of Earth observation satellites and derived land surface products, quality assurance concerns led the Global Climate Observing System (GCOS) to establish accuracy criteria and standards. In this context, the Copernicus Climate Change Service (C3S) fitness-for-purpose (F4P) platform, developed at the Joint Research Centre, aims at assessing the quality of land Essential Climate Variables (ECVs) in compliance with GCOS criteria. In this paper, we first summarize the JRC C3S F4P goals and then present the automatic review platform used to assess the multi-mission physical consistency of, and coherence between, various land products at global and regional scales. We propose new metrics, such as the Gamma Index and the Triple Collocation Error Model, for multi-mission product inter-comparison and stability assessment, and resource selection statistical methods to assess physical coherence with other related ECV products. Examples concern the consistency of five global albedo products (GlobAlbedo, GLASS, MCD43C3, GIO and MISR) between 2000 and 2011, and their coherence with four burnt area products (MCD45A1, MCD64A1, Fire_CCI and GIO) for the overlapping period (2006 to 2008). Preliminary results show reasonable agreement, given the inherent limitations of each product algorithm and sensor resolution.
A Vehicle for Bivariate Data Analysis
ERIC Educational Resources Information Center
Roscoe, Matt B.
2016-01-01
Instead of reserving the study of probability and statistics for special fourth-year high school courses, the Common Core State Standards for Mathematics (CCSSM) takes a "statistics for all" approach. The standards recommend that students in grades 6-8 learn to summarize and describe data distributions, understand probability, draw…
Characterizing chaotic melodies in automatic music composition
NASA Astrophysics Data System (ADS)
Coca, Andrés E.; Tost, Gerard O.; Zhao, Liang
2010-09-01
In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests.
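As a toy illustration of melody generation from a chaotic dynamical system (not the authors' algorithm), the sketch below maps a logistic-map trajectory onto a pentatonic pitch set; the map parameter, scale, and note count are arbitrary assumptions.

```python
# Toy sketch: a melody generated by mapping a chaotic logistic-map trajectory onto pitches.
import numpy as np

def chaotic_melody(n_notes=16, r=3.9, x0=0.31415):
    scale = [60, 62, 65, 67, 69, 72]        # MIDI pitches of a C pentatonic scale
    x, pitches = x0, []
    for _ in range(n_notes):
        x = r * x * (1.0 - x)               # logistic map, chaotic for r close to 4
        pitches.append(scale[int(x * len(scale))])
    return pitches

print(chaotic_melody())
```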
Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit
Bharioke, Arjun; Chklovskii, Dmitri B.
2015-01-01
Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, that relies on learning the input statistics. However, the statistics of input natural signals can also vary over very short time scales e.g., following saccades across a visual scene. To maintain a reduced transmission cost to signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
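A toy sketch of the general idea follows: a unit transmits a rectified prediction error, where the prediction is a leaky integral of the recent input fed back through inhibition. The constants and signal are arbitrary assumptions and the model is far simpler than the circuit analyzed in the paper; it only illustrates how the transmitted signal transiently grows and then re-adapts after the input statistics change.

```python
# Toy rectified feedback-inhibition unit: transmit max(input - prediction, 0),
# where the prediction is a leaky integral of the recent input.
import numpy as np

def feedback_inhibition(x, tau=0.9, gain=1.0):
    pred, out = 0.0, []
    for xt in x:
        err = xt - gain * pred              # subtract the feedback prediction
        out.append(max(err, 0.0))           # rectification nonlinearity
        pred = tau * pred + (1 - tau) * xt  # inhibitory unit tracks the recent input level
    return np.array(out)

rng = np.random.default_rng(6)
signal = np.concatenate([rng.normal(1, 0.1, 100), rng.normal(5, 0.5, 100)])  # abrupt change in statistics
response = feedback_inhibition(signal)
print(response[95:105].round(2))   # transient burst at the change, then re-adaptation
```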
ERIC Educational Resources Information Center
Zieffler, Andrew; Garfield, Joan; Alt, Shirley; Dupuis, Danielle; Holleque, Kristine; Chang, Beng
2008-01-01
Since the first studies on the teaching and learning of statistics appeared in the research literature, the scholarship in this area has grown dramatically. Given the diversity of disciplines, methodology, and orientation of the studies that may be classified as "statistics education research," summarizing and critiquing this body of work for…
Quasi-Monochromatic Visual Environments and the Resting Point of Accommodation
1988-01-01
accommodation. No statistically significant differences were revealed to support the possibility of color mediated differential regression to resting...discussed with respect to the general findings of the total sample as well as the specific behavior of individual participants. The summarized statistics ...remaining ten varied considerably with respect to the averaged trends reported in the above descriptive statistics as well as with respect to precision
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
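The junction-enhancement step can be loosely illustrated (this is not the authors' pipeline) by combining an order-statistic (median) filter with a principal-curvature measure derived from the image Hessian, which responds strongly on bright ridge-like junctions; the filter sizes and synthetic image are assumptions.

```python
# Sketch: median (order-statistic) filtering followed by a ridge measure based on the
# eigenvalues of the image Hessian, large where the image has bright ridge-like junctions.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def ridge_measure(img, sigma=2.0):
    smoothed = gaussian_filter(median_filter(img, size=3), sigma)
    gy, gx = np.gradient(smoothed)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # eigenvalues of the 2x2 Hessian [[hxx, hxy], [hxy, hyy]] at every pixel
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    return -(tr / 2.0 - disc)   # minus the smaller eigenvalue: large on bright ridges

demo = np.zeros((64, 64))
demo[:, 32] = 1.0                               # a bright line standing in for a junction
demo += np.random.default_rng(7).normal(0, 0.05, demo.shape)
print(ridge_measure(demo).max())
```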
Empirical study of alginate impression materials by customized proportioning system
2016-01-01
PURPOSE Alginate mixers available on the market do not have an automatic proportioning unit. In this study, an automatic proportioning unit for the alginate mixer, together with controller software for the new unit, was designed and produced. With this device, the proportioning operation could dispense alginate impression materials by weight. MATERIALS AND METHODS The coefficient of variation in the tested groups was compared with that of manual proportioning. Compression, tension, and tear tests were conducted to determine the mechanical properties of the alginate impression materials. The experimental data were statistically analyzed using one-way ANOVA and the Tukey test at the 0.05 level of significance. RESULTS No statistically significant differences in modulus of elasticity (P>0.3), tensile/compressive strength (P>0.3), resilience (P>0.2), strain at failure (P>0.4), or tear energy (P>0.7) of the alginate impression materials were seen. However, a decrease in the standard deviation of the tested groups was observed when the customized machine was used. To verify the efficiency of the system, the dispensed powder and powder/water mixtures were weighed, and a significant decrease in variability was observed. CONCLUSION It was possible to obtain more mechanically stable alginate impression materials by using the custom-made proportioning unit. PMID:27826387
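The statistical comparison described (one-way ANOVA followed by Tukey's test) can be sketched generically as below; the group data are simulated placeholders, not the study's measurements.

```python
# Generic sketch: one-way ANOVA across proportioning methods, then Tukey's HSD post-hoc test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(8)
manual = rng.normal(1.50, 0.12, 10)      # e.g. tear energy, manual proportioning
custom = rng.normal(1.52, 0.05, 10)      # customized proportioning unit
machine = rng.normal(1.49, 0.06, 10)     # commercial mixer

print(f_oneway(manual, custom, machine))                      # F statistic and p-value

values = np.concatenate([manual, custom, machine])
groups = ["manual"] * 10 + ["custom"] * 10 + ["machine"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```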
An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.
ERIC Educational Resources Information Center
Capraro, Mary Margaret
This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…
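For readers unfamiliar with the computations, a short illustration (not taken from the paper) of a 95% confidence interval for a mean and an approximate interval for one common effect size, Cohen's d, is given below; the large-sample standard-error formula for d is a standard approximation.

```python
# Illustration: 95% CI for a mean, and an approximate CI for Cohen's d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a, b = rng.normal(102, 15, 40), rng.normal(95, 15, 40)

# CI for the mean of group a (t distribution)
ci_mean = stats.t.interval(0.95, df=len(a) - 1, loc=a.mean(), scale=stats.sem(a))

# Cohen's d with an approximate CI based on its large-sample standard error
n1, n2 = len(a), len(b)
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
d = (a.mean() - b.mean()) / sp
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
ci_d = (d - 1.96 * se_d, d + 1.96 * se_d)

print(ci_mean, d, ci_d)
```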
Statistics on Children with Visual Impairments.
ERIC Educational Resources Information Center
Viisola, Michelle
This report summarizes statistical data relating to children with visual impairments, including incidence, causes, and education. Data include: (1) prevalence estimates that indicate 1 percent of persons under the age of 18 in the United States have a visual impairment that cannot be corrected with glasses; (2) the leading cause of childhood…
The Progress of Nations, 2000.
ERIC Educational Resources Information Center
United Nations Children's Fund, New York, NY.
This report summarizes the latest available statistics on international progress on children's well-being. Each section of the report contains a commentary, related statistics, and a discussion on progress and disparity in the section's particular area. Following a foreword by United Nations Secretary-General Kofi A. Annan, the sections of the…
La estadistica en el planeamiento educativo (Statistics in Educational Planning).
ERIC Educational Resources Information Center
Leon Pacheco, Tomas
1971-01-01
This document is an English-language abstract (approximately 1500 words) summarizing the author's definitions of the principal physical and human characteristics of elementary and secondary education as presently constituted in Mexico so that school personnel may comply with Mexican regulations that force them to supply educational statistics. For…
BLS Machine-Readable Data and Tabulating Routines.
ERIC Educational Resources Information Center
DiFillipo, Tony
This report describes the machine-readable data and tabulating routines that the Bureau of Labor Statistics (BLS) is prepared to distribute. An introduction discusses the LABSTAT (Labor Statistics) database and the BLS policy on release of unpublished data. Descriptions summarizing data stored in 25 files follow this format: overview, data…
The western juniper resource of eastern Oregon, 1999.
David L. Azuma; Bruce A. Hiserote; Paul A. Dunham
2005-01-01
This report summarizes resource statistics for eastern Oregon's juniper forests, which are in Baker, Crook, Deschutes, Gilliam, Grant, Harney, Jefferson, Klamath, Lake, Malheur, Morrow, Sherman, Umatilla, Union, Wallowa, Wasco, and Wheeler Counties. We sampled all ownerships outside of the National Forest System; we report the statistics on juniper forest on...
The Progress of Nations, 1999.
ERIC Educational Resources Information Center
United Nations Children's Fund, New York, NY.
This report summarizes the latest available statistics on international progress on children's well-being. Each of the report's sections contains a commentary, related statistics, and a discussion on progress and disparity in the section's particular area. Following a foreword by United Nations Secretary-General Kofi A. Annan, the sections of the…
Environmental Radiation Effects: A Need to Question Old Paradigms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinton, T.G.; Bedford, J.; Ulsh, B.
2003-03-27
A historical perspective is given of the current paradigm, which does not explicitly protect the environment from radiation but instead relies on the concept that if dose limits are set to protect humans, then the environment is automatically protected as well. We summarize recent international questioning of this paradigm and briefly present three different frameworks for protecting biota that are being considered by the U.S. DOE, the Canadian government, and the International Commission on Radiological Protection. We emphasize that enhanced collaboration is required between what have traditionally been the separate disciplines of radiation biology and radiation ecology if we are going to properly address current environmental radiation problems. We then summarize results generated from an EMSP grant that allowed us to develop a Low Dose Irradiation Facility that specifically addresses the effects of low-level, chronic irradiation on multiple levels of biological organization.
A Method for Extracting Important Segments from Documents Using Support Vector Machines
NASA Astrophysics Data System (ADS)
Suzuki, Daisuke; Utsumi, Akira
In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not using Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the SVM classification result. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. Our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.
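The segment-classification step can be sketched with off-the-shelf tools; the example below uses bag-of-words features and a linear SVM on a handful of made-up segments, whereas the paper's system additionally relies on positional and dependency-structure features.

```python
# Sketch: classify text segments as summary-worthy or not with a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_segments = [
    "the proposed method improves accuracy by five percent",
    "we thank the anonymous reviewers",
    "results show the system outperforms the baseline",
    "the remainder of this paper is organized as follows",
]
labels = [1, 0, 1, 0]   # 1 = important segment, 0 = not important

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_segments, labels)

test_segments = ["our system achieves better performance than the lead method"]
print(clf.predict(test_segments))
```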
Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel
2016-05-01
The purpose of this study was to develop a multi-platform automatic software tool for full processing of fMRI rodent studies. Existing tools require the use of several different plug-ins, significant user interaction, and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks(®)) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either Nifti or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat.
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
ON THE DYNAMICAL DERIVATION OF EQUILIBRIUM STATISTICAL MECHANICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prigogine, I.; Balescu, R.; Henin, F.
1960-12-01
Work on nonequilibrium statistical mechanics, which allows an extension of the kinetic proof to all results of equilibrium statistical mechanics involving a finite number of degrees of freedom, is summarized. As an introduction to the general N-body problem, the scattering theory in classical mechanics is considered. The general N-body problem is considered for the case of classical mechanics, quantum mechanics with Boltzmann statistics, and quantum mechanics including quantum statistics. Six basic diagrams, which describe the elementary processes of the dynamics of correlations, were obtained. (M.C.G.)
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S
2008-01-01
Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.
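To illustrate the s-t min-cut formulation mentioned above, the sketch below segments a tiny 1-D "image" with networkx; the unary and pairwise capacities are toy values standing in for the paper's CRF potentials learned from brush strokes:
import networkx as nx
pixels = [0.1, 0.2, 0.8, 0.9]            # toy intensities; the target tissue is "bright"
G = nx.DiGraph()
for i, v in enumerate(pixels):
    G.add_edge("s", i, capacity=v)        # paid if pixel i ends up labelled background
    G.add_edge(i, "t", capacity=1.0 - v)  # paid if pixel i ends up labelled target
for i in range(len(pixels) - 1):          # pairwise smoothness between neighbours
    G.add_edge(i, i + 1, capacity=0.5)
    G.add_edge(i + 1, i, capacity=0.5)
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
target_pixels = sorted(p for p in source_side if p != "s")   # bright pixels stay with the source
print(cut_value, target_pixels)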
A versatile software package for inter-subject correlation based analyses of fMRI.
Kauppi, Jukka-Pekka; Pajula, Juha; Tohka, Jussi
2014-01-01
In the inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in the corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, and thus ISC is a potential method for analyzing fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC-based approach for analyzing complex fMRI data, no generic software tools have been made available for this purpose, limiting the widespread use of ISC-based analysis techniques in the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC based analyses. Many advanced computations such as comparison of ISCs between different stimuli, time window ISC, and inter-subject phase synchronization are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. The ISC based analyses are data and computation intensive, and the ISC Toolbox is equipped with mechanisms to execute the parallel computations in a cluster environment automatically, with automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of a Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. In this paper, we present a detailed account of the methods behind the ISC Toolbox, describe the implementation of the toolbox, and demonstrate its possible use by summarizing selected example applications. We also report computation-time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces the computing time. The ISC Toolbox is available at https://code.google.com/p/isc-toolbox/
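A minimal NumPy sketch of the ISC statistic at a single voxel (the average pairwise Pearson correlation across subjects' time series), using toy data rather than the ISC Toolbox code:
import numpy as np
from itertools import combinations
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((5, 200))     # 5 subjects x 200 time points (toy data)
pairwise_r = [np.corrcoef(timeseries[i], timeseries[j])[0, 1]
              for i, j in combinations(range(timeseries.shape[0]), 2)]
isc = float(np.mean(pairwise_r))               # mean pairwise correlation = ISC at this voxel
print(isc)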
NASA Astrophysics Data System (ADS)
Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.
2017-11-01
The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2], but focused on the automation of the calibration of measurements using spherical probes, is outlined. The newly added code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this is a method that can be applied when using viscoelastic fluids if the trap stiffness is previously estimated [4]. The new code can be executed in MatLab and using GNU Octave. Program Files doi:http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem (approx. 50-250 words): The original code uses a MatLab-provided user interface, which is not available in GNU Octave and cannot be used outside proprietary software such as MatLab. In addition, calibration with spherical probes requires an automatic method when large amounts of data are processed for microrheology. Solution method (approx. 50-250 words): The new code can be executed in the latest version of MatLab and in GNU Octave, a free and open-source alternative to MatLab. This code implements an automatic calibration process that requires only writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness is previously estimated. Reasons for the new version: This version extends the functionality of PFMCal to the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works on different operating systems and is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. Here, we have added two additional main scripts named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculation of the calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided in the new code submission. Here, we obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the particle's radius, a, is known.
For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Besides, with the prior estimation of the trap stiffness, along with the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors, β, according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2. This method has been demonstrated to be applicable to the calibration of optical tweezers when using non-Newtonian viscoelastic polymeric liquids [4]. An example of the results using this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone fluids, we calculate all the calibration factors by using the original PFMCal.m and by the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows how we obtain the expected viscosity of the two fluids at this temperature and how the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including Restrictions and Unusual features (approx. 50-250 words): The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification using MatLab or GNU Octave. The code has been tested in Linux and Windows operating systems.
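As a generic illustration of the quantity underlying the βMSD calibration factor discussed above, the sketch below computes a mean-squared displacement from a detector trace; the data, sampling interval, and random-walk signal are toy assumptions and this is not the PFMCal_auto.m implementation:
import numpy as np
dt = 1e-5                                    # sampling interval in seconds (assumed)
x_volts = np.cumsum(np.random.default_rng(1).standard_normal(100000)) * 1e-4  # toy detector signal
def msd(signal, max_lag):
    # MSD at each lag: mean of squared displacements between samples separated by that lag.
    return np.array([np.mean((signal[lag:] - signal[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])
lags_s = dt * np.arange(1, 201)              # time lags corresponding to the MSD values below
msd_volts2 = msd(x_volts, 200)
# With a calibration factor beta (um/V), the physical MSD would be beta**2 * msd_volts2.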
Figure-associated text summarization and evaluation.
Polepalli Ramesh, Balaji; Sethi, Ricky J; Yu, Hong
2015-01-01
Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, that automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+. We evaluate the performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving F1 score of 0.66 and ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903).
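For readers unfamiliar with the ROUGE-1 score used in this evaluation, a simple pure-Python sketch of ROUGE-1 recall between a system summary and a reference is shown below (illustrative only; the paper used standard ROUGE tooling on the annotated figure data):
from collections import Counter
def rouge1_recall(system, reference):
    # Fraction of reference unigrams that also appear in the system summary (clipped counts).
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(c, sys_counts[w]) for w, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)
print(rouge1_recall("the figure shows protein expression levels",
                    "figure shows expression of the protein"))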
Automated determination of dust particles trajectories in the coma of comet 67P
NASA Astrophysics Data System (ADS)
Marín-Yaseli de la Parra, J.; Küppers, M.; Perez Lopez, F.; Besse, S.; Moissl, R.
2017-09-01
During the more than two years Rosetta spent at comet 67P, it took thousands of images that contain individual dust particles. To derive statistics of the dust properties, automatic image analysis is required. We present a new methodology for fast dust identification that uses a star-mask reference system to match a set of images automatically. The main goal is to derive particle size distributions and to determine whether traces of the size distribution of primordial pebbles are still present in today's cometary dust [1].
Parallel auto-correlative statistics with VTK.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2013-08-01
This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
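The report's engines are C++/VTK; purely to illustrate the statistic itself, a NumPy sketch of a lag-k autocorrelation is given below (not the VTK implementation):
import numpy as np
def autocorrelation(x, lag):
    # Normalized autocorrelation of a mean-centred series at the given lag.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x)) if lag > 0 else 1.0
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(2).standard_normal(500)
print([round(autocorrelation(series, k), 3) for k in range(5)])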
Timber resource statistics for eastern Washington, 1995.
Neil McKay; Patricia M. Bassett; Colin D. MacLean
1995-01-01
This report summarizes a 1990-91 timber resource inventory of Washington east of the crest of the Cascade Range. The inventory was conducted on all private and public lands except National Forests. Timber resource statistics from National Forest inventories also are presented. Detailed tables provide estimates of forest area, timber volume, growth, mortality, and...
Juvenile Court Statistics, 1974.
ERIC Educational Resources Information Center
Corbett, Jacqueline; Vereb, Thomas S.
This report presents information on juvenile court processing of youth in the U.S. during 1974. It is based on data gathered under the National Juvenile Court Statistical Reporting System. Findings can be summarized as follows: (1) 1,252,700 juvenile delinquency cases, excluding traffic offenses, were handled by courts in the U.S. in 1974; (2) the…
American Indian Population Statistics.
ERIC Educational Resources Information Center
Thomason, Timothy C., Ed.
This report summarizes American Indian and Alaska Native (AI/AN) population statistics from the 1990 Census. In 1990 there were about 2 million persons who identified themselves as American Indians in the United States, a 38 percent increase over the 1980 census. More than half of the Indian population lived in six states, with Oklahoma having the…
Statistical description of tectonic motions
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
1993-01-01
This report summarizes investigations regarding tectonic motions. The topics discussed include statistics of crustal deformation, Earth rotation studies, using multitaper spectrum analysis techniques applied to both space-geodetic data and conventional astrometric estimates of the Earth's polar motion, and the development, design, and installation of high-stability geodetic monuments for use with the global positioning system.
Design and implementation of fishery rescue data mart system
NASA Astrophysics Data System (ADS)
Pan, Jun; Huang, Haiguang; Liu, Yousong
A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process to handle original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistical interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including Crystal Reports, Flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for nontechnical staff to access customized reports. Characterized by quick configuration, safety, and flexibility, the system has been successfully applied in a city fishery rescue department.
GAFFE: a gaze-attentive fixation finding engine.
Rajashekar, U; van der Linde, I; Bovik, A C; Cormack, L K
2008-04-01
The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
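As a generic sketch of two of the low-level patch statistics discussed above (mean luminance and RMS contrast) used to rank candidate fixation patches, the toy NumPy code below is illustrative only and is not the GAFFE implementation:
import numpy as np
def patch_features(image, y, x, size=32):
    patch = image[y:y + size, x:x + size].astype(float)
    luminance = patch.mean()
    rms_contrast = patch.std() / (patch.mean() + 1e-9)   # RMS contrast of the patch
    return luminance, rms_contrast
image = np.random.default_rng(3).integers(0, 256, (256, 256))   # toy grayscale image
candidates = [(y, x) for y in range(0, 224, 32) for x in range(0, 224, 32)]
best = max(candidates, key=lambda p: patch_features(image, *p)[1])   # highest-contrast patch
print(best)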
Accelerometry-based classification of human activities using Markov modeling.
Mannini, Andrea; Sabatini, Angelo Maria
2011-01-01
Accelerometers are a popular choice as body-motion sensors, partly because of their capability of extracting information useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where contextual awareness may ease human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, without discarding the time history of previous outcomes as GMMs do. An example of the benefits of this statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
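To make the GMM side of this comparison concrete, the sketch below fits one Gaussian mixture per activity class and classifies feature windows by maximum likelihood with scikit-learn; the feature vectors and class names are invented, and the paper's HMM classifiers (which additionally model the sequence of windows) are not reproduced:
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.default_rng(4)
walk = rng.normal(loc=[1.0, 0.2], scale=0.2, size=(200, 2))    # toy feature vectors per class
rest = rng.normal(loc=[0.1, 0.05], scale=0.05, size=(200, 2))
models = {"walk": GaussianMixture(n_components=2).fit(walk),
          "rest": GaussianMixture(n_components=2).fit(rest)}
def classify(window):
    # Assign the window to the class whose mixture gives the highest log-likelihood.
    return max(models, key=lambda k: models[k].score(window.reshape(1, -1)))
print(classify(np.array([0.9, 0.25])), classify(np.array([0.12, 0.04])))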
Torres, M E; Añino, M M; Schlotthauer, G
2003-12-01
It is well known that, from a dynamical point of view, sudden variations in the physiological parameters which govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool exploits the ability of multiresolution entropies to reveal these changes as statistical variations at each scale. These variations are captured in the corresponding principal component. By appropriately combining these techniques with a statistical change detector, a complexity change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
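A rough sketch of a multiresolution entropy profile is given below: the Shannon entropy of wavelet detail coefficients is computed in sliding windows with PyWavelets. It is illustrative only; the wavelet family, window length, and bin count are assumptions, and the paper's principal-component combination and statistical change detector are not reproduced:
import numpy as np
import pywt
def window_entropy(coeffs, win=64, bins=16):
    ents = []
    for start in range(0, len(coeffs) - win, win):
        hist, _ = np.histogram(coeffs[start:start + win], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        ents.append(float(-(p * np.log(p)).sum()))     # Shannon entropy of this window
    return ents
signal = np.concatenate([np.sin(0.2 * np.arange(2048)),
                         np.sin(0.2 * np.arange(2048)) + 0.3 * np.random.default_rng(5).standard_normal(2048)])
details = pywt.wavedec(signal, "db4", level=3)[1:]      # detail coefficients, one array per scale
profiles = [window_entropy(d) for d in details]         # one entropy time course per scale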
Cole, William G.; Michael, Patricia; Blois, Marsden S.
1987-01-01
A computer program was created to use information about the statistical distribution of words in journal abstracts to make probabilistic judgments about the level of description (e.g. molecular, cell, organ) of medical text. Statistical analysis of 7,409 journal abstracts taken from three medical journals representing distinct levels of description revealed that many medical words seem to be highly specific to one or another level of description. For example, the word adrenoreceptors occurred only in the American Journal of Physiology, never in the Journal of Biological Chemistry or the Journal of the American Medical Association. Such highly specific words occurred so frequently that the automatic classification program was able to correctly classify 45 out of 45 test abstracts, with 100% confidence. These findings are interpreted in terms of both a theory of the structure of medical knowledge and the pragmatics of automatic classification.
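A minimal sketch of classification by level-specific word statistics, in the spirit described above, is shown below with a naive-Bayes-like score; the word lists are invented and far smaller than the 7,409-abstract corpus, and this is not the original program:
from collections import Counter
import math
level_docs = {
    "molecular": "receptor kinase phosphorylation adrenoreceptors binding enzyme",
    "organ":     "ventricle perfusion cardiac output renal blood pressure",
}
level_counts = {lvl: Counter(text.split()) for lvl, text in level_docs.items()}
vocab = len(set().union(*[set(c) for c in level_counts.values()]))
def classify(abstract, alpha=1.0):
    words = abstract.lower().split()
    scores = {}
    for lvl, counts in level_counts.items():
        total = sum(counts.values())
        # Smoothed log-probability of the abstract's words under this level's word distribution.
        scores[lvl] = sum(math.log((counts[w] + alpha) / (total + alpha * vocab)) for w in words)
    return max(scores, key=scores.get)
print(classify("adrenoreceptors and kinase binding"))   # -> molecular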
NASA Technical Reports Server (NTRS)
1978-01-01
The techniques, processes, and equipment required for automatic fabrication and assembly of structural elements in space using the space shuttle as a launch vehicle and construction base were investigated. Additional construction/systems/operational techniques, processes, and equipment which can be developed/demonstrated in the same program to provide further risk reduction benefits to future large space systems were included. Results in the areas of structure/materials, fabrication systems (beam builder, assembly jig, and avionics/controls), mission integration, and programmatics are summarized. Conclusions and recommendations are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael Allen; Marker, Bryan
This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs
2012-09-01
Connecting the Dots has also been explored in non-textual domains: the authors of [Heath et al., 2010] propose building graphs, called Image Webs, to represent connectivity in image collections, and one could imagine a metro map summarizing a dataset of medical records.
[Application of iodine metabolism analysis methods in thyroid diseases].
Han, Jian-hua; Qiu, Ling
2013-08-01
The main physiological role of iodine in the body is to synthesize thyroid hormone. Both iodine deficiency and iodine excess can lead to severe thyroid diseases. While its role in thyroid diseases has increasingly been recognized, few relevant platforms and techniques for iodine detection have been available in China. This paper summarizes the advantages and disadvantages of current iodine detection methods, including direct titration, arsenic-cerium catalytic spectrophotometry, chromatography with pulsed amperometry, colorimetry based on automatic biochemistry analyzers, and inductively coupled plasma mass spectrometry, so as to help optimize iodine nutrition for patients with thyroid diseases.
An analysis of environmental data transmission
NASA Astrophysics Data System (ADS)
Yuan, Lina; Chen, Huajun; Gong, Jing
2017-05-01
Comprehensive construction of automatic environmental monitoring has become an urgent need of environmental management. It is a major measure for implementing the scientific outlook on development and building a harmonious socialist society, and an inevitable choice for "building a resource-conserving and environment-friendly society"; it is of great significance for adjusting the economic structure and transforming the growth pattern. This article first introduces the importance of environmental data transmission, then expounds the characteristics, key technologies, transmission modes, and design ideas of the environmental data transmission process, and finally summarizes the full text.
Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long
2014-09-12
Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of instrumental noise level coupled with first-order derivative of chromatographic signal to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degree of overlapped chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
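The two ingredients described above (a robust noise-level estimate and derivative/prominence-based peak picking) can be illustrated generically with SciPy; the sketch below uses a MAD-based noise estimate and find_peaks on a synthetic chromatogram, and is not the authors' code (the local curve-fitting background-drift correction is omitted):
import numpy as np
from scipy.signal import find_peaks
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2000)
chromatogram = (np.exp(-((t - 3) ** 2) / 0.01) + 0.6 * np.exp(-((t - 6) ** 2) / 0.02)
                + 0.05 * t + 0.01 * rng.standard_normal(t.size))   # two peaks + drift + noise
# Robust (median absolute deviation) estimate of the instrumental noise level.
noise = 1.4826 * np.median(np.abs(np.diff(chromatogram))) / np.sqrt(2)
peaks, _ = find_peaks(chromatogram, prominence=10 * noise)
print(t[peaks])        # retention times of the detected peaks, near t = 3 and t = 6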
Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki
2013-11-01
The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted. Out of these, 6% of drug label changes were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Subsequently, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.
Bergmeister, Konstantin D; Gröger, Marion; Aman, Martin; Willensdorfer, Anna; Manzano-Szalai, Krisztina; Salminger, Stefan; Aszmann, Oskar C
2016-08-01
Skeletal muscle consists of different fiber types which adapt to exercise, aging, disease, or trauma. Here we present a protocol for fast staining, automatic acquisition, and quantification of fiber populations with ImageJ. Biceps and lumbrical muscles were harvested from Sprague-Dawley rats. Quadruple immunohistochemical staining was performed on single sections using antibodies against myosin heavy chains and secondary fluorescent antibodies. Slides were scanned automatically with a slide scanner. Manual and automatic analyses were performed and compared statistically. The protocol provided rapid and reliable staining for automated image acquisition. Analyses between manual and automatic data indicated Pearson correlation coefficients for biceps of 0.645-0.841 and 0.564-0.673 for lumbrical muscles. Relative fiber populations were accurate to a degree of ± 4%. This protocol provides a reliable tool for quantification of muscle fiber populations. Using freely available software, it decreases the required time to analyze whole muscle sections. Muscle Nerve 54: 292-299, 2016. © 2016 Wiley Periodicals, Inc.
2013-01-01
Background T2-weighted cardiovascular magnetic resonance (CMR) is clinically-useful for imaging the ischemic area-at-risk and amount of salvageable myocardium in patients with acute myocardial infarction (MI). However, to date, quantification of oedema is user-defined and potentially subjective. Methods We describe a highly automatic framework for quantifying myocardial oedema from bright blood T2-weighted CMR in patients with acute MI. Our approach retains user input (i.e. clinical judgment) to confirm the presence of oedema on an image which is then subjected to an automatic analysis. The new method was tested on 25 consecutive acute MI patients who had a CMR within 48 hours of hospital admission. Left ventricular wall boundaries were delineated automatically by variational level set methods followed by automatic detection of myocardial oedema by fitting a Rayleigh-Gaussian mixture statistical model. These data were compared with results from manual segmentation of the left ventricular wall and oedema, the current standard approach. Results The mean perpendicular distances between automatically detected left ventricular boundaries and corresponding manual delineated boundaries were in the range of 1-2 mm. Dice similarity coefficients for agreement (0=no agreement, 1=perfect agreement) between manual delineation and automatic segmentation of the left ventricular wall boundaries and oedema regions were 0.86 and 0.74, respectively. Conclusion Compared to standard manual approaches, the new highly automatic method for estimating myocardial oedema is accurate and straightforward. It has potential as a generic software tool for physicians to use in clinical practice. PMID:23548176
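For reference, the Dice similarity coefficient used above to compare automatic and manual segmentations can be computed as in this generic NumPy sketch (illustrative only; not the level-set or Rayleigh-Gaussian mixture code):
import numpy as np
def dice(mask_a, mask_b):
    # Dice = 2 |A ∩ B| / (|A| + |B|); 0 = no agreement, 1 = perfect agreement.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
auto_mask = np.zeros((10, 10), dtype=bool); auto_mask[2:7, 2:7] = True
manual_mask = np.zeros((10, 10), dtype=bool); manual_mask[3:8, 3:8] = True
print(round(dice(auto_mask, manual_mask), 2))   # 0.64 for these toy masks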
ERIC Educational Resources Information Center
National Center for Health Statistics (DHHS/PHS), Hyattsville, MD.
This report summarizes current knowledge and research on the quality and reliability of death rates by race and Hispanic origin in official mortality statistics of the United States produced by the National Center for Health Statistics (NCHS). It provides a quantitative assessment of bias in death rates by race and Hispanic origin and identifies…
Smits-Bandstra, Sarah; De Nil, Luc F
2007-01-01
The basal ganglia and cortico-striato-thalamo-cortical connections are known to play a critical role in sequence skill learning and increasing automaticity over practice. The current paper reviews four studies comparing the sequence skill learning and the transition to automaticity of persons who stutter (PWS) and fluent speakers (PNS) over practice. Studies One and Two found PWS to have poor finger tap sequencing skill and nonsense syllable sequencing skill after practice, and on retention and transfer tests relative to PNS. Studies Three and Four found PWS to be significantly less accurate and/or significantly slower after practice on dual tasks requiring concurrent sequencing and colour recognition over practice relative to PNS. Evidence of PWS' deficits in sequence skill learning and automaticity development support the hypothesis that dysfunction in cortico-striato-thalamo-cortical connections may be one etiological component in the development and maintenance of stuttering. As a result of this activity, the reader will: (1) be able to articulate the research regarding the basal ganglia system relating to sequence skill learning; (2) be able to summarize the research on stuttering with indications of sequence skill learning deficits; and (3) be able to discuss basal ganglia mechanisms with relevance for theory of stuttering.
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Automatic thermographic image defect detection of composites
NASA Astrophysics Data System (ADS)
Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP
2011-05-01
Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence of operator decisions by the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot-arm and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria and automatically detect impact damage defects. A full width half maximum algorithm has been used to quantify the flaw sizes. The identified defects are imported into the sentencing engine which then sentences (automatically compares analysis results against acceptance criteria) the inspection by comparing the most significant defect or group of defects against the inspection standards.
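As a generic illustration of the full-width-at-half-maximum (FWHM) sizing mentioned above, the sketch below estimates the FWHM of a 1-D defect intensity profile; the profile is synthetic and this is not the Sentence software:
import numpy as np
def fwhm(profile):
    # Width (in pixels) of the region above half of the profile's dynamic range.
    profile = np.asarray(profile, dtype=float)
    half = (profile.max() + profile.min()) / 2.0
    above = np.where(profile >= half)[0]
    return above[-1] - above[0] + 1
x = np.arange(100)
profile = np.exp(-((x - 50) ** 2) / (2 * 8.0 ** 2))   # toy thermographic line profile (sigma = 8)
print(fwhm(profile))                                  # about 19 pixels, i.e. roughly 2.355 * sigma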
A PROPOSED CHEMICAL INFORMATION AND DATA SYSTEM. VOLUME I.
Descriptors: chemical compounds; data processing; information retrieval; chemical analysis; input/output devices; computer programming; classification; configurations; data storage systems; atoms; molecules; performance (engineering); maintenance; subject indexing; magnetic tape; automatic; military requirements; typewriters; optics; topology; statistical analysis; flow charting.
Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki
2014-03-01
Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods such as statistical shape models (SSM) require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on a robust algorithm such as the Hough transform and RANdom SAmple Consensus. The proposed method is divided into 3 steps: (1) detection of the femoral head as sphere and the femoral shaft as cylinder in the SSM and the CT images, (2) rigid registration between primitives of SSM and CT image to initialize the SSM into the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference of segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape position to initialize the SSM into the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
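To illustrate the sphere-detection idea used for the femoral head, the sketch below fits a sphere to random point quadruples and keeps the model with the most inliers (a simplified RANSAC on a synthetic point cloud; not the authors' implementation, which combines the Hough transform and RANSAC on CT data):
import numpy as np
def sphere_from_points(p):
    # Solve x^2 + y^2 + z^2 + D x + E y + F z + G = 0 for four points.
    A = np.hstack([p, np.ones((4, 1))])
    b = -np.sum(p ** 2, axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -np.array([D, E, F]) / 2.0
    radius = np.sqrt(center @ center - G)
    return center, radius
def ransac_sphere(points, iters=200, tol=1.0):
    rng = np.random.default_rng(7)
    best = (None, None, -1)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        try:
            c, r = sphere_from_points(sample)
        except np.linalg.LinAlgError:
            continue
        inliers = np.sum(np.abs(np.linalg.norm(points - c, axis=1) - r) < tol)
        if inliers > best[2]:
            best = (c, r, inliers)
    return best
# Toy point cloud: noisy sphere of radius 25 around (10, 20, 30) plus uniform outliers.
rng = np.random.default_rng(8)
dirs = rng.standard_normal((500, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = np.vstack([[10, 20, 30] + 25 * dirs + 0.3 * rng.standard_normal((500, 3)),
                    rng.uniform(-50, 80, (100, 3))])
center, radius, n_in = ransac_sphere(points)
print(np.round(center, 1), round(float(radius), 1), n_in)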
Automatic location of L/H transition times for physical studies with a large statistical basis
NASA Astrophysics Data System (ADS)
González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA
2012-06-01
Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step to accomplish (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. A prediction with discharges that show a clear signature in time series is made through the locating properties of the wavelet transform. It is an accurate prediction and the uncertainty interval is ±3.2 ms. The discharges with a non-clear pattern in the time series use an L/H mode classifier based on discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by the experts. The automatic methods allow performing physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
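A rough sketch of locating an abrupt transition via the localisation property of the wavelet transform is given below, using a PyWavelets continuous wavelet transform on a toy step-like signal; the wavelet, scale, and signal are assumptions, and the paper's classifier branch for unclear discharges and the JET diagnostics are not reproduced:
import numpy as np
import pywt
t = np.linspace(0, 1, 4000)
signal = np.where(t < 0.6, 1.0, 2.0) + 0.05 * np.random.default_rng(9).standard_normal(t.size)
coeffs, _ = pywt.cwt(signal, scales=[8], wavelet="gaus1")   # first-derivative-of-Gaussian wavelet
transition_index = int(np.argmax(np.abs(coeffs[0])))        # strongest response at the step
print(t[transition_index])                                  # close to 0.6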
Junger, Axel; Brenck, Florian; Hartmann, Bernd; Klasen, Joachim; Quinzio, Lorenzo; Benson, Matthias; Michel, Achim; Röhrig, Rainer; Hempelmann, Gunter
2004-07-01
The most recent approach to estimating nursing resource consumption has led to the generation of the Nine Equivalents of Nursing Manpower use Score (NEMS). The objective of this prospective study was to establish a completely automatic calculation of the NEMS using a patient data management system (PDMS) database and to validate this approach by comparing the results with those of the conventional manual method. Prospective study. Operative intensive care unit of a university hospital. Patients admitted to the ICU between 24 July 2002 and 22 August 2002. Patients under the age of 16 years, and patients undergoing cardiovascular surgery or with burn injuries were excluded. None. The NEMS of all patients was calculated automatically with a PDMS and manually by a physician in parallel. The results of the two methods were compared using the Bland-Altman approach, the intraclass correlation coefficient (ICC), and the kappa statistic. On 20 consecutive working days, the NEMS was calculated in 204 cases. The Bland-Altman analysis did not show significant differences in NEMS scoring between the two methods. The ICC (95% confidence interval) of 0.87 (0.84-0.90) revealed a high inter-rater agreement between the PDMS and the physician. The kappa statistic showed good results (kappa > 0.55) for all NEMS items apart from the item "supplementary ventilatory care". This study demonstrates that automatic calculation of the NEMS is possible with high accuracy by means of a PDMS. This may lead to a decrease in the consumption of nursing resources.
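For reference, the item-level agreement statistic mentioned above (Cohen's kappa) can be computed as in this sketch, here on hypothetical scores for a single NEMS item rather than the study's data:
from sklearn.metrics import cohen_kappa_score
pdms_item = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]       # hypothetical automatic (PDMS) scores
physician_item = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # hypothetical manual scores
print(cohen_kappa_score(pdms_item, physician_item))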
Reliable clarity automatic-evaluation method for optical remote sensing images
NASA Astrophysics Data System (ADS)
Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen
2015-10-01
Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality evaluation index for optical remote sensing images. A considerable amount of work has been done on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is an effective approach for clarity estimation, but it requires the edges to be picked out manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method, based on the principle of the edge acutance algorithm, is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically search for object edges in images. Moreover, the calculation algorithm for edge sharpness has been improved. The new method has been tested with several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
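In the spirit of the edge-based sharpness measure described above, the sketch below computes a gradient-magnitude (Tenengrad-style) score restricted to automatically selected strong-gradient pixels; it is a generic NumPy/SciPy illustration, not the authors' algorithm, and the quantile threshold is an assumption:
import numpy as np
from scipy import ndimage
def edge_sharpness(image, edge_quantile=0.99):
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    edges = grad >= np.quantile(grad, edge_quantile)   # crude automatic edge selection
    return float(grad[edges].mean())
image = np.zeros((128, 128)); image[:, 64:] = 255.0    # toy step-edge test image
blurred = ndimage.gaussian_filter(image, 2.0)
print(edge_sharpness(image), edge_sharpness(blurred))  # the sharper image scores higher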
Automatically inserted technical details improve radiology report accuracy.
Abujudeh, Hani H; Govindan, Siddharth; Narin, Ozden; Johnson, Jamlik Omari; Thrall, James H; Rosenthal, Daniel I
2011-09-01
To assess the effect of automatically inserted technical details on the concordance of a radiology report header with the actual procedure performed. The study was IRB approved and informed consent was waived. We obtained radiology report audit data from the hospital's compliance office from the period of January 2005 through December 2009 spanning a total of 20 financial quarters. A "discordance percentage" was defined as the percentage of total studies in which a procedure code change was made during auditing. Using Chi-square analysis we compared discordance percentages between reports with manually inserted technical details (MITD) and automatically inserted technical details (AITD). The second quarter data of 2007 was not included in the analysis as the switch from MITD to AITD occurred during this quarter. The hospital's compliance office audited 9,110 studies from 2005-2009. Excluding the 564 studies in the second quarter of 2007, we analyzed a total of 8,546 studies, 3,948 with MITD and 4,598 with AITD. The discordance percentage in the MITD group was 3.95% (156/3,948, range per quarter, 1.5- 6.1%). The AITD discordance percentage was 1.37% (63/4,598, range per quarter, 0.0-2.6%). A Chi-square analysis determined a statistically significant difference between the 2 groups (P < 0.001). There was a statistically significant improvement in the concordance of a radiology report header with the performed procedure using automatically inserted technical details compared to manually inserted details. Copyright © 2011 American College of Radiology. Published by Elsevier Inc. All rights reserved.
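The 2x2 comparison described above can be reproduced with SciPy's chi-square test using the counts reported in the abstract (discordant vs. concordant reports for the MITD and AITD groups); this is only a re-check of the published numbers, not the original analysis code:
from scipy.stats import chi2_contingency
table = [[156, 3948 - 156],    # manually inserted technical details: discordant, concordant
         [63, 4598 - 63]]      # automatically inserted technical details: discordant, concordant
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 1), p)       # p well below 0.001, consistent with the reported result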
Höink, Anna Janina; Schülke, Christoph; Koch, Raphael; Löhnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris
2017-11-01
Purpose To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Materials and Methods Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. Results The median RID for vitality-based parameters was lower from semi-automatic than from manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically noticeable (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease cannot be found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions. Key points · Liver lesion measurements according to EASL and mRECIST are more precise when performed semi-automatically.. · The higher reproducibility may facilitate a more reliable classification of therapy response.. · Measurements according to RECIST and WHO offer equivalent precision semi-automatically and manually.. Citation Format · Höink AJ, Schülke C, Koch R et al. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT. Fortschr Röntgenstr 2017; 189: 1067 - 1075. © Georg Thieme Verlag KG Stuttgart · New York.
Illinois' Forests, 2005: Statistics, Methods, and Quality Assurance
Susan J. Crocker; Charles J. Barnett; Mark A. Hatfield
2013-01-01
The first full annual inventory of Illinois' forests was completed in 2005. This report contains 1) descriptive information on methods, statistics, and quality assurance of data collection, 2) a glossary of terms, 3) tables that summarize quality assurance, and 4) a core set of tabular estimates for a variety of forest resources. A detailed analysis of inventory...
ERIC Educational Resources Information Center
Bailey, Thomas; Jenkins, Davis; Leinbach, Timothy
2005-01-01
This report summarizes statistics on access and attainment in higher education, focusing particularly on community college students, using data from the National Education Longitudinal Study of 1988 (NELS:88), which follows a nationally representative sample of individuals who were eighth graders in the spring of 1988. A sample of these…
Recent progress in automatically extracting information from the pharmacogenomic literature
Garten, Yael; Coulet, Adrien; Altman, Russ B
2011-01-01
The biomedical literature holds our understanding of pharmacogenomics, but it is dispersed across many journals. In order to integrate our knowledge, connect important facts across publications and generate new hypotheses we must organize and encode the contents of the literature. By creating databases of structured pharmocogenomic knowledge, we can make the value of the literature much greater than the sum of the individual reports. We can, for example, generate candidate gene lists or interpret surprising hits in genome-wide association studies. Text mining automatically adds structure to the unstructured knowledge embedded in millions of publications, and recent years have seen a surge in work on biomedical text mining, some specific to pharmacogenomics literature. These methods enable extraction of specific types of information and can also provide answers to general, systemic queries. In this article, we describe the main tasks of text mining in the context of pharmacogenomics, summarize recent applications and anticipate the next phase of text mining applications. PMID:21047206
Automated metadata--final project report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schissel, David
This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and the Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research, illustrating the general applicability of the project's toolkit. Feedback on the project's toolkit and on the value of such automatic workflow documentation to the scientific endeavor was very positive.
Concept and development of a computerized positioning of prosthetic teeth for complete dentures.
Busch, M; Kordass, B
2006-04-01
To date, CAD/CAM technology has made no noteworthy inroads into removable dentures. In this study we present a new area of application for it. Models of the maxilla and edentulous mandible were 3D scanned. The software detects and automatically reconstructs the reference structures that are anatomically important for the set-up of artificial teeth, such as the alveolar ridge centerlines and the interalveolar relations between the alveolar ridges. In a further step, the occlusal plane is semiautomatically defined and the front dental arch is designed. After these design features have been determined, artificial teeth are selected from a database and set up automatically. The dental technician can assess the esthetics and function of the suggested dental set-up on the computer screen and make slight corrections if necessary. In summary, we present the interplay of hardware and software components within an integrated solution, including the conversion of the "virtual" set-up into a real positioning of prosthetic teeth.
NASA Technical Reports Server (NTRS)
Mysoor, N. R.; Perret, J. D.; Kermode, A. W.
1991-01-01
The design concepts and measured performance characteristics are summarized of an X band (7162 MHz/8415 MHz) breadboard deep space transponder (DSP) for future spacecraft applications, with the first use scheduled for the Comet Rendezvous Asteroid Flyby (CRAF) and Cassini missions in 1995 and 1996, respectively. The DST consists of a double conversion, superheterodyne, automatic phase tracking receiver, and an X band (8415 MHz) exciter to drive redundant downlink power amplifiers. The receiver acquires and coherently phase tracks the modulated or unmodulated X band (7162 MHz) uplink carrier signal. The exciter phase modulates the X band (8415 MHz) downlink signal with composite telemetry and ranging signals. The receiver measured tracking threshold, automatic gain control, static phase error, and phase jitter characteristics of the breadboard DST are in good agreement with the expected performance. The measured results show a receiver tracking threshold of -158 dBm and a dynamic signal range of 88 dB.
Automatic Surveying For Hazard Prevention On Glacier De Giétro, Switzerland
NASA Astrophysics Data System (ADS)
Bauder, A.; Funk, M.; Bösch, H.
Breaking off of large ice masses from the steep tongue of Glacier de Giétro may endanger a nearby reservoir. Such a falling ice mass could cause an oversplash over the dam at times of a nearly filled lake. For this reason the glacier has been monitored intensively since the 1960s. An automatic theodolite was installed three years ago. It allows continuous displacement measurements of several targets on the glacier in order to detect short-term acceleration events. The installation includes a telemetric data transmission, which provides for immediate recognition of hazardous situations and early alarming. The obtained data were analysed in terms of precision and performance of the applied method. A high temporal resolution was gained. The comparison with traditional observations clearly shows the potential of modern instruments to improve monitoring schemes. We summarize the main results of this study and discuss the applicability of a modern motorized theodolite with target tracking and recognition ability for monitoring purposes.
Strategic planning of developing automatic optical inspection (AOI) technologies in Taiwan
NASA Astrophysics Data System (ADS)
Fan, K. C.; Hsu, C.
2005-01-01
In most domestic hi-tech industries in Taiwan, automatic optical inspection (AOI) equipment is mostly imported. In view of the required specifications, AOI consists of the integration of mechanical, electrical, optical, and information technologies. In the past two decades, traditional industries have lost their competitiveness due to low profit rates. It is possible to promote a new AOI industry in Taiwan by integrating its strong mechatronic background in positioning stages with optical image processing techniques. Market requirements are huge, both domestically and globally. This is the main reason to promote AOI research in Taiwan for the coming years. Focused industrial applications will be in IC, PCB, LCD, communication, and MEMS parts. This paper analyzes the domestic and global AOI equipment market, summarizes the necessary fishbone technology diagrams, surveys actual industrial needs, and proposes the strategic plan to be promoted in Taiwan.
Beyond Information Retrieval—Medical Question Answering
Lee, Minsuk; Cimino, James; Zhu, Hai Ran; Sable, Carl; Shanker, Vijay; Ely, John; Yu, Hong
2006-01-01
Physicians have many questions when caring for patients, and frequently need to seek answers to them. Information retrieval systems (e.g., PubMed) typically return a list of documents in response to a user's query. Frequently the number of returned documents is large and makes physicians' information seeking "practical only 'after hours' and not in the clinical settings". Question answering techniques are based on automatically analyzing thousands of electronic documents to generate short-text answers in response to clinical questions posed by physicians. The authors address physicians' information needs and describe the design, implementation, and evaluation of the medical question answering system (MedQA). Although our long-term goal is to enable MedQA to answer all types of medical questions, we currently implement MedQA to integrate information retrieval, extraction, and summarization techniques to automatically generate paragraph-level text for definitional questions (i.e., "What is X?"). MedQA can be accessed at http://www.dbmi.columbia.edu/~yuh9001/research/MedQA.html. PMID:17238385
Exceedance statistics of accelerations resulting from thruster firings on the Apollo-Soyuz mission
NASA Technical Reports Server (NTRS)
Fichtl, G. H.; Holland, R. L.
1981-01-01
Spacecraft acceleration resulting from firings of vernier control system thrusters is an important consideration in the design, planning, execution and post-flight analysis of laboratory experiments in space. In particular, scientists and technologists involved with the development of experiments to be performed in space in many instances required statistical information on the magnitude and rate of occurrence of spacecraft accelerations. Typically, these accelerations are stochastic in nature, so that it is useful to characterize these accelerations in statistical terms. Statistics of spacecraft accelerations are summarized.
Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C
2014-12-01
Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait-scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar performance to the reference method. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
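A minimal sketch of the modeling step as described, using scikit-learn's PLSRegression as a stand-in for partial least squares discriminant analysis; the dimensions follow the abstract (88 cows, 320 variables), but the data, number of components, and 0.5 cut-off are illustrative assumptions.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic stand-in data: 88 cows x 320 week-summarized AMS variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(88, 320))
    y = rng.integers(0, 2, size=88)        # 1 = clinically lame, 0 = nonlame

    # PLS-DA: regress the class label on X, then threshold the continuous score.
    pls = PLSRegression(n_components=5)
    score = cross_val_predict(pls, X, y, cv=5).ravel()
    pred = (score > 0.5).astype(int)       # in practice the cut-off comes from the ROC curve
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")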
Automatic Event Detection in Search for Inter-Moss Loops in IRIS Si IV Slit-Jaw Images
NASA Technical Reports Server (NTRS)
Fayock, Brian; Winebarger, Amy R.; De Pontieu, Bart
2015-01-01
The high-resolution capabilities of the Interface Region Imaging Spectrometer (IRIS) mission have allowed the exploration of the finer details of the solar magnetic structure from the chromosphere to the lower corona that have previously been unresolved. Of particular interest to us are the relatively short-lived, low-lying magnetic loops that have footpoints in neighboring moss regions. These inter-moss loops have also appeared in several AIA pass bands, which are generally associated with temperatures at least an order of magnitude higher than that of the Si IV emission seen in the 1400 angstrom pass band of IRIS. While the emission lines seen in these pass bands can be associated with a range of temperatures, the simultaneous appearance of these loops in IRIS 1400 and AIA 171, 193, and 211 suggests that they are not in ionization equilibrium. To study these structures in detail, we have developed a series of algorithms to automatically detect signal brightenings, or events, on a pixel-by-pixel basis and group them together as structures for each of the above data sets. These algorithms have successfully picked out all activity fitting certain adjustable criteria. The resulting groups of events are then statistically analyzed to determine which characteristics can be used to distinguish the inter-moss loops from all other structures. While a few characteristic histograms reveal that manually selected inter-moss loops lie outside the norm, a combination of several characteristics will need to be used to determine the statistical likelihood that a group of events should be categorized automatically as a loop of interest. The goal of this project is to be able to automatically pick out inter-moss loops from an entire data set and calculate the characteristics that have previously been determined manually, such as length, intensity, and lifetime. We will discuss the algorithms, preliminary results, and current progress of automatic characterization.
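A toy sketch of the pixel-by-pixel detection and grouping idea, not the authors' algorithm: pixels brightening several standard deviations above their temporal mean are flagged, and contiguous flagged voxels in (t, y, x) are merged into candidate structures; the threshold and minimum size stand in for the adjustable criteria.

    import numpy as np
    from scipy import ndimage

    def detect_events(cube, k=3.0, min_size=5):
        """Flag brightenings k sigma above each pixel's temporal mean,
        then group contiguous flagged voxels into candidate structures."""
        mu = cube.mean(axis=0)
        sigma = cube.std(axis=0)
        events = cube > (mu + k * sigma)       # per-pixel brightening test
        labels, n = ndimage.label(events)      # 3-D connected components
        sizes = ndimage.sum(events, labels, range(1, n + 1))
        keep = np.flatnonzero(sizes >= min_size) + 1
        return labels, keep

    cube = np.random.rand(50, 64, 64)          # (time, y, x) synthetic stack
    labels, groups = detect_events(cube)
    print(len(groups), "candidate structures")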
Alterations/corrections to the BRASS Program
NASA Technical Reports Server (NTRS)
Brand, S. N.
1985-01-01
Corrections applied to statistical programs contained in two subroutines of the Bed Rest Analysis Software System (BRASS) are summarized. Two subroutines independently calculate significant values within the BRASS program.
Multiresource forest statistics for Molokai, Hawaii.
Michael G. Buck; Patrick G. Costales; Katharine. McDuffie
1986-01-01
This report summarizes a 1983 multiresource forest inventory of the island of Molokai, Hawaii. Tables of forest area, timber volume, vegetation type, ownership, land class, and wildlife are presented.
Environmental Quality Index webinar
Environmental Quality Index: data reduction approaches to help improve statistical efficiency, summarizing information on the wider environment humans are exposed to (air, water, land, built, and sociodemographic domains) and on human and environmental health.
Automated Tumor Registry for Oncology. A VA-DHCP MUMPS application.
Richie, S
1992-01-01
The VA Automated Tumor Registry for Oncology, Version 2, is a multifaceted, completely automated user-friendly cancer database. Easy to use modules include: Automatic Casefinding; Suspense Files; Abstracting and Printing; Follow-up; Annual Reports; Statistical Reports; Utility Functions.
Wind speed statistics for Goldstone, California, anemometer sites
NASA Technical Reports Server (NTRS)
Berg, M.; Levy, R.; Mcginness, H.; Strain, D.
1981-01-01
An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.
Ben Chaabane, Salim; Fnaiech, Farhat
2014-01-23
Color image segmentation has so far been applied in many areas; hence, many different techniques have recently been developed and proposed. In the medical imaging area, image segmentation may help provide assistance to doctors in following up the disease of a given patient from processed breast cancer images. The main objective of this work is to rebuild and also to enhance each cell from the three component images provided by an input image. Indeed, from an initial segmentation obtained using statistical features and histogram threshold techniques, the resulting segmentation may accurately represent the incomplete and pasted cells and enhance them. This offers real help to doctors, and consequently these cells become clear and easy to count. A novel method for color edge extraction based on statistical features and an automatic threshold is presented. The traditional edge detector, based on the first- and second-order neighborhood describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining the statistical features and the automatic threshold techniques. Finally, on the obtained color edges with specific primitive color, a combination rule is used to integrate the edge results over the three color components. Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. A visual and a numerical assessment based on the probability of correct classification (PC), the false classification (Pf), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in the detection of points which really belong to the cells, and also the ease of counting the number of processed cells. Computer simulations highlight that the proposed method substantially enhances the segmented image, with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching 97.94%. Additionally, the segmentation method may be extended to other medical imaging types having similar properties.
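A compact sketch in the spirit of the described detector, under stated substitutions: the local standard deviation serves as the statistical edge feature, Otsu's method as the automatic threshold, and a simple OR over the three color components as the combination rule.

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def statistical_color_edges(img):
        """Edge map from per-channel local statistics and automatic thresholds."""
        combined = np.zeros(img.shape[:2], dtype=bool)
        for c in range(3):
            chan = img[..., c].astype(float)
            mean = ndimage.uniform_filter(chan, size=3)
            sq = ndimage.uniform_filter(chan ** 2, size=3)
            local_std = np.sqrt(np.maximum(sq - mean ** 2, 0))  # neighborhood statistic
            combined |= local_std > threshold_otsu(local_std)   # automatic threshold
        return combined

    edges = statistical_color_edges(np.random.rand(128, 128, 3))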
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
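A hedged sketch of the kind of estimate described: the total location error is modeled as statistical and systematic parts added in quadrature, with a grid posterior for the systematic term under a Rayleigh model for two-dimensional offsets. The numbers below are invented placeholders, not GBM data.

    import numpy as np

    offsets = np.array([2.1, 4.0, 1.2, 6.5, 3.3])   # deg, GBM minus reference (hypothetical)
    stat = np.array([1.0, 2.0, 0.8, 3.0, 1.5])      # deg, per-burst statistical error

    grid = np.linspace(0.01, 15.0, 500)              # candidate systematic errors (deg)
    var = stat[None, :] ** 2 + grid[:, None] ** 2    # quadrature sum, per burst
    # Rayleigh log-likelihood of each offset magnitude given the total error
    loglike = (np.log(offsets / var) - offsets ** 2 / (2 * var)).sum(axis=1)
    post = np.exp(loglike - loglike.max())
    post /= post.sum()
    print("posterior-mean systematic error: %.2f deg" % (grid * post).sum())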
Brix, Tobias Johannes; Bruland, Philipp; Sarfraz, Saad; Ernsting, Jan; Neuhaus, Philipp; Storck, Michael; Doods, Justin; Ständer, Sonja; Dugas, Martin
2018-01-01
A required step for presenting results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software like SAS software or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement, and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data. The system requires clinical data in the CDISC Operational Data Model format. After a file is uploaded, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality. The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and also provided as a Docker image, which enables easy distribution and installation on local systems. Study data are only stored in the application as long as the calculations are performed, which is compliant with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects. Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.
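A minimal illustration of the per-item summary step, with hypothetical items; the actual tool's ODM parsing, validation, and chart generation are not shown.

    import pandas as pd

    # Hypothetical study items after ODM import.
    df = pd.DataFrame({"age": [54, 61, 47, 70], "sex": ["f", "m", "f", "m"]})
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            print(col, df[col].describe().to_dict())      # n, mean, std, quartiles
        else:
            print(col, df[col].value_counts().to_dict())  # category frequencies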
The analysis of selected orientation methods of architectural objects' scans
NASA Astrophysics Data System (ADS)
Markiewicz, Jakub S.; Kajdewicz, Irmina; Zawieska, Dorota
2015-05-01
Terrestrial laser scanning is commonly used in different areas, inter alia in modelling architectural objects. One of the most important parts of TLS data processing is scan registration, which significantly affects the accuracy of generating high-resolution photogrammetric documentation. This process is time consuming, especially in the case of a large number of scans. It is mostly based on automatic detection and semi-automatic measurement of control points placed on the object. In the case of complicated historical buildings, it is sometimes forbidden to place survey targets on an object, or it may be difficult to distribute survey targets in the optimal way. Such problems encourage the search for new methods of scan registration which eliminate the step of placing survey targets on the object. In this paper the results of the target-based registration method are presented. The survey targets placed on the walls of historical chambers of the Museum of King Jan III's Palace at Wilanów and on the walls of the ruins of the Bishops' Castle in Iłża were used for scan orientation. Several variants of orientation were performed, taking into account different placements and different numbers of survey marks. Afterwards, during subsequent research work, raster images were generated from the scans, and the SIFT and SURF algorithms for image processing were used to automatically search for corresponding natural points. The use of automatically identified points for TLS data orientation was analysed. The results of both methods for TLS data registration were summarized and presented in numerical and graphical form.
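A short sketch of the SIFT correspondence step on scan-derived rasters using OpenCV; the images below are synthetic stand-ins, and Lowe's ratio test replaces whatever filtering the authors applied.

    import cv2
    import numpy as np

    # Synthetic stand-ins for two rasters generated from overlapping scans.
    base = (np.random.rand(256, 256) * 255).astype("uint8")
    img1 = cv2.GaussianBlur(base, (0, 0), 3)
    img2 = img1.copy()

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(len(good), "candidate natural tie points for orientation")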
VanTrump, G.; Miesch, A.T.
1977-01-01
RASS is an acronym for Rock Analysis Storage System and STATPAC, for Statistical Package. The RASS and STATPAC computer programs are integrated into the RASS-STATPAC system for the management and statistical reduction of geochemical data. The system, in its present form, has been in use for more than 9 yr by scores of U.S. Geological Survey geologists, geochemists, and other scientists engaged in a broad range of geologic and geochemical investigations. The principal advantage of the system is the flexibility afforded the user both in data searches and retrievals and in the manner of statistical treatment of data. The statistical programs provide for most types of statistical reduction normally used in geochemistry and petrology, but also contain bridges to other program systems for statistical processing and automatic plotting. © 1977.
WISARD: workbench for integrated superfast association studies for related datasets.
Lee, Sungyoung; Choi, Sungkyoung; Qiao, Dandi; Cho, Michael; Silverman, Edwin K; Park, Taesung; Won, Sungho
2018-04-20
A Mendelian transmission produces phenotypic and genetic relatedness between family members, giving family-based analytical methods an important role in genetic epidemiological studies, from heritability estimation to genetic association analyses. With the advances in genotyping technologies, whole-genome sequence data can be utilized for genetic epidemiological studies, and family-based samples may become more useful for detecting de novo mutations. However, genetic analyses employing family-based samples usually suffer from the complexity of the computational/statistical algorithms, and certain types of family designs, such as incorporating data from extended families, have rarely been used. We present a Workbench for Integrated Superfast Association studies for Related Data (WISARD) programmed in C/C++. WISARD enables fast and comprehensive analysis of SNP-chip and next-generation sequencing data on extended families, with applications from designing genetic studies to summarizing analysis results. In addition, WISARD can automatically run in a fully multithreaded manner, and the integration of R software for visualization makes it more accessible to non-experts. Comparison with existing toolsets showed that WISARD is computationally suitable for integrated analysis of related subjects and outperforms existing toolsets. WISARD has also been successfully utilized to analyze a large-scale massive sequencing dataset of chronic obstructive pulmonary disease (COPD), and we identified multiple genes associated with COPD, which demonstrates its practical value.
A Not-So-Fundamental Limitation on Studying Complex Systems with Statistics: Comment on Rabin (2011)
NASA Astrophysics Data System (ADS)
Thomas, Drew M.
2012-12-01
Although living organisms are affected by many interrelated and unidentified variables, this complexity does not automatically impose a fundamental limitation on statistical inference. Nor need one invoke such complexity as an explanation of the "Truth Wears Off" or "decline" effect; similar "decline" effects occur with far simpler systems studied in physics. Selective reporting and publication bias, and scientists' biases in favor of reporting eye-catching results (in general) or conforming to others' results (in physics) better explain this feature of the "Truth Wears Off" effect than Rabin's suggested limitation on statistical inference.
Gravitational lenses and large scale structure
NASA Technical Reports Server (NTRS)
Turner, Edwin L.
1987-01-01
Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.
ERIC Educational Resources Information Center
United Nations Children's Fund, New York, NY.
This report summarizes the latest available statistics on international achievements in registering each child at birth; immunizing all children; assisting adolescents, particularly girls; and addressing homelessness in wealthy nations. Each of the report's sections contains a commentary, related statistics, and a discussion on progress and…
Beyond Description: Converting Web Site Usage Statistics into Concrete Site Improvement Ideas
ERIC Educational Resources Information Center
Arendt, Julie; Wagner, Cassie
2010-01-01
Web site usage statistics are a widely used tool for Web site development, but libraries are still learning how to use them successfully. This case study summarizes how Morris Library at Southern Illinois University Carbondale implemented Google Analytics on its Web site and used the reports to inform a site redesign. As the main campus library at…
Statistical Annex to Employee Training in the Federal Service, Fiscal Year 1968.
ERIC Educational Resources Information Center
Civil Service Commission, Washington, DC. Bureau of Training.
Tables in this statistical supplement are based on data submitted by Federal agencies in their annual training report to the Civil Service Commission for Fiscal Year 1968 (see document AC 004 019). The first table (Tab A) summarizes all training activity and expenditures for the year, with data arranged by occupational levels (GS01-04 through GS…
ERIC Educational Resources Information Center
Grob, George; And Others
This paper is a compilation of the major federal, legislative, administrative and statistical uses of the terms poverty, low income, and related expressions. The first section summarizes the most commonly used definitions of poverty. These are: (1) the official statistical poverty definition, (2) program eligibility guidelines of the Community…
ERIC Educational Resources Information Center
Reshad, Rosalind S.
One of six volumes summarizing through narrative and statistical tables data collected by the Equal Employment Opportunity Commission in its 1974 survey, this fifth volume details nationwide statistics on the employment status of minorities and women working in township governments. Data from 299 actual units of government in fourteen states were…
ERIC Educational Resources Information Center
Skinner, Alice W.
One of six volumes summarizing through narrative and statistical tables data collected by the Equal Employment Opportunity Commission in its 1974 survey, this fourth volume details the employment status of minorities and women in municipal governments. Based on reports filed by 2,230 municipalities, statistics in this study are designed to…
Kakourou, Alexia; Vach, Werner; Nicolardi, Simone; van der Burgt, Yuri; Mertens, Bart
2016-10-01
Mass spectrometry-based clinical proteomics has emerged as a powerful tool for high-throughput protein profiling and biomarker discovery. Recent improvements in mass spectrometry technology have boosted the potential of proteomic studies in biomedical research. However, the complexity of the proteomic expression introduces new statistical challenges in summarizing and analyzing the acquired data. Statistical methods for optimally processing proteomic data are currently a growing field of research. In this paper we present simple, yet appropriate methods to preprocess, summarize, and analyze high-throughput MALDI-FTICR mass spectrometry data, collected in a case-control fashion, while dealing with the statistical challenges that accompany such data. The known statistical properties of the isotopic distribution of the peptide molecules are used to preprocess the spectra and translate the proteomic expression into a condensed data set. Information on either the intensity level or the shape of the identified isotopic clusters is used to derive summary measures on which diagnostic rules for disease status allocation will be based. Results indicate that both the shape of the identified isotopic clusters and the overall intensity level carry information on the class outcome and can be used to predict the presence or absence of the disease.
A statistical parts-based appearance model of inter-subject variability.
Toews, Matthew; Collins, D Louis; Arbel, Tal
2006-01-01
In this article, we present a general statistical parts-based model for representing the appearance of an image set, applied to the problem of inter-subject MR brain image matching. In contrast with global image representations such as active appearance models, the parts-based model consists of a collection of localized image parts whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between subjects due to anatomical differences, as parts are not expected to occur in all subjects. The model can be learned automatically, discovering structures that appear with statistical regularity in a large set of subject images, and can be robustly fit to new images, all in the presence of significant inter-subject variability. As parts are derived from generic scale-invariant features, the framework can be applied in a wide variety of image contexts, in order to study the commonality of anatomical parts or to group subjects according to the parts they share. Experimentation shows that a parts-based model can be learned from a large set of MR brain images, and used to determine parts that are common within the group of subjects. Preliminary results indicate that the model can be used to automatically identify distinctive features for inter-subject image registration despite large changes in appearance.
m-BIRCH: an online clustering approach for computer vision applications
NASA Astrophysics Data System (ADS)
Madan, Siddharth K.; Dana, Kristin J.
2015-03-01
We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering and updates the clustering decisions when new data come in. Modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent well the different density regions in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches, and we use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
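For flavor, a sketch of the online usage pattern with scikit-learn's stock BIRCH; m-BIRCH's data-driven parameter selection and varying-density handling are not reproduced, and the threshold below is an arbitrary assumption.

    import numpy as np
    from sklearn.cluster import Birch

    # Online clustering: feed the stream chunk by chunk; only the CF-tree
    # summary, not the raw data, stays in memory.
    birch = Birch(threshold=2.0, n_clusters=None)
    for _ in range(10):
        chunk = np.random.rand(1000, 128)   # e.g. a batch of SIFT descriptors
        birch.partial_fit(chunk)
    print(len(birch.subcluster_centers_), "summary subclusters")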
QCS: a system for querying, clustering and summarizing documents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Schlesinger, Judith D.; O'Leary, Dianne P.
2006-10-01
Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system, the Query, Cluster, Summarize (QCS) system, which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
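A rough sketch of the first two QCS stages under stated substitutions: TF-IDF plus truncated SVD approximates Latent Semantic Indexing, and k-means on unit-normalized LSI vectors approximates generalized spherical k-means; the trimming/HMM/QR summarizer is not shown.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.preprocessing import normalize
    from sklearn.cluster import KMeans

    docs = ["cats purr softly", "dogs bark loudly",
            "stocks fell sharply", "markets rallied today"]
    X = TfidfVectorizer().fit_transform(docs)
    Z = normalize(TruncatedSVD(n_components=2).fit_transform(X))  # LSI space, unit norm
    topics = KMeans(n_clusters=2, n_init=10).fit_predict(Z)       # topic clusters
    print(topics)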
World War II War Production: Why Were the B-17 and B-24 Produced in Parallel?
1997-03-01
Winton, A Black Hole in the Wild Blue Yonder: The Need for a Comprehensive Theory of Airpower (Air Command and Staff College War Theory Coursebook) ... statistical comparisons made, of which most are summarized as follows: 1. Statistical data compiled on the utilization of both planes showed that the B-17 was ... easier to maintain and therefore more available for combat. 2. Statistical data on time from aircraft acceptance to delivery in theater showed that
Cancer survival: an overview of measures, uses, and interpretation.
Mariotto, Angela B; Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M
2014-11-01
Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures, with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics organized by their specific purpose: research and policy versus prognosis and clinical decision making. Published by Oxford University Press 2014.
Gala-Alarcón, Paula; Calvo-Lobo, César; Serrano-Imedio, Ana; Garrido-Marín, Alejandro; Martín-Casas, Patricia; Plaza-Manzano, Gustavo
2018-04-18
The purpose of this study was to describe ultrasound (US) changes in muscle thickness produced during automatic activation of the transversus abdominis (TrAb), internal oblique (IO), external oblique (EO), and rectus abdominis (RA), as well as the cross-sectional area (CSA) of the lumbar multifidus (LM), after 1 year of Pilates practice. A 1-year follow-up case series study with a convenience sample of 17 participants was performed. TrAb, IO, EO, and RA thickness, as well as LM CSA changes during automatic tests, were measured by US scanning before and after 1 year of Pilates practice twice per week. Furthermore, quality-of-life changes using the 36-Item Short Form Health Survey and US measurement comparisons for participants who practiced exercises other than Pilates were described. Statistically significant changes were observed for the RA muscle thickness reduction during the active straight leg raise test (P = .007). Participants who practiced other exercises presented a larger LM CSA and IO thickness, which was statistically significant (P < .05). Statistically significant changes were not observed for the domains of the analyzed 36-Item Short Form Health Survey (P > .05). A direct moderate correlation was observed (r = 0.562, P = .019) between TrAb thickness before and after the 1-year follow-up. Long-term Pilates practice may reduce automatic RA thickness activation during the active straight leg raise. Furthermore, LM CSA and IO thickness increases were observed in participants who practiced other exercise types in conjunction with Pilates. Despite the moderate positive correlation observed for TrAb thickness, quality of life did not seem to be modified after long-term Pilates practice. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.
2017-12-01
In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term sustained time series record. Considering that the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means that SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, or polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for online processing. 3. Discounting property: SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we apply SDAR to a synthetic model and to Tomakomai Ocean Bottom Cable (OBC) baseline data to prove the feasibility and advantages of our method.
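A simplified, hedged implementation of the SDAR idea: AR statistics are updated with discounting factor r, and the anomaly score is the negative log-likelihood of each new sample under the current model. The order, r, and initialization are assumptions, and the authors' exact formulation may differ.

    import numpy as np

    def sdar_scores(x, order=4, r=0.01):
        """Sequentially discounting AR learning (simplified sketch)."""
        mu = x[:order].mean()
        C = np.eye(order) * 0.1                 # discounted covariance estimate
        c = np.zeros(order)
        var = x[:order].var() + 1e-6
        scores = np.zeros(len(x))
        for t in range(order, len(x)):
            mu = (1 - r) * mu + r * x[t]        # discounted mean
            past = x[t - order:t][::-1] - mu
            C = (1 - r) * C + r * np.outer(past, past)
            c = (1 - r) * c + r * past * (x[t] - mu)
            w = np.linalg.solve(C + 1e-6 * np.eye(order), c)   # AR coefficients
            err = (x[t] - mu) - w @ past
            var = (1 - r) * var + r * err ** 2  # discounted noise variance
            scores[t] = 0.5 * (np.log(2 * np.pi * var) + err ** 2 / var)
        return scores                           # high score = change-point candidate

    x = np.concatenate([np.random.randn(500), 5 * np.random.randn(200)])
    s = sdar_scores(x)
    print("mean score before/after:", s[100:500].mean().round(2), s[520:].mean().round(2))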
Automatic Cataract Hardness Classification Ex Vivo by Ultrasound Techniques.
Caixinha, Miguel; Santos, Mário; Santos, Jaime
2016-04-01
To demonstrate the feasibility of a new methodology for cataract hardness characterization and automatic classification using ultrasound techniques, different cataract degrees were induced in 210 porcine lenses. A 25-MHz ultrasound transducer was used to obtain acoustical parameters (velocity and attenuation) and backscattering signals. B-Scan and parametric Nakagami images were constructed. Ninety-seven parameters were extracted and subjected to a Principal Component Analysis. Bayes, K-Nearest-Neighbours, Fisher Linear Discriminant and Support Vector Machine (SVM) classifiers were used to automatically classify the different cataract severities. Statistically significant increases with cataract formation were found for velocity, attenuation, mean brightness intensity of the B-Scan images and mean Nakagami m parameter (p < 0.01). The four classifiers showed a good performance for healthy versus cataractous lenses (F-measure ≥ 92.68%), while for initial versus severe cataracts the SVM classifier showed the higher performance (90.62%). The results showed that ultrasound techniques can be used for non-invasive cataract hardness characterization and automatic classification. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
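A schematic version of the classification stage, with synthetic features standing in for the 97 acoustic and image parameters; the component count and SVM settings are assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(210, 97)               # 210 lenses x 97 extracted parameters
    y = np.random.randint(0, 3, 210)          # e.g. healthy / initial / severe cataract
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
    print(cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean())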
Women in Physics in India—2008
NASA Astrophysics Data System (ADS)
Chandra, Nutan; Godbole, Rohini M.; Gupte, Neelima; Jolly, Pratibha; Mehta, Anita; Narasimhan, Shobhana; Rao, Sumathi; Sharma, Vinita; Surya, Sumati
2009-04-01
In this paper we summarize the situation for women in physics in India. We provide some statistics, describe new initiatives, and provide comments collated from the widely varying backgrounds of the authors.
Fehre, Karsten; Plössnig, Manuela; Schuler, Jochen; Hofer-Dückelmann, Christina; Rappelsberger, Andrea; Adlassnig, Klaus-Peter
2015-01-01
The detection of adverse drug events (ADEs) is an important aspect of improving patient safety. The iMedication system employs predefined triggers associated with significant events in a patient's clinical data to automatically detect possible ADEs. We defined four clinically relevant conditions: hyperkalemia, hyponatremia, renal failure, and over-anticoagulation. These are some of the most relevant ADEs in internal medicine and geriatric wards. For each patient, ADE risk scores for all four situations are calculated, compared against a threshold, and judged to be monitored or reported. A ward-based cockpit view summarizes the results.
Automated clinical system for chromosome analysis
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Friedan, H. J.; Johnson, E. T.; Rennie, P. A.; Wall, R. J. (Inventor)
1978-01-01
An automatic chromosome analysis system is provided wherein a suitably prepared slide with chromosome spreads thereon is placed on the stage of an automated microscope. The automated microscope stage is computer operated to move the slide to enable detection of chromosome spreads on the slide. The X and Y location of each chromosome spread that is detected is stored. The computer measures the chromosomes in a spread, classifies them by group or by type and also prepares a digital karyotype image. The computer system can also prepare a patient report summarizing the result of the analysis and listing suspected abnormalities.
Development of public science archive system of Subaru Telescope. 2
NASA Astrophysics Data System (ADS)
Yamamoto, Naotaka; Noda, Sachiyo; Taga, Masatoshi; Ozawa, Tomohiko; Horaguchi, Toshihiro; Okumura, Shin-Ichiro; Furusho, Reiko; Baba, Hajime; Yagi, Masafumi; Yasuda, Naoki; Takata, Tadafumi; Ichikawa, Shin-Ichi
2003-09-01
We report various improvements in a public science archive system, SMOKA (Subaru-Mitaka-Okayama-Kiso Archive system). We have developed a new interface to search observational data of minor bodies in the solar system. In addition, the other improvements (1) to search frames by specifying wavelength directly, (2) to find calibration data sets automatically, (3) to browse data on weather, humidity, and temperature, which provide information on image quality, (4) to provide quick-look images of OHS/CISCO and IRCS, and (5) to include the data from OAO HIDES (HIgh Dispersion Echelle Spectrograph), are also summarized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rueegsegger, Michael B.; Bach Cuadra, Meritxell; Pica, Alessia
Purpose: Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of organs at risk and tumor volume are crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. Methods and Materials: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera as well as of the optic disc position was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed based on leave-one-out tests for all training shapes by measuring dice coefficients and mean segmentation errors between automatic segmentation and manual segmentation by an expert. Results: Cross-validation revealed a dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. Overall, mean segmentation error was found to be 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. Conclusions: Our results show that the solution presented outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape as well as its variability is learned from a training set rather than by making shape assumptions (eg, as with the spherical or elliptical model). Therefore, the model appears to be capable of modeling nonspherically and nonelliptically shaped eyes.
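In sketch form, the core of a statistical shape model is PCA over aligned training shapes; landmark counts and mode weights below are hypothetical, and the active-shape-model image search is not shown.

    import numpy as np

    shapes = np.random.rand(17, 300)          # 17 eyes x (100 landmarks * 3 coords), aligned
    mean = shapes.mean(axis=0)
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = Vt[:5]                            # leading modes of shape variation
    b = np.array([1.0, -0.5, 0.0, 0.0, 0.0])  # mode weights (clamped in practice)
    new_shape = mean + b @ modes              # plausible eye within the training range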
R-on-1 automatic mapping: A new tool for laser damage testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hue, J.; Garrec, P.; Dijon, J.
1996-12-31
Laser damage threshold measurement is statistical in nature. For a commercial qualification or for a user, the threshold determined by the weakest point is a satisfactory characterization. When a new coating is designed, threshold mapping is very useful. It enables the technology to be improved and followed more accurately. Different statistical parameters such as the minimum, maximum, average, and standard deviation of the damage threshold as well as spatial parameters such as the threshold uniformity of the coating can be determined. Therefore, in order to achieve a mapping, all the tested sites should give data. This is the major interest of the R-on-1 test, in spite of the fact that the laser damage threshold obtained by this method may differ from the 1-on-1 test (smaller or greater). Moreover, on the damage laser test facility, the beam size is smaller (diameters of a few hundred micrometers) than the characteristic sizes of the components in use (diameters of several centimeters up to one meter). Hence, a laser damage threshold mapping appears very interesting, especially for applications linked to large optical components like the Megajoule project or the National Ignition Facility (NIF). On the test bench used, damage detection with a Nomarski microscope and scattered light measurement are almost equivalent. Therefore, it becomes possible to automatically detect on line the first defects induced by YAG irradiation. Scattered light mappings and laser damage threshold mappings can therefore be achieved using an X-Y automatic stage (where the test sample is located). The major difficulties due to the automatic capabilities are shown. These characterizations are illustrated at 355 nm. The numerous experiments performed show different kinds of scattering curves, which are discussed in relation to the damage mechanisms.
Balanced states of mind in psychopathology and psychological well-being.
Wong, Shyh Shin
2010-08-01
The balanced states of mind (BSOM) model proposes that coping with stress and psychological well-being is a function of the BSOM ratio of positive thoughts to the sum of positive and negative thoughts. Based on different BSOM ratios, different BSOM categories are constructed to quantitatively differentiate levels of coping with stress and psychological well-being. The cognitive content-specificity hypothesis states that there are unique themes of semantic content in self-reported automatic thoughts particular to depression or anxiety. This study investigated the BSOM model and its cognitive content-specificity for depression, anxiety, anger, stress, life satisfaction, and happiness, based on negative and positive automatic thoughts. Three hundred and ninety-eight college students from Singapore participated in this study. First, BSOM ratio and positive automatic thoughts were positively correlated with life satisfaction and happiness, and negatively correlated with stress, anxiety, depression, and anger. In contrast, negative automatic thoughts were positively correlated with stress, anxiety, depression, and anger, and negatively correlated with life satisfaction and happiness. Second, levels of psychopathology and psychological well-being were statistically differentiable among the BSOM categories for depression, happiness, perceived stress, and life satisfaction; and less statistically differentiable among the BSOM categories for anxiety and anger, as expected based on the BSOM model and cognitive content-specificity hypothesis. Third, the results were more supportive of the BSOM model for depression, followed by happiness, perceived stress, life satisfaction, anxiety, and anger in terms of percentage of variance accounted for by BSOM categories, as expected based on the cognitive content-specificity hypothesis. Taken together, the results suggested that the more moderately positive thoughts one has (balanced by negative thoughts), the better mental health outcomes one has. Implications and limitations of these findings are discussed.
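The BSOM ratio itself is a one-line computation; the published cut points between BSOM categories are not reproduced here.

    def bsom_ratio(positive_thoughts, negative_thoughts):
        """BSOM ratio: positive thoughts over all valenced thoughts."""
        return positive_thoughts / (positive_thoughts + negative_thoughts)

    print(bsom_ratio(62, 38))   # 0.62; higher ratios track better mental health outcomes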
Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe
2011-03-01
This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas of 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei
2013-03-01
An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% ± 2.3% and 83.6% ± 7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
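The reported Dice score has a standard definition, sketched here for two binary masks; the masks below are random placeholders.

    import numpy as np

    def dice(a, b):
        """Dice similarity: twice the overlap over the summed mask sizes."""
        a, b = a.astype(bool), b.astype(bool)
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    a = np.random.rand(64, 64) > 0.5
    b = np.random.rand(64, 64) > 0.5
    print(round(dice(a, b), 3))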
Real-Time Ada Demonstration Project
1989-05-31
automatic self destruct (due to concerns about countermeasures). 12.2.4 Battle Status: battlefield conditions and statistics shall be continuously displayed. ... Receive_Control(Mouse_Data.CON2_status, RESPONSE); -- wait for response. if RESPONSE = Mouse_Data.data_new then Receive_Control(Mouse_Data.CM2_data, RESPONSE); -- clear out
The Electronic Supervisor: New Technology, New Tensions.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Office of Technology Assessment.
Computer technology has made it possible for employers to collect and analyze management information about employees' work performance and equipment use. There are three main tools for supervising office activities. Computer-based (electronic) monitoring systems automatically record statistics about the work of employees using computer or…
Assessing Creative Problem-Solving with Automated Text Grading
ERIC Educational Resources Information Center
Wang, Hao-Chuan; Chang, Chun-Yen; Li, Tsai-Yen
2008-01-01
The work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit…
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.
The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and the radiological survey. The radiological inventory for all components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory automatically. Results: A computer application has been developed that estimates the radiological inventory from the radiological measurements of the characterization program. This application includes the statistical functions needed for estimating central tendency and variability, e.g., mean, median, variance, confidence intervals, and variance coefficients. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid for decision making in future sampling surveys.
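As a hedged sketch of the kind of statistical functions such an application would include (not the actual software), the following computes central tendency, variability, and a t-based confidence interval for a hypothetical set of survey measurements.

```python
import numpy as np
from scipy import stats

def inventory_summary(measurements, confidence=0.95):
    """Central tendency and variability for radiological survey data."""
    x = np.asarray(measurements, dtype=float)
    n = x.size
    mean, median, var = x.mean(), np.median(x), x.var(ddof=1)
    sem = stats.sem(x)  # standard error of the mean
    ci = stats.t.interval(confidence, n - 1, loc=mean, scale=sem)
    return {"n": n, "mean": mean, "median": median, "variance": var,
            "variance_coefficient": x.std(ddof=1) / mean, "ci95": ci}

# Hypothetical activity concentration samples (e.g. Bq/g).
print(inventory_summary([0.8, 1.1, 0.9, 1.4, 1.0, 1.2]))
```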
Low-flow statistics of selected streams in Chester County, Pennsylvania
Schreffler, Curtis L.
1998-01-01
Low-flow statistics for many streams in Chester County, Pa., were determined on the basis of data from 14 continuous-record streamflow stations in Chester County and data from 1 station in Maryland and 1 station in Delaware. The stations in Maryland and Delaware are on streams that drain large areas within Chester County. Streamflow data through the 1994 water year were used in the analyses. The low-flow statistics summarized are the 1Q10, 7Q10, 30Q10, and harmonic mean. Low-flow statistics were estimated at 34 partial-record stream sites throughout Chester County.
Exceedance statistics of accelerations resulting from thruster firings on the Apollo-Soyuz mission
NASA Technical Reports Server (NTRS)
Fichtl, G. H.; Holland, R. L.
1983-01-01
Spacecraft acceleration resulting from firings of vernier control system thrusters is an important consideration in the design, planning, execution, and post-flight analysis of laboratory experiments in space. In particular, scientists and technologists involved with the development of experiments to be performed in space in many instances require statistical information on the magnitude and rate of occurrence of spacecraft accelerations. Typically, these accelerations are stochastic in nature, so it is useful to characterize them in statistical terms. Statistics of spacecraft accelerations are summarized. Previously announced in STAR as N82-12127.
Efficient content-based low-altitude images correlated network and strips reconstruction
NASA Astrophysics Data System (ADS)
He, Haiqing; You, Qi; Chen, Xiaoyong
2017-01-01
Manual intervention is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry; clearly, this is at odds with fully automatic photogrammetric data processing. In this paper, we explore a content-based approach to strip reconstruction that requires no manual intervention or external information. Feature descriptors capturing local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded by the TF-IDF numerical statistic to generate a new representation for each low-altitude image. An image correlation network is then reconstructed by similarity measurement, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
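A minimal sketch of the TF-IDF encoding and similarity measure described above, assuming visual-word counts have already been produced by a vocabulary tree; the counts shown are toy values, not the paper's data.

```python
import numpy as np

def tfidf_matrix(word_counts):
    """Rows: images; columns: visual-word counts from the vocabulary tree."""
    counts = np.asarray(word_counts, dtype=float)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                 # document frequency per word
    idf = np.log(counts.shape[0] / np.maximum(df, 1))
    return tf * idf

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Three images described by counts over a 6-word toy visual vocabulary.
X = tfidf_matrix([[4, 0, 1, 0, 2, 0],
                  [3, 1, 0, 0, 2, 1],
                  [0, 5, 0, 3, 0, 2]])
# Images 0 and 1 share visual words, so they should correlate more strongly.
print(cosine_similarity(X[0], X[1]), cosine_similarity(X[0], X[2]))
```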
A recent advance in the automatic indexing of the biomedical literature.
Névéol, Aurélie; Shooshan, Sonya E; Humphrey, Susanne M; Mork, James G; Aronson, Alan R
2009-10-01
The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE, the largest bibliographic database of biomedical citations. Indexers at the US National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM's Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading/subheading pair recommendations were assessed independently and combined. The best combination achieves 48% precision and 30% recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is estimating the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistics. In the post-processing analysis, a box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and their correctness across spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments comparing clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods. The results show that Otsu's and GE methods both outperform the others for Formosat-2 imagery. With Otsu's method selected as the thresholding method, our proposed ACCA method successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
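For illustration (not the authors' implementation), a minimal sketch of Otsu's thresholding applied to a synthetic single-band image with a bright cloud-like population; the histogram-based formula maximizes the between-class variance.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # cumulative first moment
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

# Synthetic band: dark "ground" pixels plus a bright "cloud" population.
rng = np.random.default_rng(1)
band = np.clip(np.concatenate([rng.normal(60, 15, 5000),
                               rng.normal(200, 10, 1000)]),
               0, 255).astype(np.uint8).reshape(100, 60)
t = otsu_threshold(band)
print(f"threshold={t}, cloud fraction={(band > t).mean():.2%}")
```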
Hannan, Mahammad A.; Hussein, Hussein A.; Mutashar, Saad; Samad, Salina A.; Hussain, Aini
2014-01-01
With the development of communication technologies, the use of wireless systems in biomedical implanted devices has become very useful. Bio-implantable devices are electronic devices used for treatment and monitoring, such as brain implants, pacemakers, cochlear implants, and retinal implants. An inductive coupling link is used to transmit power and data between the primary and secondary sides of the biomedical implanted system, in which an efficient power amplifier is needed to ensure the best data transmission rates and low power losses. However, the efficiency of implanted devices depends on the circuit design, the controller, load variation, and changes in the radio frequency coils' mutual displacement and coupling coefficients. This paper provides a comprehensive survey of the power amplifier classes and their characteristics, efficiency, and controller techniques that have been used in bio-implants. Automatic frequency control techniques used in biomedical implants, such as gate drive switching control, closed-loop power control, voltage-controlled oscillators, capacitor control, and microcontroller frequency control, are explained. Most of these techniques keep the resonance frequency stable during transcutaneous power transfer between the external coil and the coil implanted inside the body. Detailed information on the controller techniques, including carrier frequency, power efficiency, coil displacement, power consumption, supply voltage, and CMOS chip, is investigated and summarized in the provided tables. From this rigorous review, it is observed that existing automatic frequency controller technologies are more or less capable of performing well in implanted devices; however, the systems are still not up to the mark. Accordingly, current challenges and problems of typical automatic frequency controller techniques for power amplifiers are illustrated, together with brief suggestions and a discussion concerning the future progress of implanted device research. This review will hopefully lead to increasing efforts towards the development of low-power, highly efficient, high-data-rate, and reliable automatic frequency controllers for implanted devices. PMID:25615728
Mutual interference between statistical summary perception and statistical learning.
Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B
2011-09-01
The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.
NASA Astrophysics Data System (ADS)
Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.
2013-12-01
In charge of intensity estimation in France, BCSF has collected and manually analyzed more than 47,000 individual online macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS-98 scale. The reliability of automatic intensity estimation is important, as such estimates are used today for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer practical considering the increasing number of testimonies at BCSF; an expert, however, can take incoherent answers into account. We tested several automatic methods (the USGS algorithm, correlation coefficients, thumbnails) (Sira et al. 2013, IASPEI) and compared them with the expert SQIs. These methods gave moderate scores (50 to 60% of SQIs correctly determined, and a further 35 to 40% off by plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL), and 3) support vector machines (SVMs). The first two are standard methods, while the third is more recent. These methods can be applied because the BCSF already has more than 47,000 forms in its database and because the questions and answers are well adapted to statistical analysis. The ranking models can then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQIs can be evaluated thanks to the fact that each definitive BCSF SQI is determined by expert analysis. We compare the SQIs obtained by these methods on our database and discuss the coherency and variations between the automatic and manual processes. These methods lead to high scores, with up to 85% of the forms correctly classified and most of the remaining forms off by only one intensity degree. This allows us to use the ranking methods as the best automatic methods for fast SQI estimation and fast shakemap production. The next step in improving these methods will be to identify explanations for the forms not classified at the correct value, and a way to select the few remaining forms that should be analyzed by an expert. Note that beyond intensity VI, online questionnaires are insufficient and a field survey is indispensable to estimate intensity. For such surveys in France, BCSF leads a macroseismic intervention group (GIM).
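A hedged sketch of the first ranking method, multinomial (softmax) logistic regression, on synthetic questionnaire encodings; the feature encodings, intensity range, and resulting scores are illustrative, not BCSF data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
# Encoded answers: e.g. felt shaking (0-3), objects moved (0-3), etc.
X = rng.integers(0, 4, size=(n, 6)).astype(float)
# Synthetic "expert SQI" labels from II to VI, loosely driven by the answers.
y = np.clip(np.round(2 + X.mean(axis=1) + rng.normal(0, 0.4, n)), 2, 6).astype(int)

clf = LogisticRegression(max_iter=1000)  # softmax over intensity classes
clf.fit(X[:800], y[:800])
pred = clf.predict(X[800:])
exact = (pred == y[800:]).mean()
within1 = (np.abs(pred - y[800:]) <= 1).mean()
print(f"exact: {exact:.0%}, within +/-1 intensity degree: {within1:.0%}")
```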
Fürbass, F; Ossenblok, P; Hartmann, M; Perko, H; Skupch, A M; Lindinger, G; Elezi, L; Pataraia, E; Colon, A J; Baumgartner, C; Kluge, T
2015-06-01
A method for automatic detection of epileptic seizures in long-term scalp-EEG recordings, called EpiScan, is presented. EpiScan is used as an alarm device to notify the medical staff of epilepsy monitoring units (EMUs) in case of a seizure. A prospective multi-center study was performed in three EMUs including 205 patients. A comparison between EpiScan and the Persyst seizure detector on the prospective data is presented. In addition, detection results of EpiScan on retrospective EEG data of 310 patients and on the publicly available CHB-MIT dataset are shown. A detection sensitivity of 81% was reached for unequivocal electrographic seizures, with a false alarm rate of only 7 per day. No statistically significant differences in detection sensitivity were found between the centers. The comparison with the Persyst seizure detector showed a lower false alarm rate for EpiScan, but the difference was not statistically significant. The automatic seizure detection method EpiScan showed high sensitivity and a low false alarm rate in a prospective multi-center study on a large number of patients. Its application as a seizure alarm device in EMUs becomes feasible and will raise the efficiency of video-EEG monitoring and the safety levels of patients. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound (IVUS) images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness, and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as powerful tools for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with a dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening, and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Statistical methodology for the analysis of dye-switch microarray experiments
Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques
2008-01-01
Background In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion The UP procedure we propose as an alternative to usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965
Timber resource statistics for southwest Washington.
Patricia M. Bassett; Daniel D. Oswald
1981-01-01
This report summarizes a 1978 timber-resource inventory of six counties in southwest Washington: Clark, Cowlitz, Lewis, Pacific, Skamania, and Wahkiakum. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Forest statistics for eastern Oregon, 1977.
Thomas O. Farrenkopf
1982-01-01
This report summarizes a 1977 inventory of timber resources in 17 Oregon counties east of the crest of the Cascade Range. Detailed data on forest area, timber volume, growth, mortality, and harvest are presented.
NPS national transit inventory, 2013
DOT National Transportation Integrated Search
2014-07-31
This document summarizes key highlights from the National Park Service (NPS) 2013 National Transit Inventory, and presents data for NPS transit systems system-wide. The document discusses statistics related to ridership, business models, fleet charac...
Preliminary timber resource statistics for southwest Washington.
Colin D. MacLean; Janet L. Ohmann; Patricia M. Bassett
1991-01-01
This report summarizes a 1988 timber inventory of six counties in southwest Washington: Clark, Cowlitz, Lewis, Pacific, Skamania, and Wahkiakum. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
ERIC Educational Resources Information Center
US Department of Education, 2008
2008-01-01
The No Child Left Behind Blue Ribbon Schools Program honors public and private K-12 schools that are either academically superior in their state or that demonstrate dramatic gains in student achievement. This document includes 2008 statistical information for public and private blue ribbon schools for 45 states. In addition to summarized totals,…
ERIC Educational Resources Information Center
United Nations Children's Fund, New York, NY.
This report summarizes the latest available statistics on international achievements in child survival, health, nutrition, education, water and sanitation, and the plight of women. Each section contains a commentary, related statistics, and a discussion on progress and disparity in the section's particular area. Following a foreword by United…
A clinical research analytics toolkit for cohort study.
Yu, Yiqin; Zhu, Yu; Sun, Xingzhi; Tao, Ying; Zhang, Shuo; Xu, Linhao; Pan, Yue
2012-01-01
This paper presents a clinical informatics toolkit that can assist physicians in conducting cohort studies effectively and efficiently. The toolkit has three key features: 1) support for procedures defined in epidemiology, 2) recommendation of statistical methods in data analysis, and 3) automatic generation of research reports. On one hand, our system can help physicians control research quality by leveraging integrated knowledge of epidemiology and medical statistics; on the other hand, it can improve productivity by reducing the complexities physicians face during their cohort studies.
Murray, Andrea K; Feng, Kaiyan; Moore, Tonia L; Allen, Phillip D; Taylor, Christopher J; Herrick, Ariane L
2011-08-01
Nailfold capillaroscopy is well established in screening patients with Raynaud's phenomenon for underlying SSc-spectrum disorders, by identifying abnormal capillaries. Our aim was to compare semi-automatic feature measurement from newly developed software with manual measurements, and determine the degree to which semi-automated data allows disease group classification. Images from 46 healthy controls, 21 patients with PRP and 49 with SSc were preprocessed, and semi-automated measurements of intercapillary distance and capillary width, tortuosity, and derangement were performed. These were compared with manual measurements. Features were used to classify images into the three subject groups. Comparison of automatic and manual measures for distance, width, tortuosity, and derangement had correlations of r=0.583, 0.624, 0.495 (p<0.001), and 0.195 (p=0.040). For automatic measures, correlations were found between width and intercapillary distance, r=0.374, and width and tortuosity, r=0.573 (p<0.001). Significant differences between subject groups were found for all features (p<0.002). Overall, 75% of images correctly matched clinical classification using semi-automated features, compared with 71% for manual measurements. Semi-automatic and manual measurements of distance, width, and tortuosity showed moderate (but statistically significant) correlations. Correlation for derangement was weaker. Semi-automatic measurements are faster than manual measurements. Semi-automatic parameters identify differences between groups, and are as good as manual measurements for between-group classification. © 2011 John Wiley & Sons Ltd.
Information processing requirements for on-board monitoring of automatic landing
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Karmarkar, J. S.
1977-01-01
A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary
NASA Astrophysics Data System (ADS)
Liu, Wuying; Wang, Lin
2018-03-01
Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of Korean-Chinese domain dictionaries and presents a novel unsupervised construction method based on natural annotation in raw corpora. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill a bilingual domain dictionary by retrieving the simplified Chinese words in an external Chinese domain dictionary. The experimental results show that our method can automatically and efficiently build multiple Korean-Chinese domain dictionaries.
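For illustration only: a minimal sketch of extraction from natural annotations, assuming the annotations are Hanja (Chinese characters) in parentheses after Hangul words; the sentence is invented and the traditional-to-simplified table is a toy stand-in for a full mapping.

```python
import re

# Hypothetical raw sentence: Korean terms annotated with Hanja in parentheses.
text = "통계학(統計學)은 자료(資料)를 다루는 학문이다."

# Natural annotation: a Hangul word immediately followed by CJK in parentheses.
pair_re = re.compile(r"([\uac00-\ud7a3]+)\(([\u4e00-\u9fff]+)\)")

# Toy traditional->simplified table; a real system needs a complete mapping.
T2S = {"統": "统", "計": "计", "學": "学", "資": "资", "料": "料"}

def simplify(s: str) -> str:
    return "".join(T2S.get(ch, ch) for ch in s)

dictionary = {ko: simplify(zh) for ko, zh in pair_re.findall(text)}
print(dictionary)  # {'통계학': '统计学', '자료': '资料'}
```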
Automatic Adviser on Mobile Objects Status Identification and Classification
NASA Astrophysics Data System (ADS)
Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Saryan, A. S.
2018-05-01
A mobile object status identification task is defined within image discrimination theory. It is proposed to classify objects into three classes: the object is operational; maintenance is required; and the object should be removed from the production process. Two methods were developed to construct the separating boundaries between the designated classes: a) using statistical information on the observed movements of the research objects, and b) based on regulatory documents and expert commentary. A simulation of Automatic Adviser operation and a complex for analyzing the operation results were synthesized. The research results are illustrated using a specific example of cuts rolling from the hump yard. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Frollo, Ivan
2017-12-01
The paper focuses on two methods for evaluating the success of speech signal enhancement, for speech recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables comparison based on statistical analysis using ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with standard evaluation based on the listening test method.
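A minimal sketch of the GMM-based evaluation idea (not the authors' feature set): one Gaussian mixture is fit per class, and a recording's features are assigned to the class with the higher average log-likelihood; the 2-D features here are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Toy acoustic feature vectors for "noisy" vs "enhanced" recordings.
X_noisy = rng.normal([0.0, 1.0], 0.5, size=(200, 2))
X_clean = rng.normal([1.5, 0.2], 0.5, size=(200, 2))

# One GMM per class; classify by maximum average log-likelihood.
gmm_noisy = GaussianMixture(n_components=2, random_state=0).fit(X_noisy)
gmm_clean = GaussianMixture(n_components=2, random_state=0).fit(X_clean)

def classify(x):
    x = np.atleast_2d(x)
    return "enhanced" if gmm_clean.score(x) > gmm_noisy.score(x) else "noisy"

print(classify([1.4, 0.1]), classify([0.1, 1.2]))
```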
Cross correlation anomaly detection system
NASA Technical Reports Server (NTRS)
Micka, E. Z. (Inventor)
1975-01-01
This invention provides a method for automatically inspecting the surface of an object, such as an integrated circuit chip, whereby the data obtained from light reflected off the surface by a scanning light beam are automatically compared with data representing acceptable values for each unique surface. A signal output is provided indicating acceptance or rejection of the chip. Acceptance is based on predetermined statistical confidence intervals calculated from known good regions of the object being tested, or from their representative values. The method can utilize a known good chip, a photographic mask from which the IC was fabricated, or a computer-stored replica of each pattern being tested.
Code of Federal Regulations, 2010 CFR
2010-04-01
... reports with appropriate statistical methodology in accordance with § 820.100. (c) Each manufacturer who... chapter shall automatically consider the report a complaint and shall process it in accordance with the... device serviced; (2) Any device identification(s) and control number(s) used; (3) The date of service; (4...
Modeling and Simulation with INS.
ERIC Educational Resources Information Center
Roberts, Stephen D.; And Others
INS, the Integrated Network Simulation language, puts simulation modeling into a network framework and automatically performs such programming activities as placing the problem into a next event structure, coding events, collecting statistics, monitoring status, and formatting reports. To do this, INS provides a set of symbols (nodes and branches)…
2011-06-01
implementing, and evaluating many feature selection algorithms. Mucciardi and Gose compared seven different techniques for choosing subsets of pattern... [1] A. Mucciardi and E. Gose, "A comparison of seven techniques for...
The Mucciardi-Gose Clustering Algorithm and Its Applications in Automatic Pattern Recognition.
A procedure known as the Mucciardi-Gose clustering algorithm, CLUSTR, for determining the geometrical or statistical relationships among groups of N... A discussion of clustering algorithms is given; the particular advantages of the Mucciardi-Gose procedure are described. The mathematical basis for, and the...
In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations.
Dutta, Soumya; Chen, Chun-Ming; Heinlein, Gregory; Shen, Han-Wei; Chen, Jen-Ping
2017-01-01
Study of flow instability in turbine engine compressors is crucial to understanding the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed at NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive volume of simulation data makes post hoc analysis prohibitive in both storage and I/O time. To address these issues and allow experts to perform scalable stall analysis, we have designed an in situ distribution-guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall and lets the expert conceive new hypotheses. Furthermore, verification of the hypotheses and exploratory visualization using the summarized data are realized with probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis.
Algorithm for Video Summarization of Bronchoscopy Procedures
2011-01-01
Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract caused by breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference, and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
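As an illustrative stand-in for the paper's "non-informative" frame detection (the actual criteria are not reproduced here), a common sharpness test is the variance of the Laplacian; the threshold and frame-sampling step below are assumptions.

```python
import cv2

def is_informative(frame_bgr, blur_threshold=100.0):
    """Flag blurry/unfocused frames via the variance of the Laplacian."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    focus = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus >= blur_threshold

def summarize(video_path, step=10):
    """Keep the index of every step-th frame that passes the sharpness test."""
    cap = cv2.VideoCapture(video_path)
    kept, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0 and is_informative(frame):
            kept.append(idx)
        idx += 1
    cap.release()
    return kept

# Usage (hypothetical file): print(summarize("bronchoscopy.avi"))
```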
Evaluating topic model interpretability from a primary care physician perspective.
Arnold, Corey W; Oh, Andrea; Chen, Shawn; Speier, William
2016-02-01
Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
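For illustration (not the paper's algorithm), a minimal sketch of context clustering that groups consecutive shots into scenes whenever adjacent shot descriptions are sufficiently similar; the features and threshold are toy values.

```python
import numpy as np

def cluster_consecutive_shots(shot_features, sim_threshold=0.8):
    """Group consecutive shots into scenes when neighbors are similar."""
    f = np.asarray(shot_features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    scenes, current = [], [0]
    for i in range(1, len(f)):
        if f[i] @ f[i - 1] >= sim_threshold:
            current.append(i)          # same context: extend the scene
        else:
            scenes.append(current)     # context change: start a new scene
            current = [i]
    scenes.append(current)
    return scenes

# Toy shot descriptors (e.g. annotation/term vectors).
shots = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.95, 0.05], [0, 0, 1]]
print(cluster_consecutive_shots(shots))  # [[0, 1], [2, 3], [4]]
```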
Overholser, Brian R; Sowinski, Kevin M
2007-12-01
Biostatistics is the application of statistics to biologic data. The field of statistics can be broken down into 2 fundamental parts: descriptive and inferential. Descriptive statistics are commonly used to categorize, display, and summarize data. Inferential statistics can be used to make predictions based on a sample obtained from a population or some large body of information. It is these inferences that are used to test specific research hypotheses. This 2-part review will outline important features of descriptive and inferential statistics as they apply to commonly conducted research studies in the biomedical literature. Part 1 in this issue will discuss fundamental topics of statistics and data analysis. Additionally, some of the most commonly used statistical tests found in the biomedical literature will be reviewed in Part 2 in the February 2008 issue.
Crashes & Fatalities Related To Driver Drowsiness/Fatigue
DOT National Transportation Integrated Search
1994-11-01
This report summarizes recent national statistics on the incidence and characteristics of crashes involving driver fatigue, drowsiness, or "asleep-at-the-wheel." For the purposes of this report, these terms are considered synonymous. Principal data...
Timber resource statistics for eastern Washington.
Patricia M. Bassett; Daniel D. Oswald
1983-01-01
This report summarizes a 1980 timber resource inventory of the 16 forested counties in Washington east of the crest of the Cascade Range. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
NPS National Transit Inventory and Performance Report, 2014
DOT National Transportation Integrated Search
2015-09-09
This document summarizes key highlights and performance measures from the National Park Service (NPS) 2014 National Transit Inventory, and presents data for NPS transit systems system-wide. The document discusses statistics related to ridership, busi...
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
Dalal, Ankur; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Gupta, Kapil; Calcara, David A.; Xu, Jin; Shrestha, Bijaya; Drugge, Rhett; Malters, Joseph M.; Perry, Lindall A.
2011-01-01
Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), permits visualization of features of pigmented melanocytic neoplasms that are not discernible by examination with the naked eye. White areas, prominent in early malignant melanoma and melanoma in situ, contribute to early detection of these lesions. An adaptive detection method has been investigated to identify white and hypopigmented areas based on lesion histogram statistics. Using the Euclidean distance transform, the lesion is segmented into concentric deciles. Overlays of the white areas on the lesion deciles are determined. Calculated features of automatically detected white areas include lesion decile ratios, normalized number of white areas, absolute and relative size of the largest white area, relative size of all white areas, and white area eccentricity, dispersion, and irregularity. Using a back-propagation neural network, the white area statistics yield over 95% accuracy in discriminating melanomas from benign nevi. White and hypopigmented areas in melanomas tend to be central or paracentral. The four most powerful features on multivariate analysis are lesion decile ratios. Automatic detection of white and hypopigmented areas in melanoma can be accomplished using lesion statistics. A neural network can achieve good discrimination of melanomas from benign nevi using these areas. Lesion decile ratios are useful white area features. PMID:21074971
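A hedged sketch of the concentric-decile segmentation via the Euclidean distance transform, using a synthetic circular lesion mask; the decile boundaries here are simple equal fractions of the maximum interior distance, which may differ from the paper's exact definition.

```python
import numpy as np
from scipy import ndimage

def lesion_deciles(mask: np.ndarray) -> np.ndarray:
    """Label each lesion pixel with a concentric decile (1=rim .. 10=center)."""
    dist = ndimage.distance_transform_edt(mask)   # distance to lesion border
    deciles = np.zeros(mask.shape, dtype=int)
    inside = mask.astype(bool)
    deciles[inside] = np.ceil(10 * dist[inside] / dist.max()).astype(int)
    return deciles

# Synthetic circular lesion mask.
mask = np.zeros((101, 101), dtype=bool)
yy, xx = np.mgrid[:101, :101]
mask[(yy - 50) ** 2 + (xx - 50) ** 2 <= 45 ** 2] = True
dec = lesion_deciles(mask)
# White-area pixels could then be tallied per decile to form decile ratios.
print([int((dec == k).sum()) for k in range(1, 11)])
```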
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
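For illustration, a minimal sketch of an autoregressive residual-based damage indicator, assuming a baseline AR model and a variance-ratio control rule; the model order, window size, and 30% drift rule are assumptions, not the paper's settings.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
baseline = rng.standard_normal(2000)            # undamaged acceleration record
test = np.concatenate([rng.standard_normal(500),
                       1.8 * rng.standard_normal(500)])  # simulated damage

order = 8
fit = AutoReg(baseline, lags=order).fit()       # AR model of the healthy state
sigma = fit.resid.std()

def ar_residuals(x, params, order):
    """One-step-ahead prediction errors under the baseline AR model."""
    c, phi = params[0], params[1:]
    pred = np.array([c + phi @ x[t - order:t][::-1]
                     for t in range(order, len(x))])
    return x[order:] - pred

r = ar_residuals(test, fit.params, order)
# Control-chart style rule: flag windows whose residual spread drifts >30%.
window = 50
flags = [abs(r[i:i + window].std() / sigma - 1) > 0.3
         for i in range(0, len(r) - window + 1, window)]
print(flags)  # later windows, covering the damaged half, should be flagged
```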
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... compensated for automatically and statistical process control demonstrates equal or better quality control... calibrations, adjustments, and quality control-EPA 91. 85.2233 Section 85.2233 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE...
Scalable descriptive and correlative statistics with Titan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, David C.; Pebay, Philippe Pierre
This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
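Although the report illustrates its engines with C++ snippets, the scalability idea can be sketched in a few lines of Python: partial aggregates (n, mean, M2) computed independently per processor merge exactly via the pairwise update of Chan et al.; the sequential reduction below stands in for a parallel tree reduction.

```python
import numpy as np

def merge(stats_a, stats_b):
    """Combine (n, mean, M2) partial aggregates (Chan et al. pairwise update).
    M2 is the sum of squared deviations; variance = M2 / (n - 1)."""
    n_a, mean_a, m2_a = stats_a
    n_b, mean_b, m2_b = stats_b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta ** 2 * n_a * n_b / n
    return (n, mean, m2)

def partial(x):
    x = np.asarray(x, float)
    return (x.size, x.mean(), ((x - x.mean()) ** 2).sum())

rng = np.random.default_rng(5)
data = rng.normal(3.0, 2.0, 10_000)
chunks = np.array_split(data, 200)          # one chunk per "processor"
agg = partial(chunks[0])
for c in chunks[1:]:
    agg = merge(agg, partial(c))
n, mean, m2 = agg
print(mean, m2 / (n - 1), data.mean(), data.var(ddof=1))  # should agree
```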
[Coding Causes of Death with IRIS Software. Impact in Navarre Mortality Statistic].
Floristán Floristán, Yugo; Delfrade Osinaga, Josu; Carrillo Prieto, Jesus; Aguirre Perez, Jesus; Moreno-Iribas, Conchi
2016-08-02
There are few studies analyzing the changes in mortality statistics that result from using IRIS software, an automatic system for coding multiple causes of death and selecting the underlying cause of death, compared to manual coding. This study evaluated the impact of the use of IRIS on the Navarre mortality statistics. We double-coded 5,060 death certificates corresponding to residents of Navarre in 2014. We calculated the coincidence between the two codings for ICD-10 chapters and for the list of causes of the Spanish National Statistics Institute (INE-102), and we estimated the change in mortality rates. IRIS automatically coded 90% of death certificates. The coincidence to 4 characters and within the same ICD-10 chapter was 79.1% and 92.0%, respectively. Furthermore, the coincidence with the short INE-102 list was 88.3%. Higher coincidence was found in death certificates of people under 65 years. In comparison with manual coding, there was an increase in deaths attributed to endocrine diseases (31%), mental disorders (19%), and diseases of the nervous system (9%), while a decrease in genitourinary system diseases was observed (21%). The coincidence between IRIS and manual coding at the ICD-10 chapter level was 9 out of 10 deaths, similar to what is observed in other studies. The implementation of IRIS has led to an increase in endocrine diseases, especially diabetes and hyperlipidaemia, and in mental disorders, especially dementias.
NASA Technical Reports Server (NTRS)
Calise, A. J.; Kadushin, I.; Kramer, F.
1981-01-01
The current status of research on the application of variable structure system (VSS) theory to design aircraft flight control systems is summarized. Two aircraft types are currently being investigated: the Augmentor Wing Jet STOL Research Aircraft (AWJSRA), and AV-8A Harrier. The AWJSRA design considers automatic control of longitudinal dynamics during the landing phase. The main task for the AWJSRA is to design an automatic landing system that captures and tracks a localizer beam. The control task for the AV-8A is to track velocity commands in a hovering flight configuration. Much effort was devoted to developing computer programs that are needed to carry out VSS design in a multivariable frame work, and in becoming familiar with the dynamics and control problems associated with the aircraft types under investigation. Numerous VSS design schemes were explored, particularly for the AWJSRA. The approaches that appear best suited for these aircraft types are presented. Examples are given of the numerical results currently being generated.
Generating Poetry Title Based on Semantic Relevance with Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Li, Z.; Niu, K.; He, Z. Q.
2017-09-01
Several approaches have been proposed to automatically generate Chinese classical poetry (CCP) in the past few years, but automatically generating titles for CCP is still a difficult problem. The difficulties are mainly reflected in two aspects. First, the words used in CCP are very different from modern Chinese words, and there are no valid word segmentation tools. Second, the semantic relevance of characters in CCP exists not only within one sentence but also between the same positions of adjacent sentences, which is hard for traditional text summarization models to grasp. In this paper, we propose an encoder-decoder model for generating CCP titles. Our encoder is a convolutional neural network (CNN) with two kinds of filters. To capture commonly used words within one sentence, one kind of filter covers two characters horizontally at each step. The other covers two characters vertically at each step and can grasp the semantic relevance of characters between adjacent sentences. Experimental results show that our model outperforms several related models and captures the semantic relevance of CCP more accurately.
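A hedged sketch of an encoder with the two filter shapes described above (1x2 within a line, 2x1 across adjacent lines); the embedding size, channel count, and vocabulary size are invented, and the decoder is omitted.

```python
import torch
import torch.nn as nn

class PoemEncoder(nn.Module):
    """Two filter shapes over a grid of character embeddings:
    1x2 filters span adjacent characters within a line; 2x1 filters span
    the same position in adjacent lines (hypothetical sizes and dims)."""
    def __init__(self, vocab_size=6000, emb=128, channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv_h = nn.Conv2d(emb, channels, kernel_size=(1, 2))
        self.conv_v = nn.Conv2d(emb, channels, kernel_size=(2, 1))
        self.pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, poem_ids):             # (batch, lines, chars)
        x = self.embed(poem_ids)              # (batch, lines, chars, emb)
        x = x.permute(0, 3, 1, 2)             # (batch, emb, lines, chars)
        h = self.pool(torch.relu(self.conv_h(x))).flatten(1)
        v = self.pool(torch.relu(self.conv_v(x))).flatten(1)
        return torch.cat([h, v], dim=1)       # poem representation

enc = PoemEncoder()
poem = torch.randint(0, 6000, (2, 4, 7))      # 2 quatrains, 7 chars per line
print(enc(poem).shape)                        # torch.Size([2, 128])
```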
Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János
2018-01-01
Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during the performance of a memory task. In the future, this could form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. We provoked spontaneous speech by asking the patients to recall the content of 2 short black-and-white films (one direct, one delayed) and to answer one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software) and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and control groups can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task and in the number of pauses in the question-answering task. The fully automated version of the analysis process - that is, using the ASR-based features in combination with machine learning - was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI in the community. PMID:29165085
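For illustration only: a minimal sketch of extracting pause-based temporal features from a speech signal via frame energies; the frame length, energy floor, and minimum pause duration are assumptions, not the study's calibrated settings, and the test signal is synthetic.

```python
import numpy as np

def pause_features(signal, sr, frame_ms=25, energy_floor=0.02, min_pause_ms=250):
    """Hesitation-style temporal features from a mono speech signal."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    silent = rms < energy_floor * rms.max()     # low-energy frames
    runs, run = [], 0
    for s in silent:                            # run-length encode silence
        if s:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    min_frames = int(min_pause_ms / frame_ms)
    pauses = [r * frame_ms / 1000 for r in runs if r >= min_frames]
    duration = len(signal) / sr
    return {"n_pauses": len(pauses),
            "total_pause_s": round(sum(pauses), 2),
            "hesitation_ratio": round(sum(pauses) / duration, 3)}

sr = 16000
t = np.arange(sr * 3) / sr                      # 3 s: 0.6 s bursts, 0.4 s gaps
sig = np.sin(2 * np.pi * 150 * t) * ((t % 1.0) < 0.6)
print(pause_features(sig, sr))
```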
Parenting the Internet: Resources for Parents and Children.
ERIC Educational Resources Information Center
Bushong, Sara
2002-01-01
Presents general statistical findings of Internet use by children, discusses the recent Children's Online Privacy Protection Act (COPPA), and summarizes online child safety considerations. Considers filtering and includes current resources for parents and children. (Author/LRW)
Cost model validation: a technical and cultural approach
NASA Technical Reports Server (NTRS)
Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.
2001-01-01
This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.
Forest statistics for northeast Washington.
John W. Hazard
1963-01-01
This publication summarizes the results of the third inventory of six northeast Washington counties: Ferry, Lincoln, Pend Oreille, Spokane, Stevens, and Whitman. The collection of field data was made during the years 1957 to 1961 in three separate inventory projects.
2002 commodity flow survey : state summaries
DOT National Transportation Integrated Search
2005-07-01
This report summarizes the 2002 Commodity Flow Survey (CFS) state reports released in March 2005 by the Bureau of Transportation Statistics (BTS) of the Research and Innovative Technology Administration of the USDOT and the Census Bureau of the...
1993 commodity flow survey : state summaries
DOT National Transportation Integrated Search
1997-06-01
This report summarizes the Commodity Flow Survey (CFS) state reports released between February 1996 and July 1996 by the Bureau of the Census and the 1993 Commodity Flow Survey: Preliminary Observations by the Bureau of Transportation Statistics. Inf...
Kim, Young Jae; Kim, Kwang Gi
2018-01-01
Existing drusen measurement is difficult to use in the clinic because it requires considerable time and effort for visual inspection. To resolve this problem, we propose an automatic drusen detection method to help the clinical diagnosis of age-related macular degeneration. First, we converted the fundus image to its green channel and extracted the ROI of the macular area based on the optic disk. Next, we detected the candidate group using the difference image of the median filter within the ROI. We also segmented vessels and removed them from the image. Finally, we detected the drusen through Renyi's entropy threshold algorithm. We performed comparisons and statistical analysis between the manual and automatic detection results for 30 cases in order to verify validity. As a result, the average sensitivity was 93.37% (80.95%~100%) and the average DSC was 0.73 (0.3~0.98). In addition, the value of the ICC was 0.984 (CI: 0.967~0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect that automatic drusen detection will help clinicians improve their diagnostic performance in the detection of drusen on fundus images.
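A hedged sketch of the first stages of such a pipeline (green channel, median-filter difference, threshold); the kernel size is an assumption, and Otsu's method stands in for the paper's Renyi entropy threshold:

```python
# Hedged sketch, not the authors' implementation. The input file name is
# hypothetical; Otsu thresholding is a stand-in for the Renyi entropy step.
import cv2
import numpy as np

def drusen_candidates(fundus_bgr, kernel=51):
    green = fundus_bgr[:, :, 1]                    # green channel
    background = cv2.medianBlur(green, kernel)     # large-kernel median = background
    diff = cv2.subtract(green, background)         # bright local deposits stand out
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

img = cv2.imread("fundus.png")                     # hypothetical input image
if img is not None:
    cv2.imwrite("drusen_mask.png", drusen_candidates(img))
```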
Regulation and Measurement of the Heat Generated by Automatic Tooth Preparation in a Confined Space.
Yuan, Fusong; Zheng, Jianqiao; Sun, Yuchun; Wang, Yong; Lyu, Peijun
2017-06-01
The aim of this study was to assess and regulate heat generation in the dental pulp cavity and the circumambient temperature around a tooth during laser ablation with a femtosecond laser in a confined space. Automatic tooth preparation is an innovation over traditional clinical dental technique: a robot controls an ultrashort-pulse laser to automatically complete three-dimensional tooth preparation in a confined space, and temperature control is the main measure for protecting the dental nerve. Ten tooth specimens were irradiated with a femtosecond laser controlled by a robot in a confined space to produce 10 tooth preparations. During the process, four thermocouple sensors were used to record the pulp cavity and circumambient environment temperatures with or without air cooling. A statistical analysis of the temperatures was performed between the conditions with and without air cooling (p < 0.05). The recordings showed that the temperature with air cooling was lower than that without air cooling and that the heat generated in the pulp cavity was lower than the threshold for dental pulp damage. These results indicate that femtosecond laser ablation with air cooling might be an appropriate method for automatic tooth preparation.
NASA Technical Reports Server (NTRS)
Hall, Justin R.; Hastrup, Rolf C.
1990-01-01
The principal challenges in providing effective deep space navigation, telecommunications, and information management architectures and designs for Mars exploration support are presented. The fundamental objectives are to provide the mission with the means to monitor and control mission elements, obtain science, navigation, and engineering data, compute state vectors and navigate, and to move these data efficiently and automatically between mission nodes for timely analysis and decision making. New requirements are summarized, and related issues and challenges, including robust connectivity for manned and robotic links, are identified. Enabling strategies are discussed, and candidate architectures and driving technologies are described.
Luna, Jorge M; Yip, Natalie; Pivovarov, Rimma; Vawdrey, David K
2016-08-01
Clinical teams in acute inpatient settings can greatly benefit from automated charting technologies that continuously monitor patient vital status. NewYork-Presbyterian has designed and developed a real-time patient monitoring system that integrates vital signs sensors, networking, and electronic health records to allow for automatic charting of patient status. We evaluate the representativeness (a combination of agreement, safety, and timing) of a core vital sign across nursing intensity care protocols for a preliminary feasibility assessment. Our findings suggest that an automated way of summarizing heart rate provides a faithful representation of true heart rate status and can facilitate alternative approaches to burdensome manual nurse charting of physiological parameters.
Implicit cognitive processes in psychopathology: an introduction.
Wiers, Reinout W; Teachman, Bethany A; De Houwer, Jan
2007-06-01
Implicit or automatic processes are important in understanding the etiology and maintenance of psychopathological problems. In order to study implicit processes in psychopathology, measures are needed that are valid and reliable when applied to clinical problems. One of the main topics in this special issue concerns the development and validation of new or modified implicit tests in different domains of psychopathology. The other main topic concerns the prediction of clinical outcomes and new ways to directly influence implicit processes in psychopathology. We summarize the contributions to this special issue and discuss how they further our knowledge of implicit processes in psychopathology and how to measure them.
Meston, Cindy M; Bradford, Andrea
2007-01-01
In this article, we summarize the definition, etiology, assessment, and treatment of sexual dysfunctions in women. Although the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV-TR) is our guiding framework for classifying and defining women's sexual dysfunctions, we draw special attention to recent discussion in the literature criticizing the DSM-IV-TR diagnostic criteria and their underlying assumptions. Our review of clinical research on sexual dysfunction summarizes psychosocial and biomedical management approaches, with a critical examination of the empirical support for commonly prescribed therapies and limitations of recent clinical trials.
[Statistical analysis using freely-available "EZR (Easy R)" software].
Kanda, Yoshinobu
2015-10-01
Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses (including competing risk analyses and the use of time-dependent covariates), by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and ensure that the statistical process is overseen by a supervisor.
Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction
Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong
2010-01-01
Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, and the quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% lower when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed up annotation and improve annotation consistency while maintaining the high quality of the final annotations. PMID:21094696
Effectiveness of an automatic manual wheelchair braking system in the prevention of falls.
Martorello, Laura; Swanson, Edward
2006-01-01
The purpose of this study was to evaluate the effectiveness of an automatic manual wheelchair braking system in the reduction of falls for patients at high risk of falls while transferring to and from a manual wheelchair. The study design was a normative survey carried out through the use of a written questionnaire sent to 60 skilled nursing facilities to collect data from the medical charts, which identified patients at high risk for falls who used an automatic wheelchair braking system. The facilities participating in the study identified a frequency of falls of high-risk patients while transferring to and from the wheelchair ranging from 2 to 10 per year, with a median fall rate per facility of 4 falls. One year after the installation of the automatic wheelchair braking system, participating facilities demonstrated a reduction of zero to three falls during transfers by high-risk patients, with a median fall rate of zero falls. This represents a statistically significant reduction of 78% in the fall rate of high-risk patients while transferring to and from the wheelchair, t (18) = 6.39, p < .0001. Incident reports of falls to and from manual wheelchairs were reviewed retrospectively for a 1-year period. This study suggests that high-risk fallers transferring to or from manual wheelchairs sustained significantly fewer falls when the Steddy Mate automatic braking system for manual wheelchairs was installed. The application of the automatic braking system allows clients, families/caregivers, and facility personnel an increased safety factor for the reduction of falls from the wheelchair.
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem - analyzing planetary nebulae images taken by the Hubble Space Telescope - and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
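AutoBayes itself emits optimized C/C++ from a declarative model specification; as a hedged illustration of the kind of mixture-model segmentation it can derive (a scikit-learn stand-in on synthetic intensities, not AutoBayes output):

```python
# Hedged analogue of the mixture-model segmentation step described above;
# the two-component model and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic "image": background pixels plus a brighter nebula-like region
pixels = np.concatenate([rng.normal(10, 2, 8000),    # background intensities
                         rng.normal(40, 6, 2000)])   # object intensities
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels.reshape(-1, 1))
labels = gmm.predict(pixels.reshape(-1, 1))          # per-pixel class labels
print(gmm.means_.ravel(), np.bincount(labels))       # recovered means, class sizes
```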
Quantitative topographic differentiation of the neonatal EEG.
Paul, Karel; Krajca, Vladimír; Roth, Zdenek; Melichar, Jan; Petránek, Svojmil
2006-09-01
To test the discriminatory topographic potential of a new method of automatic EEG analysis in neonates. A quantitative description of the neonatal EEG can contribute to the objective assessment of the functional state of the brain, and may improve the precision of diagnosing cerebral dysfunctions manifested by 'disorganization', 'dysrhythmia' or 'dysmaturity'. 21 healthy, full-term newborns were examined polygraphically during sleep (EEG - 8 referential derivations, respiration, ECG, EOG, EMG). From each EEG record, two 5-min samples (one from the middle of quiet sleep, the other from the middle of active sleep) were subjected to subsequent automatic analysis and were described by 13 variables: spectral features and features describing the shape and variability of the signal. The data from individual infants were averaged and the number of variables was reduced by factor analysis. All factors identified by factor analysis were statistically significantly influenced by the location of derivation. A large number of statistically significant differences were also established when comparing the effects of individual derivations on each of the 13 measured variables. Both spectral features and features describing the shape and variability of the signal are largely accountable for the topographic differentiation of the neonatal EEG. The presented method of automatic EEG analysis is capable of assessing the topographic characteristics of the neonatal EEG; it is adequately sensitive and describes the neonatal electroencephalogram with sufficient precision. The discriminatory capability of the method represents a promise for its application in clinical practice.
NASA Astrophysics Data System (ADS)
Kim, D.; Youn, J.; Kim, C.
2017-08-01
As a malfunctioning PV (photovoltaic) cell has a higher temperature than adjacent normal cells, it can be detected easily with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules within each individual array automatically. The performance of the proposed algorithm was tested on three sample images and verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
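A minimal sketch of such a local detection rule, assuming per-panel mean intensities within one array and a k-sigma cutoff (the k=2 threshold and data layout are assumptions, not the paper's values):

```python
# Hedged sketch of an array-local outlier rule for defective PV modules.
import numpy as np

def defective_panels(array_means, k=2.0):
    """array_means: mean thermal intensity per panel within one PV array."""
    mu, sigma = array_means.mean(), array_means.std()
    return np.where(array_means > mu + k * sigma)[0]   # flag hot outliers only

rng = np.random.default_rng(0)
panels = rng.normal(32.0, 0.8, 48)    # one array of 48 panels, deg C proxies
panels[[7, 30]] += 6.0                # two simulated hot (faulty) modules
print(defective_panels(panels))       # -> [ 7 30 ]
```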
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, which attempted to synergistically combine ASM and GC, was presented at this symposium last year for object delineation in 2D images. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.
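The abstract does not reproduce the new GC cost function; as a rough, hedged illustration only, shape-prior-augmented graph-cut energies of this family typically take the form

\[
E(L) \;=\; \sum_{p} D_p(L_p) \;+\; \lambda \sum_{(p,q)\in\mathcal{N}} V_{pq}(L_p, L_q) \;+\; \mu \sum_{p} S_p(L_p \mid \hat{x}_{\mathrm{ASM}}),
\]

where \(D_p\) is the image data term, \(V_{pq}\) the boundary smoothness term over the neighborhood system \(\mathcal{N}\), and \(S_p\) penalizes labelings that disagree with the current ASM shape estimate \(\hat{x}_{\mathrm{ASM}}\); the authors' actual formulation may differ.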
Automatic analysis of attack data from distributed honeypot network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Voznak, MIroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel
2013-05-01
There are many ways of obtaining real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, developed with a strong orientation toward monitoring of IP telephony traffic. IP telephony servers can easily be exposed to various types of attacks, and without protection this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses data from each honeypot. Results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.
Automatic detection of tweets reporting cases of influenza like illnesses in Australia
2015-01-01
Early detection of disease outbreaks is critical for disease spread control and management. In this work we investigate the suitability of statistical machine learning approaches to automatically detect Twitter messages (tweets) that are likely to report cases of possible influenza like illnesses (ILI). Empirical results obtained on a large set of tweets originating from the state of Victoria, Australia, in a 3.5 month period show evidence that machine learning classifiers are effective in identifying tweets that mention possible cases of ILI (up to 0.736 F-measure, i.e. the harmonic mean of precision and recall), regardless of the specific technique implemented by the classifier investigated in the study. PMID:25870759
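As a hedged sketch of the general approach (not the study's implementation), a bag-of-words classifier over tweets might look like this; the tiny corpus, labels, and model choice are illustrative:

```python
# Hedged sketch of an ILI tweet classifier; data and pipeline are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["got the flu, fever and chills all night",
          "sore throat and coughing, staying home today",
          "flu shots available at the clinic on campus",
          "watching the football game tonight"]
labels = [1, 1, 0, 0]                 # 1 = reports a possible ILI case

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["high fever and aching all over, think it's the flu"]))
```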
Automatic classification of bottles in crates
NASA Astrophysics Data System (ADS)
Aas, Kjersti; Eikvil, Line; Bremnes, Dag; Norbryhn, Andreas
1995-03-01
This paper presents a statistical method for classifying bottles in crates for use in automatic bottle-return machines. For the machines to reimburse the correct deposit, reliable recognition is important. The images are acquired by a laser range scanner co-registering the distance to the object and the strength of the reflected signal. The objective is to identify the crate and the bottles from a library of legal types. Bottles of significantly different size are separated using quite simple methods, while a more sophisticated recognizer is required to distinguish the more similar bottle types. Good results have been obtained when testing the developed method on bottle types which are difficult to distinguish using simple methods.
National Urban Mass Transportation Statistics (1982 - Section 15 Report)
DOT National Transportation Integrated Search
1983-11-01
This report summarizes the financial and operating data submitted annually to the Urban Mass Transportation Administration (UMTA) by the nation's public transit operators, pursuant to Section 15 of the Urban Mass Transportation (UMT) Act of 1964, as ...
National Urban Mass Transportation Statistics (1979 - Section 15 Report)
DOT National Transportation Integrated Search
1981-05-01
This report summarizes the financial and operating data submitted annually to the Urban Mass Transportation Administration (UMTA) by the nation's public transit operators, pursuant to section 15 of UMTA Act of 1964, amended. The report consists of tw...
Preliminary timber resource statistics for the Olympic Peninsula, Washington.
Colin D. MacLean; Janet L. Ohmann; Patricia M. Bassett
1991-01-01
This report summarizes a 1989 timber resource inventory of five counties in the Olympic Peninsula region of Washington: Clallam, Grays Harbor, Jefferson, Mason, and Thurston. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Timber resource statistics for the Olympic Peninsula, Washington.
Patricia M. Bassett; Daniel D. Oswald
1961-01-01
This report summarizes a 1978-79 timber resource inventory of five counties in the Olympic Peninsula of Washington: Clallam, Grays Harbor, Jefferson, Mason, and Thurston. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Pilot study of proposed revisions to specifications for hydraulic cement concrete.
DOT National Transportation Integrated Search
1985-01-01
This report summarizes the results of a pilot study of the statistical acceptance procedures proposed for adoption by the Virginia Department of Highways and Transportation. The proposed procedures were recommended in the report titled "Improved Spec...
National Urban Mass Transportation Statistics (1981 - Section 15 Report)
DOT National Transportation Integrated Search
1982-05-01
This report summarizes the financial and operating data submitted annually to the Urban Mass Transportation Administration (UMTA) by the nation's public transit operators, pursuant to Section 15 of the UMTA Act of 1964, as amended. The report consist...
Calculation of streamflow statistics for Ontario and the Great Lakes states
Piggott, Andrew R.; Neff, Brian P.
2005-01-01
Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
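For illustration, a hedged sketch of two statistics of this kind computed from a daily-mean streamflow record (synthetic data; SWSTAT's actual algorithms are not reproduced here):

```python
# Hedged sketch: flow-duration percentiles and the annual minimum 7-day mean
# flow (the building block of n-day frequency statistics). Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
daily_q = rng.lognormal(mean=3.0, sigma=0.7, size=3650)   # ~10 years, m^3/s

# Flow-duration: discharge equaled or exceeded p percent of the time
for p in (10, 50, 90):
    print(f"Q{p} = {np.percentile(daily_q, 100 - p):.1f}")

# Annual minimum 7-day mean flow (used in, e.g., 7Q10 frequency analysis)
rolling7 = np.convolve(daily_q, np.ones(7) / 7, mode="valid")
annual_min7 = [rolling7[y * 365:(y + 1) * 365].min() for y in range(9)]
print("min 7-day flows by year:", np.round(annual_min7, 1))
```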
Evaluating model accuracy for model-based reasoning
NASA Technical Reports Server (NTRS)
Chien, Steve; Roden, Joseph
1992-01-01
Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión
2017-01-01
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
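A hedged sketch of the NMF front-end idea: learn a spectral basis from a spectrogram, then summarize the activation coefficients over time with simple statistics (the exact H_CC post-processing is an assumption, and the spectrogram here is a synthetic stand-in):

```python
# Hedged sketch of NMF-based features for bird sound spectrograms.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
S = np.abs(rng.standard_normal((257, 400)))      # stand-in magnitude spectrogram
nmf = NMF(n_components=16, init="nndsvd", max_iter=400, random_state=0)
W = nmf.fit_transform(S)                         # 257x16 learned spectral basis
H = nmf.components_                              # 16x400 activations over time

# Temporal feature integration: mean and std of each activation track
features = np.concatenate([H.mean(axis=1), H.std(axis=1)])
print(features.shape)                            # (32,) per-recording feature vector
```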
speaq 2.0: A complete workflow for high-throughput 1D NMR spectra processing and quantification.
Beirnaert, Charlie; Meysman, Pieter; Vu, Trung Nghia; Hermans, Nina; Apers, Sandra; Pieters, Luc; Covaci, Adrian; Laukens, Kris
2018-03-01
Nuclear Magnetic Resonance (NMR) spectroscopy is, together with liquid chromatography-mass spectrometry (LC-MS), the most established platform to perform metabolomics. In contrast to LC-MS however, NMR data is predominantly being processed with commercial software. Meanwhile its data processing remains tedious and dependent on user interventions. As a follow-up to speaq, a previously released workflow for NMR spectral alignment and quantitation, we present speaq 2.0. This completely revised framework to automatically analyze 1D NMR spectra uses wavelets to efficiently summarize the raw spectra with minimal information loss or user interaction. The tool offers a fast and easy workflow that starts with the common approach of peak-picking, followed by grouping, thus avoiding the binning step. This yields a matrix consisting of features, samples and peak values that can be conveniently processed either by using included multivariate statistical functions or by using many other recently developed methods for NMR data analysis. speaq 2.0 facilitates robust and high-throughput metabolomics based on 1D NMR but is also compatible with other NMR frameworks or complementary LC-MS workflows. The methods are benchmarked using a simulated dataset and two publicly available datasets. speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq).
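speaq 2.0 is an R package and its peak picking is wavelet-based; purely as a hedged illustration of the peak-picking stage that precedes grouping, here is a simpler Python stand-in on a synthetic spectrum:

```python
# Hedged stand-in for the peak-picking stage (not speaq's wavelet method).
import numpy as np
from scipy.signal import find_peaks

ppm = np.linspace(0, 10, 5000)
spectrum = (np.exp(-((ppm - 2.5) / 0.01) ** 2) +          # two synthetic NMR peaks
            0.6 * np.exp(-((ppm - 7.2) / 0.02) ** 2) +
            0.01 * np.random.default_rng(0).random(5000))  # baseline noise
peaks, props = find_peaks(spectrum, height=0.1, prominence=0.05)
print(np.round(ppm[peaks], 2), np.round(props["peak_heights"], 2))
```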
Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael
2013-12-23
ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation of new additions in the latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of an appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation of other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.
The Development of Bayesian Theory and Its Applications in Business and Bioinformatics
NASA Astrophysics Data System (ADS)
Zhang, Yifei
2018-03-01
Bayesian theory originated from a 1763 essay by the British mathematician Thomas Bayes, and after its development in the 20th century, Bayesian statistics has come to play a significant part in statistical study across all fields. Due to the recent breakthrough in high-dimensional integration, Bayesian statistics has been improved and perfected, and it can now be used to solve problems that classical statistics failed to solve. This paper summarizes the history, concepts, and applications of Bayesian statistics in five parts: the history of Bayesian statistics, the weaknesses of classical statistics, Bayesian theory, and its development and applications. The first two parts compare Bayesian statistics and classical statistics from a macroscopic perspective, while the last three parts focus on Bayesian theory in detail, from introducing particular Bayesian concepts to tracing their development and, finally, their applications.
Automatic Rock Detection and Mapping from HiRISE Imagery
NASA Technical Reports Server (NTRS)
Huertas, Andres; Adams, Douglas S.; Cheng, Yang
2008-01-01
This system includes a C-code software program and a set of MATLAB software tools for statistical analysis and rock distribution mapping. The major functions include rock detection and rock detection validation. The rock detection code has been evolved into a production tool that can be used by engineers and geologists with minor training.
ERIC Educational Resources Information Center
Harris, Julian; Maurer, Hermann
An investigation into high-level event monitoring within the scope of a well-known multimedia application, HyperCard (a program on the Macintosh computer), is carried out. A monitoring system is defined as a system which automatically monitors the usage of some activity and gathers statistics based on what it has observed. Monitor systems can give the…
Sleepers' Lag: Study on Motion and Attention
ERIC Educational Resources Information Center
Raca, Mirko; Tormey, Roland; Dillenbourg, Pierre
2016-01-01
Body language is an essential source of information in everyday communication. Low signal-to-noise ratio prevents us from using it in the automatic processing of student behaviour, an obstacle that we are slowly overcoming with advanced statistical methods. Instead of profiling individual behaviour of students in the classroom, the idea is to…
ERIC Educational Resources Information Center
Montoya, Isaac D.
2008-01-01
Three classification techniques (Chi-square Automatic Interaction Detection [CHAID], Classification and Regression Tree [CART], and discriminant analysis) were tested to determine their accuracy in predicting Temporary Assistance for Needy Families program recipients' future employment. Technique evaluation was based on proportion of correctly…
Log Defect Recognition Using CT-images and Neural Net Classifiers
Daniel L. Schmoldt; Pei Li; A. Lynn Abbott
1995-01-01
Although several approaches have been introduced to automatically identify internal log defects using computed tomography (CT) imagery, most of these have been feasibility efforts and consequently have had several limitations: (1) reports of classification accuracy are largely subjective, not statistical, (2) there has been no attempt to achieve real-time operation,...
TableSeer: Automatic Table Extraction, Search, and Understanding
ERIC Educational Resources Information Center
Liu, Ying
2009-01-01
Tables are ubiquitous with a history that pre-dates that of sentential text. Authors often report a summary of their most important findings using tabular structure in documents. For example, scientists widely use tables to present the latest experimental results or statistical data in a condensed fashion. Along with the explosive development of…
Web information retrieval for health professionals.
Ting, S L; See-To, Eric W K; Tse, Y K
2013-06-01
This paper presents a Web Information Retrieval System (WebIRS), which is designed to assist healthcare professionals in obtaining up-to-date medical knowledge and information via the World Wide Web (WWW). The system leverages document classification and text summarization techniques to deliver highly correlated medical information to physicians. The system architecture of the proposed WebIRS is first discussed, and then a case study on an application of the proposed system in a Hong Kong medical organization is presented to illustrate the adoption process, with a questionnaire administered to collect feedback on the operation and performance of WebIRS in comparison with conventional information retrieval on the WWW. A prototype system has been constructed and implemented on a trial basis in a medical organization. It has proven to be of benefit to healthcare professionals through its automatic functions for classifying and summarizing the medical information that physicians need and are interested in. The results of the case study show that with the use of the proposed WebIRS, a significant reduction in searching time and effort can be attained, with retrieval of highly relevant materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
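A hedged sketch of the core idea, assuming a candidate threshold grid, a generalized Pareto fit to the exceedances, and the standard Anderson-Darling statistic (the paper's exact selection rule and bootstrap procedure are not reproduced):

```python
# Hedged sketch of automatic POT threshold screening via the A^2 statistic.
import numpy as np
from scipy import stats

def ad_statistic(exceed, c, loc, scale):
    """Anderson-Darling A^2 for a fitted generalized Pareto distribution."""
    u = np.sort(stats.genpareto.cdf(exceed, c, loc=loc, scale=scale))
    u = np.clip(u, 1e-10, 1 - 1e-10)
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

rng = np.random.default_rng(0)
data = stats.genpareto.rvs(0.1, scale=2.0, size=2000, random_state=rng) + 5.0

# Scan candidate thresholds; small A^2 indicates an acceptable GPD fit
for thr in np.percentile(data, [50, 70, 80, 90, 95]):
    exc = data[data > thr] - thr
    c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
    print(f"threshold {thr:6.2f}  n={len(exc):4d}  "
          f"A2={ad_statistic(exc, c, loc, scale):.3f}")
```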
genipe: an automated genome-wide imputation pipeline with automatic reporting and statistical tools.
Lemieux Perreault, Louis-Philippe; Legault, Marc-André; Asselin, Géraldine; Dubé, Marie-Pierre
2016-12-01
Genotype imputation is now commonly performed following genome-wide genotyping experiments. Imputation increases the density of analyzed genotypes in the dataset, enabling fine-mapping across the genome. However, the process of imputation using the most recent publicly available reference datasets can require considerable computation power and the management of hundreds of large intermediate files. We have developed genipe, a complete genome-wide imputation pipeline which includes automatic reporting, imputed data indexing and management, and a suite of statistical tests for imputed data commonly used in genetic epidemiology (Sequence Kernel Association Test, Cox proportional hazards for survival analysis, and linear mixed models for repeated measurements in longitudinal studies). The genipe package is an open source Python software and is freely available for non-commercial use (CC BY-NC 4.0) at https://github.com/pgxcentre/genipe. Documentation and tutorials are available at http://pgxcentre.github.io/genipe. CONTACT: louis-philippe.lemieux.perreault@statgen.org or marie-pierre.dube@statgen.org. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Automated Assessment of Child Vocalization Development Using LENA.
Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-07-12
To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
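A hedged sketch of the modeling chain described above: reduce phone/biphone frequency vectors with PCA, then predict a criterion language score with linear regression (dimensions and data are illustrative assumptions, not the LENA system's values):

```python
# Hedged sketch of a PCA-plus-regression chain for vocalization scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_children, n_biphones = 200, 600
X = rng.poisson(3.0, (n_children, n_biphones)).astype(float)  # biphone counts
X /= X.sum(axis=1, keepdims=True)                  # frequencies per recording
true_w = rng.normal(0, 1, n_biphones)
y = 100 + 15 * (X @ true_w) + rng.normal(0, 1, n_children)    # criterion score

model = make_pipeline(PCA(n_components=20), LinearRegression()).fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```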
A Recent Advance in the Automatic Indexing of the Biomedical Literature
Névéol, Aurélie; Shooshan, Sonya E.; Humphrey, Susanne M.; Mork, James G.; Aronson, Alan R.
2009-01-01
The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE®, the largest bibliographic database of biomedical citations. Indexers at the U.S. National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH® terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM’s Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading/subheading pair recommendations were assessed independently and combined. The best combination achieves 48% precision and 30% recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice. PMID:19166973
A preliminary study of DTI Fingerprinting on stroke analysis.
Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo
2014-01-01
DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, quantitative measurement of the physiological variation among subtypes of ischemic stroke is not available. An automatic quantitative method for DTI analysis would enhance the application of DTI in clinics. In this study, we proposed a DTI Fingerprinting technology to quantitatively analyze white matter tissue, which was applied to stroke classification. The TBSS (Tract-Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, lesion ROIs (Regions of Interest) were manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. They indicate that DTI Fingerprinting could identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of DTI data. Further development should be carried out to improve DTI Fingerprinting technology for clinics.
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the leading citrus fruit producer in Europe and the fourth largest in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (campaign 2006-2007). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough to allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update; the processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels, and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
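A hedged sketch of the agreement rule described above, under the assumption of a unanimous vote: a parcel's label is committed automatically only when the independently trained classifiers agree, otherwise it is routed to photo-interpreters (data and models here are synthetic stand-ins):

```python
# Hedged sketch of classifier-agreement gating for automatic GIS updates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, y_tr, X_new = X[:500], y[:500], X[500:]   # "new parcels" to classify

preds = np.array([m.fit(X_tr, y_tr).predict(X_new) for m in
                  (DecisionTreeClassifier(random_state=0),
                   MLPClassifier(max_iter=1000, random_state=0),
                   SVC())])
agree = (preds == preds[0]).all(axis=0)          # unanimous vote only
print(f"auto-updated parcels: {agree.mean():.0%}; the rest go to experts")
```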
Beyond Statistics: The Economic Content of Risk Scores.
Einav, Liran; Finkelstein, Amy; Kluender, Raymond; Schrimpf, Paul
2016-04-01
"Big data" and statistical techniques to score potential transactions have transformed insurance and credit markets. In this paper, we observe that these widely-used statistical scores summarize a much richer heterogeneity, and may be endogenous to the context in which they get applied. We demonstrate this point empirically using data from Medicare Part D, showing that risk scores confound underlying health and endogenous spending response to insurance. We then illustrate theoretically that when individuals have heterogeneous behavioral responses to contracts, strategic incentives for cream skimming can still exist, even in the presence of "perfect" risk scoring under a given contract.
Minnesota Land Ownership Trends, 1962-1977
Pamela J. Jakes; Alexander Vasilevsky
1980-01-01
The distribution of Minnesota's commercial forest land among ownership classes remained stable between 1962 and 1977. This note summarizes commercial forest ownership data by Forest Survey Unit for 1962 and 1977 and presents more detailed area statistics for Minnesota's 17 northern counties.
Speckle noise in satellite based lidar systems
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
The lidar system model was described, and the statistics of the signal and noise at the receiver output were derived. Scattering media effects were discussed along with polarization and atmospheric turbulence. The major equations were summarized and evaluated for some typical parameters.
Comparison of skid resistance testing to stopping distance testing.
DOT National Transportation Integrated Search
1998-01-01
This report is intended to statistically summarize the results of a side-by-side test of the skid resistance testing trailer utilized by the Oregon Department of Transportation (ODOT), and the stopping distance car utilized by the Oregon State Police...
Digital image analysis techniques for fiber and soil mixtures.
DOT National Transportation Integrated Search
1999-05-01
The objective of image processing is to visually enhance, quantify, and/or statistically evaluate some aspect of an image not readily apparent in its original form. Processed digital image data can be analyzed in numerous ways. In order to summarize ...
A transportation profile of New York State
DOT National Transportation Integrated Search
1996-01-01
This report provides a convenient reference for transportation statistics in New York State. The focus of the document is on demographic and related travel measures which, for the convenience of the user, are summarized under one cover. Most of the i...
Camera network video summarization
NASA Astrophysics Data System (ADS)
Panda, Rameswar; Roy-Chowdhury, Amit K.
2017-05-01
Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
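The abstract does not state the objective explicitly; as a hedged illustration of how a capped l21-norm representative-selection objective is commonly written (an assumption, not the authors' exact formulation):

\[
\min_{Z}\; \lVert Y - Y Z \rVert_F^2 \;+\; \lambda \sum_{i=1}^{n} \min\!\big( \lVert z^i \rVert_2,\; \theta \big),
\]

where \(Y\) holds the jointly embedded features, the rows \(z^i\) of \(Z\) indicate which samples act as representatives, and the cap \(\theta\) bounds each row's contribution so that outliers cannot dominate the selection.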
Scalable gastroscopic video summarization via similar-inhibition dictionary selection.
Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin
2016-01-01
This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians in reviewing the abnormal contents of the video more effectively. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
Cruz-Montecinos, Carlos; Cerda, Mauricio; Sanzana-Cuche, Rodolfo; Martín-Martín, Jaime; Cuesta-Vargas, Antonio
2016-01-01
The fascia provides and transmits forces in connective tissues, thereby regulating human posture and movement. One way to assess the myofascial interaction is a fascia ultrasound recording. Ultrasound can follow fascial displacement either manually or automatically through two-dimensional (2D) methods. One possible method is the iterated Lucas-Kanade Pyramid (LKP) algorithm, which is based on automatic pixel tracking during passive movements in 2D fascial displacement assessments. Until now, the accumulated error over time has not been considered, even though it could be crucial for detecting fascial displacement in low-amplitude movements. The aim of this study was to assess displacement of the medial gastrocnemius fascia during cervical spine flexion in a kyphotic posture with the knees extended and ankles at 90°. The ultrasound transducer was placed over the belly of the dominant medial gastrocnemius. Displacement was calculated from nine automatically selected tracking points. To determine cervical flexion, an established 2D marker protocol was implemented. Offline pressure sensors were used to synchronize the 2D kinematic data from cervical flexion and deep fascia displacement of the medial gastrocnemius. Fifteen participants performed the cervical flexion task. The basal tracking error was 0.0211 mm. In 66% of the subjects, a proximal displacement of the fascia above the basal error (0.076 mm ± 0.006 mm) was measured. Fascia displacement onset during cervical spine flexion was detected over 70% of the cycle; however, only when detected for more than 80% of the cycle was displacement considered statistically significant as compared to the first 10% of the cycle (ANOVA, p < 0.05). By using an automated tracking method, the present analyses suggest statistically significant displacement of the deep fascia. Further studies are needed to corroborate and fully understand the mechanisms associated with these results.
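For readers unfamiliar with pyramidal Lucas-Kanade point tracking, a minimal sketch using OpenCV's implementation is given below; the frame file names, seed points, window size and pyramid depth are illustrative assumptions, not the settings used in the study.

```python
# Sketch: pyramidal Lucas-Kanade tracking of a few points across an ultrasound
# frame sequence, in the spirit of the LKP tracking described above.
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i:04d}.png", cv2.IMREAD_GRAYSCALE) for i in range(100)]

# Seed tracking points (x, y) as float32 with shape (N, 1, 2); values are placeholders.
points = np.array([[[120.0, 80.0]], [[130.0, 85.0]], [[140.0, 90.0]]], dtype=np.float32)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

trajectories = [points.reshape(-1, 2)]
prev = frames[0]
for nxt in frames[1:]:
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, nxt, points, None, **lk_params)
    points = new_pts                       # carry updated positions forward
    trajectories.append(new_pts.reshape(-1, 2))
    prev = nxt

# Mean displacement of the tracked points along x (convert to mm with the pixel spacing).
displacement_px = np.mean(trajectories[-1][:, 0] - trajectories[0][:, 0])
print("mean displacement (pixels):", displacement_px)
```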
3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.
Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin
2012-07-01
Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
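The validation metrics quoted above (Jaccard, precision, recall and Hausdorff distance) can be computed directly from binary masks. A minimal sketch, assuming boolean NumPy arrays of equal shape for the automatic and expert segmentations:

```python
# Sketch: overlap and distance metrics for comparing an automatic segmentation
# against an expert (reference) segmentation.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_metrics(auto_mask, expert_mask):
    tp = np.logical_and(auto_mask, expert_mask).sum()
    fp = np.logical_and(auto_mask, ~expert_mask).sum()
    fn = np.logical_and(~auto_mask, expert_mask).sum()
    jaccard = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return jaccard, precision, recall

def hausdorff(auto_mask, expert_mask):
    a = np.argwhere(auto_mask)   # voxel coordinates of each mask
    b = np.argwhere(expert_mask)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```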
Oechsner, Markus; Chizzali, Barbara; Devecka, Michal; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona
2016-10-26
The aim of this study was to analyze differences in couch shifts (setup errors) resulting from image registration of different CT datasets with free-breathing cone beam CTs (FB-CBCT). Both automatic and manual image registrations were performed, and registration results were correlated with tumor characteristics. FB-CBCT image registration was performed for 49 patients with lung lesions using slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP) and mid-ventilation CTs (MidV) as reference images. Both automatic and manual image registrations were applied. Shift differences were evaluated between the registered CT datasets for automatic and manual registration, respectively. Furthermore, differences between automatic and manual registration were analyzed for the same CT datasets. The registration results were statistically analyzed and correlated with tumor characteristics (3D tumor motion, tumor volume, superior-inferior (SI) distance, tumor environment). Median 3D shift differences over all patients were between 0.5 mm (AIPvsMIP) and 1.9 mm (MIPvsPCT and MidVvsPCT) for the automatic registration and between 1.8 mm (AIPvsPCT) and 2.8 mm (MIPvsPCT and MidVvsPCT) for the manual registration. For some patients, large shift differences (>5.0 mm) were found (maximum 10.5 mm, automatic registration). Comparing automatic vs manual registrations for the same reference CTs, ∆AIP achieved the smallest (1.1 mm) and ∆MIP the largest (1.9 mm) median 3D shift differences. The standard deviation (variability) of the 3D shift differences was also the smallest for ∆AIP (1.1 mm). Significant correlations (p < 0.01) were found between 3D shift difference and 3D tumor motion (AIPvsMIP, MIPvsMidV) and SI distance (AIPvsMIP) for automatic registration, and also for 3D tumor motion (∆PCT, ∆MidV; automatic vs manual). Using different CT datasets for image registration with FB-CBCTs can result in different 3D couch shifts. Manual registrations partly yielded different 3D shifts than automatic registrations. AIP CTs yielded the smallest shift differences and might be the most appropriate CT dataset for registration with 3D FB-CBCTs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-14
This package contains statistical routines for extracting features from multivariate time-series data, which can then be used for subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression model fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
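A minimal sketch of the windowed-regression feature extraction described above: a local linear model is fitted in a moving window for each series and the coefficients are then summarized over fixed time intervals. The window length, interval length and synthetic series are illustrative assumptions, not the package's defaults.

```python
# Sketch: moving-window regression coefficients summarized per time interval.
import numpy as np

def window_coefficients(series, window=50, degree=1):
    """Regression coefficients of `series` vs. time for each moving window."""
    coeffs = []
    t = np.arange(window)
    for start in range(0, len(series) - window + 1):
        coeffs.append(np.polyfit(t, series[start:start + window], degree))
    return np.asarray(coeffs)              # shape: (n_windows, degree + 1)

def summarize(coeffs, interval=200):
    """Mean and standard deviation of each coefficient per time interval."""
    summaries = []
    for start in range(0, len(coeffs), interval):
        block = coeffs[start:start + interval]
        summaries.append(np.concatenate([block.mean(axis=0), block.std(axis=0)]))
    return np.asarray(summaries)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2000))           # one synthetic time series
features = summarize(window_coefficients(series))   # feature rows for anomaly analysis
```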
Sowden, Sophie; Koehne, Svenja; Catmur, Caroline; Dziobek, Isabel; Bird, Geoffrey
2016-02-01
A lack of imitative behavior is frequently described as a core feature of Autism Spectrum Disorder (ASD), and is consistent with claims of mirror neuron system dysfunction in these individuals. Previous research has questioned this characterization of ASD, however, arguing that when tests of automatic imitation are used--which do not require higher-level cognitive processing--imitative behavior is intact or even enhanced in individuals with ASD. In Experiment 1, 60 adult individuals with ASD and a matched Control group completed an automatic imitation task in which they were required to perform an index or a middle finger lift while observing a hand making either the same, or the alternate, finger movement. Both groups demonstrated a significant imitation effect whereby actions were executed faster when preceded by observation of the same action than when preceded by the alternate action. The magnitude of this "imitation effect" was statistically indistinguishable in the ASD and Control groups. Experiment 2 utilized an improved automatic imitation paradigm to demonstrate that, when automatic imitation effects are isolated from those due to spatial compatibility, increasing autism symptom severity is associated with an increased tendency to imitate. Notably, there was no association between autism symptom severity and spatial compatibility, demonstrating the specificity of the link between ASD symptoms and increased imitation. These results provide evidence against claims of a lack of imitative behavior in ASD, and challenge the "Broken Mirror Theory of Autism." © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-Destructive Testing is necessary in areas where defects in structures emerge over time due to wear and tear and structural integrity must be maintained to preserve usability. However, manual testing has many limitations: high training cost, long training procedures, and, worse, inconsistent test results. A prime objective of this project is to develop an automatic Non-Destructive Testing system for the shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, are used for this artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering and averaging. The aim of this step is to reduce the noise and make the signal character more distinguishable. Four features are selected: (1) the autoregressive model coefficients, (2) the standard deviation, (3) the Pearson correlation and (4) the dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A local-maximum search is carried out before feature computation, which greatly reduces computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically and test a simulated shaft automatically. The automatic calibration procedure and the automatic shaft testing procedure are developed.
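A hedged sketch of the feature-plus-classification-tree idea follows, using simplified stand-ins for the four features (autoregressive coefficients, standard deviation, Pearson correlation with a reference echo, and a crude dispersion measure) and scikit-learn's decision tree; the feature definitions and synthetic signals are assumptions, not the SOFRA implementation.

```python
# Sketch: simple ultrasonic-segment features fed to a classification tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ar_coefficients(x, order=4):
    # Least-squares fit of x[t] from its `order` previous samples.
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def features(segment, reference):
    r = np.corrcoef(segment, reference)[0, 1]                  # Pearson correlation
    dispersion = np.ptp(segment) / (np.std(segment) + 1e-12)   # crude uniformity measure
    return np.concatenate([ar_coefficients(segment), [np.std(segment), r, dispersion]])

rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 20, 256))                    # idealized reference echo
X = np.array([features(rng.normal(size=256) + cls * reference, reference)
              for cls in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)                                      # 0 = no defect echo, 1 = defect echo
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
```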
Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O
2016-10-01
The striatum is the primary target in regional ¹¹C-raclopride PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using ¹¹C-raclopride PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and the test-retest variability and reliability of the outcome measures using data from seven healthy male participants who were scanned twice with ¹¹C-raclopride PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability = 4.5%, SEM = 0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in ¹¹C-raclopride PET studies.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Short communication: Automatic washing of hooves can help control digital dermatitis in dairy cows.
Thomsen, Peter T; Ersbøll, Annette Kjær; Sørensen, Jan Tind
2012-12-01
The objectives of this study were to develop and test a system for automatic washing of the hooves of dairy cows and to evaluate the effect of frequent automatic washing on the prevalence of digital dermatitis (DD). An automatic hoof washer was developed in an experimental dairy herd and tested in 6 commercial dairy herds in 2 experiments (1 and 2). In the experimental herd, automatic hoof washing resulted in cleaner hooves. In experiments 1 and 2, cows were washed after each milking on the left side only, leaving the right side unwashed as a within-cow control. In experiment 1, hooves were washed with a water and 0.4% soap solution. In experiment 2, hooves were washed with water only. In each experiment, DD was scored in a hoof-trimming chute approximately 60 d after the start of hoof washing. Data were analyzed using a generalized linear mixed model. The outcome was the DD status of each leg (DD positive or DD negative). Herd and cow within herd were included as random effects, and treatment (washing or control) was included as a fixed effect. The statistical analyses showed that the odds ratio of having DD was 1.48 in the control leg compared with the washed leg in experiment 1. In experiment 2, the odds ratio of having DD was 1.27 in the control leg compared with the washed leg. We concluded that automatic washing of hooves with water and soap can help decrease the prevalence of DD in commercial dairy herds. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Wisconsin's forest, 2004: statistics and quality assurance
Mark H. Hansen; Charles H. Perry; Gary Brand; Ronald E. McRoberts
2008-01-01
The first full, annualized inventory of Wisconsin's forests was completed in 2004 after 6,478 forested plots were visited. An earlier publication summarized the results and presented issue-driven analyses (Perry et al. 2008). This report includes detailed information on forest inventory methods...
Blowout Prevention System Events and Equipment Component Failures : 2016 SafeOCS Annual Report
DOT National Transportation Integrated Search
2017-09-22
The SafeOCS 2016 Annual Report, produced by the Bureau of Transportation Statistics (BTS), summarizes blowout prevention (BOP) equipment failures on marine drilling rigs in the Outer Continental Shelf. It includes an analysis of equipment component f...
National Urban Mass Transportation Statistics. Second Annual Report, Section 15 Reporting System
DOT National Transportation Integrated Search
1982-06-01
This report summarizes the financial and operating data submitted annually to the Urban Mass Transportation Administration (UMTA) by the nation's public transit operators, pursuant to Section 15 of the UMT Act of 1964, as amended. The report consists...
Constructing Maps Collaboratively.
ERIC Educational Resources Information Center
Leinhardt, Gaea; Stainton, Catherine; Bausmith, Jennifer Merriman
1998-01-01
Summarizes a study that maintains that students who work together in small groups had a better understanding of map concepts. Discusses why making maps in groups can enhance students' conceptual geographic understanding and offers suggestions for improving geography instruction using small-group configurations. Includes statistical and graphic…
DOT National Transportation Integrated Search
1984-04-01
This report summarizes the cost experiences and trends of sixteen domestic AGT systems. Capital costs, operation and maintenance costs, system characteristics, operational statistics, and unit cost measures are presented to provide useful information...
DEVELOPING MEANINGFUL COHORTS FOR HUMAN EXPOSURE MODELS
This paper summarizes numerous statistical analyses focused on the U.S. Environmental Protection Agency's Consolidated Human Activity Database (CHAD), used by many exposure modelers as the basis for data on what people do and where they spend their time. In doing so, modelers ...
Environmental Technician Training in the United Kingdom.
ERIC Educational Resources Information Center
Potter, John F.
1985-01-01
Stresses the need for qualified environmental science technicians and for training courses in this area. Provides program information and statistical summarization of a national diploma program for environmental technicians titled "Business and Technician Education Council." Reviews the program areas of environmental analysis and…
Timber resource statistics for non-Federal forest land in southwest Oregon.
Donald R. Gedney; Patricia M. Bassett; Mary A. Mei
1986-01-01
This report summarizes a 1985 timber resource inventory of the non-Federal forest land in the five counties (Coos, Curry, Douglas, Jackson, and Josephine) in southwest Oregon. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Timber resource statistics for the Puget Sound area, Washington.
Patricia M. Bassett; Daniel D. Oswald
1982-01-01
This report summarizes a 1979 timber resource inventory of eight counties in the Puget Sound area of Washington: Island, King, Kitsap, Pierce, San Juan, Skagit, Snohomish, and Whatcom. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Preliminary timber resource statistics for the Puget Sound area, Washington.
Colin D. MacLean; Janet L. Ohmann; Patricia M. Bassett
1991-01-01
This report summarizes a 1989 timber resource inventory of eight counties in the Puget Sound region of Washington: Island, King, Kitsap, Pierce, Skagit, San Juan, Snohomish, and Whatcom. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
A Class of Population Covariance Matrices in the Bootstrap Approach to Covariance Structure Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro; Yanagihara, Hirokazu
2007-01-01
Model evaluation in covariance structure analysis is critical before the results can be trusted. Due to finite sample sizes and unknown distributions of real data, existing conclusions regarding a particular statistic may not be applicable in practice. The bootstrap procedure automatically takes care of the unknown distribution and, for a given…
Data mining: sophisticated forms of managed care modeling through artificial intelligence.
Borok, L S
1997-01-01
Data mining is a recent development in computer science that combines artificial intelligence algorithms and relational databases to discover patterns automatically, without the use of traditional statistical methods. Work with data mining tools in health care is in a developmental stage that holds great promise, given the combination of demographic and diagnostic information.
Farm Persistence and Adaptation at the Rural-Urban Interface: Succession and Farm Adjustment
ERIC Educational Resources Information Center
Inwood, Shoshanah M.; Sharp, Jeff S.
2012-01-01
Despite assumptions that agriculture will automatically go into a mode of decline at the Rural Urban Interface (RUI), official statistics suggest that agriculture as a whole remains a strong (and in some cases a growing) industry in many U.S. RUI counties. RUI scholars have acknowledged internal family dynamics can significantly influence farm…
ERIC Educational Resources Information Center
Miller, L. Dee; Soh, Leen-Kiat; Samal, Ashok; Kupzyk, Kevin; Nugent, Gwen
2015-01-01
Learning objects (LOs) are important online resources for both learners and instructors and usage for LOs is growing. Automatic LO tracking collects large amounts of metadata about individual students as well as data aggregated across courses, learning objects, and other demographic characteristics (e.g. gender). The challenge becomes identifying…
Using Statistical Techniques and Web Search to Correct ESL Errors
ERIC Educational Resources Information Center
Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre
2009-01-01
In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…
Automatic extraction of tree crowns from aerial imagery in urban environment
NASA Astrophysics Data System (ADS)
Liu, Jiahang; Li, Deren; Qin, Xunwen; Yang, Jianfeng
2006-10-01
Traditionally, field-based investigation is the main method for surveying greenbelt in urban environments, but it is costly and has a low updating frequency. In high-resolution images, the structure and texture of tree canopies are statistically similar despite large differences in canopy configuration, and the surface structure and texture of tree crowns differ markedly from those of other land-cover types. In this paper, we present an automatic method to detect tree crowns in urban environments from high-resolution imagery without any a priori knowledge. Our method exploits the distinctive structure and texture of tree crown surfaces: it uses the variance and mathematical expectation of a defined image window to coarsely locate candidate canopy blocks, and then analyzes their inner structure and texture to refine these candidates. The admissible ranges of all feature parameters used in the method are generated automatically from a small number of samples, and holes and their distribution are introduced as an important characteristic in the refinement step. The isotropy of candidate image blocks and of the hole distribution is also integrated into the method. Aerial imagery with a resolution of about 0.3 m was used to test the method, and the results indicate that it is an effective approach for automatically detecting tree crowns in urban environments.
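A minimal sketch of the coarse positioning step described above, thresholding the mean and variance of fixed-size image windows; the window size and thresholds are illustrative assumptions rather than values learned from the paper's samples.

```python
# Sketch: coarse candidate tree-crown blocks from local brightness and variance.
import numpy as np

def candidate_blocks(image, win=32, mean_range=(60, 180), var_min=150.0):
    """Return top-left corners of windows whose mean brightness and texture
    variance fall in ranges typical of tree crowns."""
    candidates = []
    for r in range(0, image.shape[0] - win + 1, win):
        for c in range(0, image.shape[1] - win + 1, win):
            block = image[r:r + win, c:c + win].astype(float)
            m, v = block.mean(), block.var()
            if mean_range[0] <= m <= mean_range[1] and v >= var_min:
                candidates.append((r, c))
    return candidates
```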
DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.
Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A
2017-01-01
Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. Large volumes of data obtained at high resolution require the development of automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection suffer from difficulties in detecting particular cell types and in handling cell populations of differing brightness, non-uniformly stained cells, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells. We developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including samples with cells of different brightness, non-uniform staining, and overlapping cells, for whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
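A minimal sketch of the watershed step used to split touching cells, built on a distance-transform watershed from scikit-image and SciPy; the Otsu threshold and peak spacing are assumptions, and the bootstrap Gaussian fit for statistical significance is omitted.

```python
# Sketch: seed-based watershed splitting of overlapping cells in a 3D volume.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def detect_cells(volume):
    binary = volume > threshold_otsu(volume)
    distance = ndi.distance_transform_edt(binary)
    # Local maxima of the distance map act as one seed per (possibly overlapping) cell.
    peaks = peak_local_max(distance, min_distance=3, labels=binary)
    markers = np.zeros(volume.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=binary)
    return labels          # integer label per detected cell
```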
A novel automatic segmentation workflow of axial breast DCE-MRI
NASA Astrophysics Data System (ADS)
Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra
2018-04-01
In this paper we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step is the detection of breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm; as processing steps, Otsu thresholding and a neighborhood algorithm are applied, and the external area of the breast is then removed to obtain an approximate breast region. The second step is the delineation of the chest wall, which is treated as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points and the upper middle point, placed in the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process against different breast densities, complex shapes and challenging cases. The mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods; segmentation of 120 slices is achieved in 20.57 ± 5.2 s.
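A minimal sketch of the lowest-cost path search, here a textbook Dijkstra over a 2D per-pixel cost image with a 4-neighbourhood; the cost definition and neighbourhood are simplifying assumptions, not the authors' exact formulation.

```python
# Sketch: Dijkstra lowest-cost path between two pixels of a 2D cost image.
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """cost: 2D array of per-pixel costs; start/goal: (row, col) tuples."""
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal          # walk back from the goal to recover the path
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```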
NASA Astrophysics Data System (ADS)
Lemanzyk, Thomas; Anding, Katharina; Linss, Gerhard; Rodriguez Hernández, Jorge; Theska, René
2015-02-01
The following paper deals with the classification of seeds and seed components of the South American Incanut plant and the modification of a machine to handle this task. Initially, the state of the art is illustrated. The research was carried out in Germany, with a substantial part in Peru and Ecuador. Theoretical considerations for an automatic analysis of the Incanut seeds were specified. The analysis software and the separation unit of the mechanical hardware were optimized, and recognition results are reported. In a final step, the practical application of the Incanut seed analysis is carried out on a trial basis and rated on the basis of statistical values.
Aozan: an automated post-sequencing data-processing pipeline.
Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent
2017-07-15
Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Improvement and scale-up of the NASA Redox storage system
NASA Technical Reports Server (NTRS)
Reid, M. A.; Thaller, L. H.
1980-01-01
A preprototype 1.0 kW redox system (2 kW peak) with 11 kWh storage capacity was built and integrated with the NASA/DOE photovoltaic test facility at NASA Lewis. This full function redox system includes four substacks of 39 cells each (1/3 cu ft active area) which are connected hydraulically in parallel and electrically in series. An open circuit voltage cell and a set of rebalance cells are used to continuously monitor the system state of charge and automatically maintain the anode and cathode reactants electrochemically in balance. Recent membrane and electrode advances are summarized and the results of multicell stack tests of 1 cu ft are described.
NASA Technical Reports Server (NTRS)
Djorgovski, George
1993-01-01
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resource.
NASA Technical Reports Server (NTRS)
Djorgovski, Stanislav
1992-01-01
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multi parameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
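One common vessel-enhancement building block for pipelines of this kind is a Hessian-based vesselness filter; a hedged sketch using scikit-image's Frangi filter is shown below, with the scale range and threshold as assumptions rather than the pipeline's automatically chosen parameters.

```python
# Sketch: Frangi vesselness enhancement of an angiographic image or volume,
# followed by a rough binary mask.
import numpy as np
from skimage.filters import frangi

def enhance_vessels(angio, sigmas=range(1, 6), threshold=0.05):
    # black_ridges=False assumes bright vessels on a dark background.
    vesselness = frangi(angio, sigmas=sigmas, black_ridges=False)
    return vesselness, vesselness > threshold   # enhanced image and a coarse mask
```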
Different binarization processes validated against manual counts of fluorescent bacterial cells.
Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W
2016-09-01
State-of-the-art software methods (such as fixed-value approaches or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process, the threshold and background subtraction values were incremented until the number of particles smaller than a typical bacterial cell was less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure, as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
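A minimal sketch of the iterative "biological significance" thresholding idea: the threshold is raised until debris-sized particles are outnumbered by objects of bacterial size. The area bound and step size are assumptions; the paper derives the relevant parameters per image.

```python
# Sketch: raise the binarization threshold until small debris particles are
# outnumbered by cell-sized objects.
import numpy as np
from skimage.measure import label, regionprops

def biologically_significant_threshold(img, min_cell_area=30, step=1):
    for t in range(int(img.min()), int(img.max()), step):
        regions = regionprops(label(img > t))
        small = sum(1 for r in regions if r.area < min_cell_area)
        cells = sum(1 for r in regions if r.area >= min_cell_area)
        if cells > 0 and small < cells:
            return t           # first threshold at which cells dominate debris
    return None
```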
Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems
NASA Astrophysics Data System (ADS)
Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.
2018-05-01
Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimize the performance of such systems. Previous studies already proved that the Gemini Multi-Conjugated AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provided excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can be considered, therefore, as an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to whatever present or new-generation facilities are supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.
Automatic cortical thickness analysis on rodent brain
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Ehlers, Cindy; Crews, Fulton; Niethammer, Marc; Budin, Francois; Paniagua, Beatriz; Sulik, Kathy; Johns, Josephine; Styner, Martin; Oguz, Ipek
2011-03-01
Localized differences in the cortex are among the most useful morphometric traits in human and animal brain studies. There are many tools and methods already developed to automatically measure and analyze cortical thickness for the human brain. However, these tools cannot be directly applied to rodent brains due to the different scales; even adult rodent brains are 50 to 100 times smaller than human brains. This paper describes an algorithm for automatically measuring the cortical thickness of mouse and rat brains. The algorithm consists of three steps: segmentation, thickness measurement, and statistical analysis among experimental groups. The segmentation step provides the neocortex separation from other brain structures and thus is a preprocessing step for the thickness measurement. In the thickness measurement step, the thickness is computed by solving a Laplacian PDE and a transport equation. The Laplacian PDE first creates streamlines as an analogy of cortical columns; the transport equation computes the length of the streamlines. The result is stored as a thickness map over the neocortex surface. For the statistical analysis, it is important to sample thickness at corresponding points. This is achieved by the particle correspondence algorithm, which minimizes entropy between dynamically moving sample points called particles. Since the computational cost of the correspondence algorithm may limit the number of corresponding points, we use thin-plate spline based interpolation to increase the number of corresponding sample points. As a driving application, we measured the thickness difference to assess the effects of adolescent intermittent ethanol exposure that persist into adulthood and performed a t-test between the control and exposed rat groups. We found significantly differing regions in both hemispheres.
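A minimal sketch of the Laplacian step on a 2D toy grid: the harmonic potential between the inner (white matter) and outer (pial) boundaries is obtained by Jacobi relaxation, after which streamlines would be traced along the normalized gradient and their lengths accumulated (the transport step, omitted here). The boundary masks are assumptions.

```python
# Sketch: Laplace's equation between two cortical boundaries by Jacobi relaxation.
import numpy as np

def solve_laplace(inner, outer, cortex, n_iter=2000):
    """inner/outer/cortex: boolean masks (cortex excludes both boundaries and is
    assumed not to touch the array border). Returns the harmonic potential."""
    u = np.zeros(inner.shape)
    u[outer] = 1.0                       # Dirichlet conditions: 0 inside, 1 outside
    for _ in range(n_iter):
        u_new = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(cortex, u_new, u)   # only update voxels inside the cortex
    return u
```

The cortical-thickness value at a point would then be the length of the streamline through that point, integrated along the normalized gradient of the returned potential from the inner to the outer boundary.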
NASA Astrophysics Data System (ADS)
Royer, P.; De Ridder, J.; Vandenbussche, B.; Regibo, S.; Huygen, R.; De Meester, W.; Evans, D. J.; Martinez, J.; Korte-Stapff, M.
2016-07-01
We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for automatic health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behavior. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of all the others, we focus on a transition analysis, exploiting properties of Poincare plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its Kernel Density Estimate. We demonstrate that this allows for a generic and dynamic approach of the soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than the traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data. Not only spacecraft systems but also present-day astronomical observatories can benefit from them.
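A minimal sketch of a kernel-density-based soft limit of the kind described above, using SciPy's Gaussian KDE; the telemetry history is synthetic and the density cut-off is an assumption.

```python
# Sketch: dynamic soft limits for one telemetry parameter from a KDE of its history.
import numpy as np
from scipy.stats import gaussian_kde

history = np.random.default_rng(2).normal(20.0, 0.5, size=5000)   # past housekeeping values
kde = gaussian_kde(history)

grid = np.linspace(history.min() - 5, history.max() + 5, 2000)
density = kde(grid)
support = grid[density > 1e-4 * density.max()]     # region of plausible values
soft_low, soft_high = support.min(), support.max()

new_samples = np.array([19.8, 20.1, 23.7])
outliers = new_samples[(new_samples < soft_low) | (new_samples > soft_high)]
print("flagged samples:", outliers)
```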
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
NASA Astrophysics Data System (ADS)
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatic derivation of similarity relations between music pieces is an integral field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. One possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each against the others.
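A hedged sketch of the modelling-plus-distance idea: each excerpt's feature frames are fitted with a Gaussian mixture and a symmetrized Kullback-Leibler divergence between the mixtures is approximated by Monte Carlo sampling (no closed form exists for GMMs). The feature data and model sizes are stand-ins.

```python
# Sketch: GMM per excerpt and a sampled symmetric KL divergence between models.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(features, n_components=8):
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=0).fit(features)

def symmetric_kl(gmm_a, gmm_b, n_samples=2000):
    xa, _ = gmm_a.sample(n_samples)
    xb, _ = gmm_b.sample(n_samples)
    kl_ab = np.mean(gmm_a.score_samples(xa) - gmm_b.score_samples(xa))
    kl_ba = np.mean(gmm_b.score_samples(xb) - gmm_a.score_samples(xb))
    return kl_ab + kl_ba

rng = np.random.default_rng(3)
piece_a = fit_gmm(rng.normal(size=(1000, 13)))             # stand-ins for MFCC frames
piece_b = fit_gmm(rng.normal(loc=0.5, size=(1000, 13)))
print("model distance:", symmetric_kl(piece_a, piece_b))
```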
United States Air Force Statistical Digest, Jan 1949-Jun 1950, Fifth Edition
1951-04-25
PERSONNEL - MILITARY AND CIVILIAN: the data presented in this part of the USAF Statistical Digest are based on audited records; the remainder of the entry is tabular quarterly personnel-strength figures and course titles that did not survive scanning.
Timber resources of western Oregon—highlights and statistics.
Donald R. Gedney
1982-01-01
This report summarizes and interprets the results of a timber resource inventory of western Oregon made between 1973 and 1976. Detailed tables give land and forest area, timber volume, growth, and mortality for western Oregon and for southwest Oregon, west-central Oregon, and northwest Oregon.
Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.
ERIC Educational Resources Information Center
Secondary School Mathematics Curriculum Improvement Study, New York, NY.
The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…
Selective Mutism: Definition, Issues, and Treatment.
ERIC Educational Resources Information Center
Brigham, Frederick J.; Cole, Jane E.
This paper reviews definitions and issues in selective mutism in children and summarizes results of interventions conducted and published since 1982. Definitions and diagnostic criteria of the American Psychiatric Association's "Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (1994)" and the World Health Organization's…
Historical changes in large river fish assemblages of the Americas: A synthesis
The objective of this synthesis is to summarize patterns in historical changes in the fish assemblages of selected large American rivers, to document causes for those changes, and to suggest rehabilitation measures. Although not a statistically representative sample of large riv...
Timber resource statistics for non-Federal forest land in west-central Oregon.
Donald R. Gedney; Patricia M. Bassett; Mary A. Mei
1987-01-01
This report summarizes a 1985-86 timber resource inventory of the non-Federal forest land in the four counties (Benton, Lane, Lincoln, and Linn) in west-central Oregon. Detailed tables of forest area, timber volume, growth, mortality, and harvest are presented.
Focus on Statistical Physics Modeling in Economics and Finance
NASA Astrophysics Data System (ADS)
Mantegna, Rosario N.; Kertész, János
2011-02-01
This focus issue presents a collection of papers on recent results in statistical physics modeling in economics and finance, commonly known as econophysics. We touch briefly on the history of this relatively new multi-disciplinary field, summarize the motivations behind its emergence and try to characterize its specific features. We point out some research aspects that must be improved and briefly discuss the topics the research field is moving toward. Finally, we give a short account of the papers collected in this issue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rojas, Joseph Maurice
We summarize the contributions of the Texas A&M University Group to the project (DE-FG02-09ER25949/DE-SC0002505: Topology for Statistical Modeling of Petascale Data - an ASCR-funded collaboration between Sandia National Labs, Texas A&M University, and the University of Utah) during 6/9/2011 -- 2/27/2013.
Aboraya, Ahmed
2010-11-01
The publication of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) is anticipated in May 2013 with many new additions and changes. In this article, the author summarizes the phases of psychiatric classification from the turn of the 20th century until today. Psychiatry 2010 offers a DSM-V Scientific Forum and invites readers to submit comments, recommendations, and articles to Psychiatry 2010 and DSM-V Task Force.
Statistical methods and computing for big data.
Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun
2016-01-01
Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
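A minimal sketch of the online-updating idea for stream data: a logistic regression is updated chunk by chunk with stochastic gradient descent so the full dataset never resides in memory. The chunk generator is a synthetic stand-in for airline-delay records, and the loss name assumes a recent scikit-learn release.

```python
# Sketch: chunk-wise (online) fitting of a logistic regression on stream data.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")       # logistic regression fitted via SGD
rng = np.random.default_rng(4)

def next_chunk(n=10_000, p=5):
    """Synthetic stand-in for one chunk of flight records."""
    X = rng.normal(size=(n, p))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # delayed or not
    return X, y

for _ in range(100):                          # 100 chunks ~ one pass over 1M rows
    X, y = next_chunk()
    model.partial_fit(X, y, classes=[0, 1])   # update coefficients without reloading old data
```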
Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin
2017-04-01
To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for only a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments, but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time-frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
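A minimal sketch of the final classification stage, applying scikit-learn's linear and quadratic discriminant classifiers to an HRV feature matrix; the synthetic features and toy sleep-stage labels below are stand-ins for the time-domain, nonlinear and time-frequency features described above.

```python
# Sketch: LD and QD classification of HRV feature vectors with cross-validation.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 20))        # 600 five-minute HRV segments, 20 features each
y = rng.integers(0, 3, size=600)      # toy stage labels: 0 = wake, 1 = NREM, 2 = REM

for name, clf in [("LD", LinearDiscriminantAnalysis()),
                  ("QD", QuadraticDiscriminantAnalysis())]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} accuracy: {score:.2f}")
```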
NASA Astrophysics Data System (ADS)
Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin
2014-05-01
Content-based image retrieval is an automatic process of retrieving images according to image visual contents instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research in the past two decades. Among the many existing approaches is a cluster-based approach in which clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of works in this area. The paper first summarizes the existing work reported in the literature and then presents the authors' own investigations in this field. The paper intends to highlight not only achievements made by recent research but also challenges and difficulties still remaining in this area.
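A minimal sketch of the cluster-based retrieval idea in bag-of-visual-words form: local descriptors are quantized with k-means into a visual vocabulary, each image becomes a histogram of cluster assignments, and images are ranked by histogram distance. The random descriptors stand in for, e.g., SIFT vectors.

```python
# Sketch: cluster local descriptors into a visual vocabulary and rank images
# by the distance between their bag-of-visual-words histograms.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(6)
database_descriptors = [rng.normal(size=(300, 64)) for _ in range(50)]  # 50 database images

vocab = MiniBatchKMeans(n_clusters=100, random_state=0)
vocab.fit(np.vstack(database_descriptors))

def bow_histogram(descriptors):
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

db_hists = np.array([bow_histogram(d) for d in database_descriptors])
query_hist = bow_histogram(rng.normal(size=(280, 64)))
ranking = np.argsort(np.linalg.norm(db_hists - query_hist, axis=1))  # most similar first
```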
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications for which automatic data processing can exceed human visual cognition, and describes the basic processing techniques for attaining these results. The examples are drawn from helicopter obstacle avoidance, object detection in surveillance applications, object recognition at long range, multi-object tracking, and object re-identification in range image sequences. Processing times and recognition performance are summarized. The techniques exploit the bijective continuity of the imaging process as well as its independence from object reflectivity, emissivity, and illumination. This allows precise formulation of the probability distributions involved in figure-ground segmentation, feature-based object classification, and model-based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
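A toy illustration, far simpler than the paper's probabilistic models, of why range imagery eases figure-ground segmentation: depth discontinuities separate an object from its background directly, independent of reflectivity or illumination. The synthetic range image below is an assumption.

```python
# Toy figure-ground separation from a synthetic range image.
import numpy as np

range_img = np.full((64, 64), 50.0)   # background at 50 m
range_img[20:40, 25:45] = 12.0        # object at 12 m

# Mark pixels whose depth jumps by more than 1 m from a neighbor.
jump = np.zeros_like(range_img, dtype=bool)
jump[:, 1:] |= np.abs(np.diff(range_img, axis=1)) > 1.0
jump[1:, :] |= np.abs(np.diff(range_img, axis=0)) > 1.0

# Everything closer than the median depth is labeled "figure" here.
figure_mask = range_img < np.median(range_img)
print(jump.sum(), figure_mask.sum())  # boundary pixels, object pixels
```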
Research on Key Technologies of Cloud Computing
NASA Astrophysics Data System (ADS)
Zhang, Shufen; Yan, Hongcan; Chen, Xuebin
With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services on demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious operational details and lets them focus on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services, and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas, and the telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition yet. This paper describes the three main service forms of cloud computing (SaaS, PaaS, and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies, summarizes the basic characteristics of cloud computing, and focuses on key technologies such as data storage, data management, virtualization, and the programming model.
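As a concrete hint of the programming-model layer mentioned above, here is a minimal MapReduce-style word count; the abstract does not name a specific framework, so this is only a sketch of the style popularized by systems such as Hadoop, which distribute the map and reduce phases across the resource pool.

```python
# Minimal MapReduce-style word count (single-machine illustration only).
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1      # emit (key, value) pairs

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:              # group-and-sum per key
        counts[word] += n
    return dict(counts)

docs = ["cloud computing scales", "computing as a utility"]
print(reduce_phase(map_phase(docs)))
```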
Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T.; Pereira, Carol; Rosenkranz, Susan L.; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu (Jeanne); Wang, Rui; Lok, Judith
2017-01-01
Abstract The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analysis and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles of the ARLG SDMC, as well as its innovative contributions to the design, monitoring, and analysis of clinical trials and diagnostic studies. PMID:28350899
Formative evaluation of a patient-specific clinical knowledge summarization tool
Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R.; Weir, Charlene R.
2015-01-01
Objective To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians find answers to their clinical questions; and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality, compared with standard search of UpToDate and PubMed. Materials and methods Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. Results The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high-quality randomized controlled trials and systematic reviews) and UpToDate. Two-thirds of the study participants completed 15 of the 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physicians' perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). Conclusions The CKS prototype was well accepted by physicians in terms of both usability and usefulness. Physicians perceived better decision quality with the CKS prototype than with standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and the small sample size, conclusions regarding efficiency and efficacy are exploratory. PMID:26612774
Formative evaluation of a patient-specific clinical knowledge summarization tool.
Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R; Weir, Charlene R
2016-02-01
To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians find answers to their clinical questions; and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality, compared with standard search of UpToDate and PubMed. Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high-quality randomized controlled trials and systematic reviews) and UpToDate. Two-thirds of the study participants completed 15 of the 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physicians' perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). The CKS prototype was well accepted by physicians in terms of both usability and usefulness. Physicians perceived better decision quality with the CKS prototype than with standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and the small sample size, conclusions regarding efficiency and efficacy are exploratory. Published by Elsevier Ireland Ltd.
Heterogeneity image patch index and its application to consumer video summarization.
Dang, Chinh T; Radha, Hayder
2014-06-01
Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. A proposed patch-based image dissimilarity measure is then used to create an affinity matrix of these candidates, and finally a set of key frames is extracted from the affinity matrix using a min-max based algorithm. For video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and requires neither semantic information nor complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. The HIP approach is shown to outperform other leading methods while maintaining low complexity.
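A loose sketch of an entropy-over-patches curve in the spirit of the HIP index; the authors' exact formulation differs. Each frame is split into patches, patches are binned by mean intensity, and the Shannon entropy of the bin histogram serves as a per-frame heterogeneity score, giving a curve from which key-frame candidates can be drawn.

```python
# HIP-like heterogeneity curve (illustrative, not the published definition).
import numpy as np

def patch_entropy(frame, patch=8, bins=16):
    h, w = frame.shape
    means = [frame[i:i+patch, j:j+patch].mean()
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    hist, _ = np.histogram(means, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())  # Shannon entropy over patch bins

# Hypothetical grayscale video; candidates sit at curve extrema.
rng = np.random.default_rng(2)
video = rng.integers(0, 256, size=(30, 64, 64)).astype(float)
hip_curve = np.array([patch_entropy(f) for f in video])
print(np.argsort(hip_curve)[-3:])  # three most heterogeneous frames
```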
Beyond Statistics: The Economic Content of Risk Scores
Einav, Liran; Finkelstein, Amy; Kluender, Raymond
2016-01-01
“Big data” and statistical techniques to score potential transactions have transformed insurance and credit markets. In this paper, we observe that these widely used statistical scores summarize a much richer heterogeneity, and may be endogenous to the context in which they are applied. We demonstrate this point empirically using data from Medicare Part D, showing that risk scores confound underlying health and the endogenous spending response to insurance. We then illustrate theoretically that when individuals have heterogeneous behavioral responses to contracts, strategic incentives for cream skimming can still exist, even in the presence of “perfect” risk scoring under a given contract. PMID:27429712
1999 Customer Satisfaction Survey Report: How Do We Measure Up?
ERIC Educational Resources Information Center
Salvucci, Sameena; Parker, Albert C. E.; Cash, R. William; Thurgood, Lori
2001-01-01
Summarizes results of a 1999 survey regarding the satisfaction of various groups with publications, databases, and services of the National Center for Education Statistics. Groups studied were federal, state, and local policymakers; academic researchers; and journalists. Compared 1999 results with 1997 results. (Author/SLD)
USDA-ARS?s Scientific Manuscript database
We describe new methods for characterizing gene tree discordance in phylogenomic datasets, which screen for deviations from neutral expectations, summarize variation in statistical support among gene trees, and allow comparison of the patterns of discordance induced by various analysis choices. Usin...
Analysis of the 1996 Wisconsin forest statistics by habitat type.
John Kotar; Joseph A. Kovach; Gary Brand
1999-01-01
The fifth inventory of Wisconsin's forests is presented from the perspective of habitat type as a classification tool. Habitat type classifies forests based on the species composition of the understory plant community. Various forest attributes are summarized by habitat type and management implications are discussed.
Summarizing Monte Carlo Results in Methodological Research.
ERIC Educational Resources Information Center
Harwell, Michael R.
Monte Carlo studies of statistical tests are prominently featured in the methodological research literature. Unfortunately, the information from these studies does not appear to have significantly influenced methodological practice in educational and psychological research. One reason is that Monte Carlo studies lack an overarching theory to guide…
ERIC Educational Resources Information Center
Department of Justice, Washington, DC. Office of Juvenile Justice and Delinquency Prevention.
This report summarizes the activities and achievements of the Office of Juvenile Justice and Delinquency Prevention's (OJJDP) Research Division from August 1999 to the present in the areas of research, evaluation, and statistics. It provides new findings on very young offenders; the causes and correlates of delinquency; juvenile transfers to…
DOT National Transportation Integrated Search
2012-04-01
This report summarizes a study that seeks to identify the factors leading to the high crash rate experienced on Louisiana highways. Factors were identified by comparing statistics from the Louisiana Crash Database with those from peer states using th...
A Concurrent Support Course for Intermediate Algebra
ERIC Educational Resources Information Center
Cooper, Cameron I.
2011-01-01
This article summarizes the creation and implementation of a concurrent support class for TRS 92--Intermediate Algebra, a developmental mathematics course at Fort Lewis College in Durango, Colorado. The concurrent course outlined in this article demonstrates a statistically significant increase in student success rates since its inception.…
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized: (1) the generation of random numbers on the IBM 360/44; (2) statistical tests used to check out random number generators; (3) Specht density estimators; and (4) the use of probability density function estimators in analyzing large amounts of data.
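For item (3), a minimal Parzen-window density estimator in the spirit of Specht's approach: the density at a point is the average of Gaussian kernels centered on the observed samples. The bandwidth below is a placeholder assumption.

```python
# Parzen/Specht-style kernel density estimate at a single point.
import numpy as np

def specht_density(x, samples, sigma=0.3):
    """Estimate p(x) as the mean of Gaussian kernels on the samples."""
    z = (x - samples) / sigma
    return np.exp(-0.5 * z ** 2).mean() / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = rng.normal(loc=0.0, scale=1.0, size=1000)
print(specht_density(0.0, data))  # should approach the N(0,1) peak, ~0.399
```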
The Evolution of the Language Laboratory: Changes During Fifteen Years of Operation
ERIC Educational Resources Information Center
Stack, Edward M.
1977-01-01
This article summarizes conditions and changes in language laboratories. Types of laboratories and equipment are listed; laboratory personnel include technicians, librarians and student assistants. Most maintenance was done by institution personnel; student use is outlined. Professional attitudes and equipment statistics are surveyed. (CHK)
In silico prediction of post-translational modifications.
Liu, Chunmei; Li, Hui
2011-01-01
Methods for predicting protein post-translational modifications have been developed extensively. In this chapter, we review major post-translational modification prediction strategies, with a particular focus on statistical and machine learning approaches. We present the workflow of the methods and summarize the advantages and disadvantages of the methods.
ERIC Educational Resources Information Center
Ledford, Bruce R.; Ledford, Suzanne Y.
This study investigated whether grade six students' self-esteem could be affected by the presentation of a selected stimulus below the threshold of conscious awareness via the medium of a specially prepared paper. It also investigated whether any statistically significant differences existed between the effects on self-esteem of a selected…
ERIC Educational Resources Information Center
Rodríguez, Cristóbal
2016-01-01
This study focuses on Texas Borderland students admitted through the Texas Top 10% admissions policy, which assumes that Top 10% students are college ready for any public university and provides Top 10% high school graduates automatic admission to any 4-year public university in Texas. Using descriptive and inferential statistics, results…
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
NASA Astrophysics Data System (ADS)
Rusu-Anghel, S.
2017-01-01
Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not yielded sufficiently precise mathematical models. In this paper, based on a statistical model of the process and on the knowledge of human experts, a fuzzy system was designed for automatic control of the clinkering process.
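A bare-bones fuzzy controller sketch in the general spirit of such systems; the paper's actual variables and rule base for clinkering control are not given here, so the temperatures, fuel-feed ranges, and two rules below are purely illustrative assumptions.

```python
# Minimal fuzzy controller: triangular memberships, two rules, centroid output.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def control(temp):
    # Rule 1: IF temperature is low  THEN fuel feed is high.
    # Rule 2: IF temperature is high THEN fuel feed is low.
    low = tri(temp, 1200, 1300, 1450)    # hypothetical kiln temperatures (deg C)
    high = tri(temp, 1350, 1500, 1600)
    u = np.linspace(0, 100, 201)         # fuel feed, percent
    agg = np.maximum(np.minimum(low, tri(u, 50, 80, 100)),
                     np.minimum(high, tri(u, 0, 20, 50)))
    return (u * agg).sum() / agg.sum()   # centroid defuzzification

print(control(1320.0))  # mostly "low temperature" fires the high-feed rule
```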
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold, yielding enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as the platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
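A simplified sketch of the idea of an evolutionary loop controlling a segmentation threshold. Between-class variance (Otsu's criterion) stands in for the paper's fitness function, which is not specified here, and the loop uses selection and mutation only; the synthetic thermal frame is an assumption.

```python
# Evolutionary threshold search for binary segmentation (illustrative).
import numpy as np

def fitness(threshold, img):
    """Between-class variance of the binary split at `threshold`."""
    fg, bg = img[img >= threshold], img[img < threshold]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

rng = np.random.default_rng(4)
frame = np.concatenate([rng.normal(60, 8, 900),    # background pixels
                        rng.normal(140, 10, 100)]) # hot "crack" pixels

pop = rng.uniform(frame.min(), frame.max(), size=20)  # candidate thresholds
for _ in range(30):                                   # generations
    scores = np.array([fitness(t, frame) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]           # selection
    children = rng.choice(parents, 10) + rng.normal(0, 2.0, 10)  # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(t, frame) for t in pop])]
print(round(float(best), 1))  # should land between the two intensity modes
```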
Tympanometric changes and eustachian tube function in patients with hypothyroidism.
Orhan, Israfil; Palit, Fatih; Aydin, Salih; Soylu, Erkan; Sakallioglu, Oner
2014-05-01
The aim of this study was to investigate tympanometric changes and eustachian tube function (ETF) in patients with hypothyroidism. Automatic ETF tests were performed and tympanometric measurements were evaluated to assess ETF in 40 patients diagnosed with hypothyroidism and a 40-patient euthyroid control group. Levothyroxine sodium treatment was started in the patients with hypothyroidism, and after a euthyroid state was achieved, the tympanometric measurements and automatic ETF tests were repeated. When the two groups (hypothyroid and control) were compared in terms of ETF, statistically significant ET dysfunction was observed in the hypothyroid group (P < 0.01). When the hypothyroid patients were evaluated before and after treatment, the proportion of cases with normal ET function increased from 61.3% before treatment to 78.8% after treatment. Furthermore, pressure and compliance measurements showed statistically significant increases after treatment (P < 0.05). Based on these results, we conclude that hypothyroidism can alter tympanometric measurements and cause ET dysfunction; however, more comprehensive and detailed studies of the effects of hypothyroidism on tympanometric measurements are needed.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and a vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer, which aggregates likelihood values from responding participants and chooses new parameter vectors until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower costs for researchers and funding institutions, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
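A conceptual sketch of the distributed likelihood loop described above: the optimizer sends a parameter vector to each participant's device, each device returns only the log-likelihood of its private data, and the optimizer aggregates. Devices are simulated as closures here, and the normal model is an assumption for illustration.

```python
# MIDDLE-style distributed likelihood estimation (conceptual simulation).
import numpy as np
from scipy.optimize import minimize

def make_device(private_data):
    """Data never leaves the closure; only a likelihood value is returned."""
    def local_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        z = (private_data - mu) / sigma
        return float(-0.5 * np.sum(z ** 2) - private_data.size * np.log(sigma))
    return local_loglik

rng = np.random.default_rng(5)
devices = [make_device(rng.normal(3.0, 1.5, size=50)) for _ in range(10)]

def neg_total_loglik(params):
    # Central aggregation step: sum the values each device reports back.
    return -sum(device(params) for device in devices)

result = minimize(neg_total_loglik, x0=np.array([0.0, 0.0]))
print(result.x)  # estimated [mu, log(sigma)] without ever seeing raw data
```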